Meaningful Comparisons With Ordinal-Scale Items
| Authors | |
|---|---|
| Publication date | 2022 |
| Journal | Collabra: Psychology |
| Article number | 38594 |
| Volume | 8 |
| Issue number | 1 |
| Number of pages | 16 |
| Organisations | |
| Abstract | Ordinal-scale items—say, items that assess agreement with a proposition on an ordinal rating scale from strongly disagree to strongly agree—are exceedingly popular in psychological research. A common research question concerns the comparison of response distributions on ordinal-scale items across conditions. In this context, there is often a lingering question of whether metric-level descriptions of the results and parametric tests are appropriate. We consider a different problem, perhaps one that supersedes the parametric-vs-nonparametric issue: When is it appropriate to reduce the comparison of two (ordinal) distributions to the comparison of simple summary statistics (e.g., measures of location)? In this paper, we provide a Bayesian modeling approach to help researchers perform meaningful comparisons of two response distributions and draw appropriate inferences from ordinal-scale items. We develop four statistical models that represent possible relationships between two distributions: an unconstrained model representing a complex, non-ordinal relationship, a nonparametric stochastic-dominance model, a parametric shift model, and a null model representing equivalence in distribution. We show how these models can be compared in light of data with Bayes factors and illustrate their usefulness with two real-world examples. We also provide a freely available web applet for researchers who wish to adopt the approach. |
| Document type | Article |
| Note | With supplementary files |
| Language | English |
| Published at | https://doi.org/10.1525/collabra.38594 |
| Downloads | collabra_2022_8_1_38594 (Final published version) |
| Supplementary materials | |
| Permalink to this page | |
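
The stochastic-dominance model named in the abstract concerns whether one response distribution's cumulative distribution function lies entirely on one side of the other's across the ordinal categories. As a purely descriptive illustration (not the article's Bayesian model comparison, and using simulated data rather than the article's examples), the following Python sketch checks empirical stochastic dominance between two samples of ordinal responses:

```python
import numpy as np

# Hypothetical ordinal responses on a 1-5 agreement scale (simulated data,
# not from the article). Condition B's responses tend toward higher agreement.
rng = np.random.default_rng(1)
cond_a = rng.choice([1, 2, 3, 4, 5], size=200, p=[0.15, 0.25, 0.30, 0.20, 0.10])
cond_b = rng.choice([1, 2, 3, 4, 5], size=200, p=[0.05, 0.15, 0.30, 0.30, 0.20])

def ecdf(responses, categories=(1, 2, 3, 4, 5)):
    """Empirical cumulative proportion of responses at or below each category."""
    responses = np.asarray(responses)
    return np.array([np.mean(responses <= c) for c in categories])

cdf_a = ecdf(cond_a)
cdf_b = ecdf(cond_b)

# Descriptively, B stochastically dominates A if B's CDF lies at or below
# A's CDF at every category, with strict inequality at some category.
dominates = bool(np.all(cdf_b <= cdf_a) and np.any(cdf_b < cdf_a))
print("CDF A:", np.round(cdf_a, 2))
print("CDF B:", np.round(cdf_b, 2))
print("B stochastically dominates A (in sample):", dominates)
```

A sample-level CDF ordering like this is only suggestive; the article's contribution is to weigh the dominance model against the unconstrained, shift, and null models with Bayes factors rather than to read dominance off the raw proportions.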
