Summated rating scales are ubiquitous in organizational research, and there are well-delineated guidelines for scale development (e.g., Hinkin, 1998). Nevertheless, less attention has been paid to the explicit selection of response anchors. Constructing survey questions with equal-interval properties (i.e., interval or ratio data) is important if researchers plan to analyze their data using parametric statistics. As such, the primary objectives of the current study were to (a) determine the most common contexts in which summated rating scales are used (e.g., agreement, similarity, frequency, amount, and judgment), (b) determine the most commonly used anchors (e.g., strongly disagree, often, very good), and (c) provide empirical data on the conceptual distance between these anchors. We present the mean and standard deviation of scores for estimates of each anchor and the percentage of distribution overlap between the anchors. Our results provide researchers with data that can guide the selection of verbal anchors with equal-interval properties, thereby reducing measurement error and improving confidence in the results of subsequent analyses. We also conducted multiple empirical studies to examine the consequences of measuring constructs with unequal-interval anchors. A clear pattern in the results is that correlations involving unequal-interval anchors are consistently weaker than correlations involving equal-interval anchors. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
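The abstract reports the percentage of distribution overlap between anchors but does not spell out the computation. As a rough illustration only, the Python sketch below shows one common way to estimate such overlap: fit a normal curve to the ratings of each anchor and numerically approximate the shared area (overlapping coefficient). The function name, the normality assumption, and the example means and standard deviations are hypothetical and are not taken from the article.

import numpy as np
from scipy.stats import norm

def anchor_overlap_pct(mean1, sd1, mean2, sd2, n_points=10001):
    # Shared area under two normal curves fitted to anchor ratings,
    # approximated with a simple Riemann sum over a wide grid.
    lo = min(mean1 - 4 * sd1, mean2 - 4 * sd2)
    hi = max(mean1 + 4 * sd1, mean2 + 4 * sd2)
    x = np.linspace(lo, hi, n_points)
    dx = x[1] - x[0]
    shared = np.minimum(norm.pdf(x, mean1, sd1), norm.pdf(x, mean2, sd2))
    return 100.0 * float(np.sum(shared) * dx)

# Hypothetical anchors: "agree" (M = 71, SD = 10) vs. "strongly agree" (M = 93, SD = 6)
print(round(anchor_overlap_pct(71, 10, 93, 6), 1))

Higher overlap percentages indicate anchors that respondents treat as conceptually close; the article itself may operationalize overlap differently.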


Source: http://dx.doi.org/10.1037/apl0000444
