A rater's overall impression of a ratee's essay (or other assessed work) can influence ratings across multiple criteria, yielding excessively similar ratings (the halo effect). However, existing analytic methods cannot determine whether similar ratings stem from homogeneous criteria (true halo) or from rater bias (illusory halo). We therefore introduce and test a mixture Rasch facets model for halo effects (MRFM-H) that distinguishes true from illusory halo effects and classifies raters as normal or halo raters. In a simulation study, the MRFM-H accurately identified halo raters when each rater assessed a sufficient number of ratees, and a larger number of rating criteria further increased classification accuracy. A simpler model that ignored halo effects produced biased parameter estimates for the evaluation criteria and for rater severity, but not for ratee assessments. Applying the MRFM-H to three empirical datasets showed that (a) experienced raters were subject to illusory halo effects, (b) illusory halo effects were less likely when more criteria were rated, and (c) more informative survey responses were more clearly distinguishable from less informative ones.
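For context, the MRFM-H builds on the many-facet Rasch model, in which the log-odds of receiving rating category k rather than k-1 is decomposed into ratee ability, criterion difficulty, rater severity, and a category threshold. The sketch below shows only this standard baseline form; the mixture extension with class-specific parameters for normal versus halo raters follows the paper's own specification, which is not reproduced here, so the notation is illustrative rather than the authors' exact parameterization.

$$\log\frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \beta_i - \lambda_j - \tau_k$$

Here $\theta_n$ is the ability of ratee $n$, $\beta_i$ the difficulty of criterion $i$, $\lambda_j$ the severity of rater $j$, and $\tau_k$ the threshold for category $k$. A mixture version of this model assigns each rater to a latent class (e.g., normal versus halo) with its own parameter structure, which is what allows criterion homogeneity (true halo) to be separated from rater-level response bias (illusory halo).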
DOI: http://dx.doi.org/10.3758/s13428-021-01721-3