Reliability of scores from psychological or educational assessments provides important information regarding the precision of measurement. The reliability of scores is, however, population-dependent and may vary across groups. In item response theory, this population dependence can be attributed to differential item functioning or to differences in the latent distributions between groups, and it needs to be accounted for when estimating the reliability of scores for different groups. Here, we introduce group-specific and overall reliability coefficients for sum scores and maximum likelihood ability estimates defined by a multiple-group item response theory model. We derive confidence intervals using asymptotic theory and evaluate the empirical properties of the estimators and the confidence intervals in a simulation study. The results show that the estimators are largely unbiased and that the confidence intervals are accurate with moderately large sample sizes. We exemplify the approach with the Montreal Cognitive Assessment (MoCA) in two groups defined by education level and give recommendations for applied work.
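The abstract gives no formulas, so the following is only an illustrative sketch of the general idea: under a multiple-group item response model, the marginal reliability of the sum score in a given group is the ratio of true-score variance to total variance, Var(E[X|θ]) / (Var(E[X|θ]) + E[Var(X|θ)]), integrated over that group's latent distribution. The sketch below assumes a two-parameter logistic (2PL) model and a normal latent distribution per group; the function name and parameters are hypothetical, not taken from the paper, and the paper's actual coefficients and confidence intervals are derived more generally.

```python
import numpy as np

def sum_score_reliability(a, b, mu=0.0, sigma=1.0, n_nodes=201):
    """Marginal reliability of the sum score under a 2PL model for one
    group, integrating over a N(mu, sigma^2) latent distribution.

    a, b : arrays of item discriminations and difficulties
    Returns rho = Var(true score) / (Var(true score) + E[error variance]).
    """
    # Quadrature grid over the group's latent distribution
    theta = np.linspace(mu - 6 * sigma, mu + 6 * sigma, n_nodes)
    w = np.exp(-0.5 * ((theta - mu) / sigma) ** 2)
    w /= w.sum()

    # Item response probabilities P_ij(theta), shape (items, nodes)
    p = 1.0 / (1.0 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))

    true_score = p.sum(axis=0)              # E[X | theta]
    err_var = (p * (1.0 - p)).sum(axis=0)   # Var[X | theta] (local independence)

    ts_mean = (w * true_score).sum()
    var_true = (w * (true_score - ts_mean) ** 2).sum()
    e_err = (w * err_var).sum()
    return var_true / (var_true + e_err)
```

Because the latent distribution enters the integral, two groups sharing the same item parameters (no DIF) can still differ in reliability: a group with a narrower latent distribution has less true-score variance relative to error variance, and hence a lower sum-score reliability.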

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9313586
DOI: http://dx.doi.org/10.1111/bmsp.12269
