Background And Purpose: Efforts to establish support for the reliability of quality indicator data are ongoing. Because most patients receive recommended care, the high prevalence of recommended-care events makes statistical analysis challenging. This article presents a novel statistical approach recently used to estimate inter-rater agreement for the National Database of Nursing Quality Indicators pressure injury risk and prevention data.

Methods: Inter-rater agreement was estimated with prevalence-adjusted kappa values. Data modifications were also made to overcome convergence issues caused by sparse cross-tables.

Results: Cohen's kappa values suggested low reliability despite high levels of agreement between raters.

Conclusion: Prevalence-adjusted kappa values should be reported alongside Cohen's kappa values to evaluate inter-rater agreement when the majority of patients receive recommended care.
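The contrast the abstract describes, high observed agreement yet a low Cohen's kappa, can be sketched for a single 2x2 rater-by-rater table. The counts below are invented for illustration (not the study's data), and the prevalence-adjusted statistic shown is PABAK (prevalence-adjusted, bias-adjusted kappa), one common prevalence-adjusted form; the article's exact method may differ.

```python
# Hypothetical 2x2 agreement table for two raters scoring whether a
# patient received recommended care. Counts are invented: "yes" is
# highly prevalent, as in the scenario the abstract describes.
a, b, c, d = 95, 2, 2, 1  # a = both "yes", b/c = disagreements, d = both "no"
n = a + b + c + d

p_o = (a + d) / n                      # observed agreement
p_yes = ((a + b) / n) * ((a + c) / n)  # chance both rate "yes"
p_no = ((c + d) / n) * ((b + d) / n)   # chance both rate "no"
p_e = p_yes + p_no                     # chance agreement

cohens_kappa = (p_o - p_e) / (1 - p_e)
pabak = 2 * p_o - 1  # prevalence-adjusted, bias-adjusted kappa (2x2 case)

print(f"observed agreement = {p_o:.3f}")   # 0.960
print(f"Cohen's kappa      = {cohens_kappa:.3f}")  # ~0.313
print(f"PABAK              = {pabak:.3f}")  # 0.920
```

With 96% raw agreement, Cohen's kappa is only about 0.31 because near-universal "yes" ratings inflate chance agreement, while PABAK (0.92) tracks the observed agreement, which is why reporting both gives a fuller picture.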


Source: http://dx.doi.org/10.1891/1061-3749.27.2.152


