Background and Purpose: Efforts to establish support for the reliability of quality indicator data are ongoing. Because most patients receive recommended care, event prevalence is high, which makes statistical analysis of rater agreement challenging. This article presents a novel statistical approach recently used to estimate inter-rater agreement for National Database of Nursing Quality Indicators pressure injury risk and prevention data.
Methods: Inter-rater agreement was estimated with prevalence-adjusted kappa values. Data modifications were also made to overcome convergence issues caused by sparse cross-tables.
Results: Cohen's kappa values suggested low reliability despite high levels of agreement between raters.
Conclusion: Prevalence-adjusted kappa values should be presented alongside Cohen's kappa values to evaluate inter-rater agreement when the majority of patients receive recommended care.
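The kappa paradox described above can be illustrated numerically. The sketch below, using hypothetical counts (not data from the article), computes Cohen's kappa and the prevalence-adjusted, bias-adjusted kappa (PABAK, which for a 2x2 table reduces to 2 * observed agreement - 1) for two raters who agree on nearly every patient:

```python
def kappa_stats(a, b, c, d):
    """Cohen's kappa and prevalence-adjusted kappa (PABAK) for a 2x2 table.

    a: both raters 'yes'; b: rater 1 yes, rater 2 no;
    c: rater 1 no, rater 2 yes; d: both raters 'no'.
    """
    n = a + b + c + d
    po = (a + d) / n  # observed agreement
    # Chance agreement from the marginal totals of each rater
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    cohen = (po - pe) / (1 - pe)
    pabak = 2 * po - 1  # prevalence-adjusted, bias-adjusted kappa
    return cohen, pabak

# Hypothetical counts: nearly all patients receive recommended care,
# so the 'yes-yes' cell dominates the cross-table.
cohen, pabak = kappa_stats(a=97, b=1, c=1, d=1)
print(round(cohen, 2), round(pabak, 2))  # -> 0.49 0.96
```

Even with 98% observed agreement, Cohen's kappa is only about 0.49 because chance agreement is inflated by the skewed marginals, while PABAK reflects the high raw agreement, which is why the article recommends reporting both.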
DOI: http://dx.doi.org/10.1891/1061-3749.27.2.152