Objective: The quality of clinical care is often assessed by retrospective examination of case-notes (charts, medical records). We aimed to determine the inter-rater reliability of such case-note audit.

Methods: We conducted a systematic review of the inter-rater reliability of case-note audit. Analysis was restricted to 26 papers reporting comparisons of two or three raters making independent judgements about the quality of care.

Results: Sixty-six separate comparisons were possible, since some papers reported more than one measurement of reliability. Mean kappa values ranged from 0.32 to 0.70, although these values may be inflated by publication bias. Measured reliabilities were higher for case-note reviews based on explicit, as opposed to implicit, criteria and for reviews that focused on outcome (including adverse effects) rather than process errors. We found an association between kappa and the prevalence of errors (poor quality care), suggesting that alternatives such as tetrachoric and polychoric correlation coefficients should be considered to assess inter-rater reliability.
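
To see why kappa is sensitive to prevalence, consider the following sketch (our illustration, not part of the original study): Cohen's kappa is computed for two hypothetical 2x2 rater-agreement tables that have identical observed agreement (80%) but different prevalence of errors.

```python
# Illustration (not from the paper): Cohen's kappa for two raters making a
# binary judgement (error / no error), computed from a 2x2 agreement table.
# Both hypothetical tables below show 80% observed agreement, yet kappa
# drops sharply when the prevalence of errors is skewed.

def cohens_kappa(a, b, c, d):
    """Kappa from a 2x2 table: a, d = agreement cells; b, c = disagreement."""
    n = a + b + c + d
    p_o = (a + d) / n                                      # observed agreement
    p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Balanced prevalence: 50% of cases judged poor quality by each rater.
print(cohens_kappa(40, 10, 10, 40))   # kappa = 0.60
# Skewed prevalence: ~85% judged poor quality, same 80% observed agreement.
print(cohens_kappa(75, 10, 10, 5))    # kappa ~= 0.22
```

With observed agreement held fixed, the skewed-prevalence table yields a much lower kappa, because expected chance agreement rises when one category dominates.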

Conclusions: Comparative studies should take into account the relationship between kappa and the prevalence of the events being measured.
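
As a companion to the kappa sketch above, the tetrachoric correlation suggested in the Results can be approximated for a 2x2 table using Pearson's classical cos-pi formula. This is our illustration under that approximation, not the authors' method, and it breaks down when a disagreement cell is zero.

```python
import math

def tetrachoric_cos_pi(a, b, c, d):
    """Pearson's cos-pi approximation to the tetrachoric correlation for a
    2x2 table (a, d = agreement cells; b, c = disagreement cells).
    Assumes b and c are both nonzero."""
    return math.cos(math.pi / (1 + math.sqrt((a * d) / (b * c))))

# Same hypothetical tables as in the kappa sketch: the drop under skewed
# prevalence (~0.81 to ~0.48) is smaller than kappa's (0.60 to ~0.22).
print(tetrachoric_cos_pi(40, 10, 10, 40))  # ~0.81
print(tetrachoric_cos_pi(75, 10, 10, 5))   # ~0.48
```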


Source: http://dx.doi.org/10.1258/135581907781543012

