The purpose of this article is to summarize the methods and findings of three different approaches to examining the reliability and validity of data from the Lasater Clinical Judgment Rubric (LCJR) used with human patient simulation. The first study, by Adamson, assessed the interrater reliability of data produced with the LCJR using the intraclass correlation, ICC(2,1); interrater reliability was calculated to be 0.889. The second study, by Gubrud-Howe, used a percent agreement strategy for assessing interrater reliability, with results ranging from 92% to 96%. The third study, by Sideras, used level of agreement for its reliability analyses, with results ranging from 57% to 100%. Findings from each of these studies provided evidence supporting the validity of the LCJR for assessing clinical judgment during simulated patient care scenarios. The article provides extensive information about the psychometrics and appropriate use of the LCJR and concludes with recommendations for further psychometric assessment and use of the instrument.
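As an illustration of the reliability statistics named above, the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single rater) and simple percent agreement for two raters scoring the same simulated scenarios. This is not the authors' code: the rater scores, the five-student sample, and the function names are invented for the example.

    # Minimal sketch of the two reliability statistics mentioned in the abstract.
    # All numbers below are hypothetical and for illustration only.
    import numpy as np

    def icc_2_1(ratings: np.ndarray) -> float:
        """ICC(2,1): two-way random effects, absolute agreement, single rater.

        `ratings` is an (n_subjects x k_raters) matrix of scores.
        """
        n, k = ratings.shape
        grand_mean = ratings.mean()
        row_means = ratings.mean(axis=1)   # per-subject means
        col_means = ratings.mean(axis=0)   # per-rater means

        # Two-way ANOVA mean squares (balanced design)
        ss_rows = k * ((row_means - grand_mean) ** 2).sum()
        ss_cols = n * ((col_means - grand_mean) ** 2).sum()
        ss_total = ((ratings - grand_mean) ** 2).sum()
        ss_error = ss_total - ss_rows - ss_cols

        ms_rows = ss_rows / (n - 1)
        ms_cols = ss_cols / (k - 1)
        ms_error = ss_error / ((n - 1) * (k - 1))

        # Shrout & Fleiss ICC(2,1)
        return (ms_rows - ms_error) / (
            ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
        )

    def percent_agreement(a: np.ndarray, b: np.ndarray) -> float:
        """Proportion of items on which two raters assign the identical score."""
        return float((a == b).mean())

    # Hypothetical total rubric scores from two raters observing five students.
    rater_a = np.array([32, 28, 40, 35, 30])
    rater_b = np.array([31, 28, 39, 36, 30])
    scores = np.column_stack([rater_a, rater_b])

    print(f"ICC(2,1):          {icc_2_1(scores):.3f}")
    print(f"Percent agreement: {percent_agreement(rater_a, rater_b):.0%}")

Note that ICC(2,1) credits raters for closeness of scores even when they do not match exactly, whereas percent agreement counts only identical scores, which is one reason the three studies report different-looking reliability figures.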

Source
http://dx.doi.org/10.3928/01484834-20111130-03
