Rationale: The ability to acquire the best evidence efficiently is an important competency for busy health-care professionals who must make decisions quickly.
Aims and Objectives: We aimed to develop and validate a scale for measuring evidence-searching capability.
Methods: We first developed a scale for measuring evidence-searching capability using the modified Delphi technique. Seven experts rated a draft 33-item scale on a 5-point scale. Any item rated less than 3 by any expert was removed, and the remaining items were modified or merged in light of the experts' feedback. Consensus on the scale, establishing its content validity, was reached when every item was rated greater than or equal to 3 by all experts with an interquartile range of less than or equal to 1. We then performed a pilot test and a formal test, evaluating inter-rater, intra-rater, and internal reliability by calculating the intraclass correlation coefficient (ICC), the kappa correlation coefficient, and Cronbach's α.
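The item-retention and consensus rule described above can be sketched as a short procedure. This is an illustrative reconstruction, not the authors' code; the function names and data layout are assumptions.

```python
def interquartile_range(ratings):
    """IQR of a small sample, using linear interpolation between
    closest ranks for the quartile positions."""
    s = sorted(ratings)
    n = len(s)

    def percentile(p):
        k = (n - 1) * p
        f = int(k)
        c = min(f + 1, n - 1)
        return s[f] + (s[c] - s[f]) * (k - f)

    return percentile(0.75) - percentile(0.25)


def delphi_round(item_ratings):
    """Apply one round of the retention rule from the Methods.

    item_ratings: {item_name: [rating from each expert, 1-5 scale]}.
    An item is dropped if ANY expert rates it below 3; consensus is
    reached when every retained item also has an IQR <= 1.
    Returns (retained_items, consensus_reached)."""
    retained = {name: r for name, r in item_ratings.items() if min(r) >= 3}
    consensus = all(interquartile_range(r) <= 1 for r in retained.values())
    return retained, consensus
```

In practice the merging and rewording of items between rounds is a human judgment step; only the numeric retention and consensus checks are automated here.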
Results: We developed a scale consisting of 15 items and a global rating score, with an average scale-content validity index of 0.98. In the pilot test, the ICCs for inter-rater and intra-rater reliability were 0.91 and 1, respectively, and Cronbach's α was 0.90. In the formal test, the ICC for inter-rater reliability ranged from 0.61 to 1 and the weighted kappa correlation coefficient ranged from 0.27 to 1, with a Cronbach's α of 0.97.
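For readers unfamiliar with the internal-consistency statistic reported above, the standard formula is α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ the variance of item i, and σ²ₜ the variance of respondents' total scores. A minimal sketch (not the study's analysis code; the data layout is an assumption):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents x items score matrix.

    scores: list of rows, one per respondent, each a list of item scores.
    Uses sample variance throughout (denominator n - 1)."""
    k = len(scores[0])  # number of items

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    totals = [sum(row) for row in scores]
    return (k / (k - 1)) * (1 - sum(item_vars) / var(totals))
```

When items move in perfect lockstep across respondents, the item variances sum to less than the total-score variance and α approaches 1, matching the high internal consistency (0.90 and 0.97) reported here.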
Conclusions: This study is the first to develop and validate a scale for measuring evidence-searching skills through a systematic approach. The scale comprises 15 items and a global rating score and can be easily used for the objective assessment of knowledge-acquisition ability.
DOI: http://dx.doi.org/10.1111/jep.13153