The Script Concordance (SC) test is designed to measure the organization of knowledge that allows interpretation of data in clinical reasoning. A distinctive feature of the test is that its answer keys use an aggregate scoring method based on the answers given by a panel of experts. Previous studies have shown that the SC test has good construct validity. This study, conducted in urology, explores (1) the stability of the test's construct validity across two different linguistic and learning environments and (2) the effect of using expert panels drawn from different environments. An 80-item SC test was administered to participants from a French and a Canadian university. Two levels of experience were tested: 25 residents in urology (11 from the French university and 14 from the Canadian university) and 23 students (15 from the French faculty and eight from the Canadian faculty). Reliability was assessed with Cronbach's alpha coefficient, and scores between groups were compared by analysis of variance. The reliability coefficient of the 80-item test was 0.794 for the French participants and 0.795 for the Canadian participants. Scores increased with clinical experience in urology at both sites, and candidates obtained higher scores when scoring was done with the answer key provided by experts from their own country. These data support the stability of the construct validity of the tool across different learning environments.
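The aggregate scoring and reliability analysis mentioned above can be made concrete with a short sketch. The Python snippet below is a minimal illustration, assuming the common SC convention in which a candidate's answer to an item earns credit proportional to the number of panel experts who chose that answer, normalized so the modal expert answer is worth one point; the paper's exact weighting, panel composition, and item format may differ. The function names (sc_item_score, cronbach_alpha) and the toy data are hypothetical.

```python
import numpy as np

def sc_item_score(candidate_answer, expert_answers):
    """Aggregate scoring for one SC item (assumed scheme): credit equals the
    number of experts who chose the candidate's answer, divided by the count
    of the modal expert answer, so scores fall between 0 and 1."""
    values, counts = np.unique(expert_answers, return_counts=True)
    tally = dict(zip(values, counts))
    return tally.get(candidate_answer, 0) / counts.max()

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_candidates x n_items) score matrix."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Toy example: 3 candidates, 4 items, Likert-style answers from -2 to +2.
experts = [[-1, -1, 0, -1, 1],    # panel answers for item 1
           [ 0,  0, 0,  1, -1],   # item 2
           [ 2,  1, 2,  2,  2],   # item 3
           [-2, -1, -2, -2, 0]]   # item 4
candidates = [[-1, 0, 2, -2],
              [ 0, 1, 1, -1],
              [ 1, 0, 2, -2]]
scores = [[sc_item_score(a, experts[j]) for j, a in enumerate(row)]
          for row in candidates]
print(np.round(scores, 2), round(cronbach_alpha(scores), 3))
```

Scored this way, each item contributes a value between 0 and 1, and Cronbach's alpha computed on the candidate-by-item matrix parallels the reliability analysis reported in the abstract; rescoring the same responses against a different expert panel's answer key would illustrate the panel effect the study examines.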

Source: http://dx.doi.org/10.1080/0142159021000012599

