Background: Virtual patients (VPs) are increasingly used to train clinical reasoning. To date, however, no validated instruments for evaluating VP design are available.

Aims: We examined the validity of an instrument for assessing learners' perception of VP design.

Methods: Three sources of validity evidence were examined: (i) Content validity was examined on the basis of clinical reasoning theory and review by an international VP expert team. (ii) The response process was explored in think-aloud pilot studies with medical students and in content analyses of the free-text questions accompanying each item of the instrument. (iii) Internal structure was assessed by exploratory factor analysis (EFA), and inter-rater reliability by generalizability analysis.
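
As a minimal sketch of the EFA step described above: the study analysed 2547 student evaluations, but the data, item count (12), and variable names below are illustrative placeholders, not the paper's dataset or instrument.

```python
# Illustrative EFA on placeholder Likert-style ratings (1-5); the real study
# used 2547 student evaluations of 78 VPs. The 12-item count is an assumption.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(2547, 12)).astype(float)  # placeholder data

# Extract three factors, matching the three-factor model the paper reports.
efa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
efa.fit(ratings)

# components_ has shape (factors x items); transpose to the usual
# items-by-factors loading matrix for inspection.
loadings = efa.components_.T
print(np.round(loadings, 2))
```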

Results: Content validity was reasonably supported by the theoretical foundation and the VP expert team. The think-aloud studies and the analysis of free-text comments supported the validity of the instrument. In the EFA, using 2547 student evaluations of a total of 78 VPs, a three-factor model showed a reasonable fit to the data. At least 200 student responses are needed to obtain a reliable evaluation of a VP on all three factors.
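
The paper's ~200-response figure comes from a generalizability analysis. As a rough illustration (not the paper's method) of why VP-level scores need many responses, the sketch below uses the Spearman-Brown prophecy formula instead, with a made-up single-response reliability.

```python
# Rough illustration only: Spearman-Brown, not the paper's generalizability
# analysis. The single-response reliability (0.02) is a hypothetical value.
def mean_rating_reliability(r_single: float, k: int) -> float:
    """Reliability of the mean of k ratings (Spearman-Brown prophecy)."""
    return k * r_single / (1 + (k - 1) * r_single)

def responses_needed(r_single: float, target: float) -> int:
    """Smallest k whose mean-rating reliability reaches the target."""
    k = 1
    while mean_rating_reliability(r_single, k) < target:
        k += 1
    return k

# With a hypothetical single-response reliability of 0.02, reaching a
# VP-level reliability of 0.80 takes on the order of 200 responses.
print(responses_needed(0.02, 0.80))  # -> 196
```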

Conclusion: The instrument has the potential to provide valid information about VP design, provided that many responses per VP are available.

Source: http://dx.doi.org/10.3109/0142159X.2014.970622
