Objective: Otoscopy is a key clinical examination used by multiple healthcare providers but training and testing of otoscopy skills remain largely uninvestigated. Simulator-based assessment of otoscopy skills exists, but evidence on its validity is scarce. In this study, we explored automated assessment and performance metrics of an otoscopy simulator through collection of validity evidence according to Messick's framework.

Methods: Novices and experienced otoscopists completed a test program on the Earsi otoscopy simulator. Automated assessment of diagnostic ability and performance was compared with manual ratings of technical skills. Reliability of assessment was evaluated using generalizability theory. Linear mixed models and correlation analysis were used to compare automated and manual assessments. Finally, we used the contrasting groups method to define a pass/fail level for the automated score.

Results: A total of 12 novices and 12 experienced otoscopists completed the study. We found an overall G-coefficient of .69 for automated assessment. The experienced otoscopists achieved a significantly higher mean automated score than the novices (59.9%, 95% CI [57.3%-62.6%] vs. 44.6%, 95% CI [41.9%-47.2%]; P < .001). For the manual assessment of technical skills, there was no significant difference between groups, nor did the automated score correlate with the manually rated score (Pearson's r = .20, P = .601). We established a pass/fail standard of 49.3% for the simulator's automated score.
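The contrasting groups standard-setting step can be sketched in code: assuming each group's automated scores are approximately normally distributed, the cutoff is taken where the novice and experienced score distributions intersect. The following is a minimal illustration of that idea; the standard deviations used in the example call are hypothetical placeholders, not the study's raw data:

```python
import math

def contrasting_groups_cutoff(m1, s1, m2, s2):
    """Pass/fail cutoff via the contrasting groups method, modelled as the
    intersection of two normal densities (novices: m1, s1; experts: m2, s2).

    Solving pdf1(x) == pdf2(x) in log form yields a quadratic in x;
    the root lying between the two group means is returned."""
    a = 1.0 / (2 * s1**2) - 1.0 / (2 * s2**2)
    b = m2 / s2**2 - m1 / s1**2
    c = m1**2 / (2 * s1**2) - m2**2 / (2 * s2**2) + math.log(s1 / s2)
    if abs(a) < 1e-12:                 # equal SDs: quadratic degenerates,
        return -c / b                  # cutoff is the midpoint of the means
    disc = math.sqrt(b**2 - 4 * a * c)
    roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
    lo, hi = sorted((m1, m2))
    return next(x for x in roots if lo <= x <= hi)

# With the reported group means and equal (hypothetical) SDs, the cutoff
# is simply the midpoint; unequal SDs shift it toward the tighter group.
cutoff = contrasting_groups_cutoff(44.6, 4.5, 59.9, 4.5)  # -> 52.25
```

That the study's standard (49.3%) lies below the midpoint of the two means is consistent with the groups having unequal score spreads.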

Conclusion: We explored validity evidence supporting an otoscopy simulator's automated score, demonstrating that this score mainly reflects cognitive skills. Manual rating of technical skills therefore still seems necessary at this point, which requires external video-recording for valid assessment. To improve reliability, the test course should include more cases to achieve a higher G-coefficient, and a higher pass/fail standard should be used.
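The recommendation to add cases can be made concrete with a one-facet decision (D) study: for a persons-by-cases design, multiplying the number of cases by a factor k changes the G-coefficient according to the Spearman-Brown prophecy formula. A minimal sketch, taking only the reported G = .69 from the study (the doubling scenario is our illustration):

```python
def projected_g(g_now: float, k: float) -> float:
    """Spearman-Brown projection of a generalizability coefficient when
    the number of cases is multiplied by k (one-facet p x c design)."""
    return k * g_now / (1 + (k - 1) * g_now)

# Doubling the number of cases from the reported G = .69:
# projected_g(0.69, 2) -> about .82
```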


Source
http://dx.doi.org/10.1177/00034894241288434


Similar Publications

Mismatch negativity in children with developmental dyslexia.

Int J Pediatr Otorhinolaryngol

January 2025

Department of Health and Human Communication, Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, Rio Grande do Sul, Brazil.

Objective: To describe and compare the latencies and amplitudes of Mismatch Negativity between children with and without Developmental Dyslexia.

Methods: Cross-sectional, comparative study consisting of a study group of 52 children with developmental dyslexia and a control group of 52 children with typical development, matched by age and sex, aged between 9 years and 11 years 11 months. All participants underwent otoscopy, acoustic immittance measurements, pure-tone audiometry, speech audiometry, brainstem auditory evoked potential, and mismatch negativity testing.


Background: Otoscopic examination is a fundamental skill in pediatric care, crucial for diagnosing and managing ear conditions such as otitis media. Traditional training methods for pediatric otoscopic examination often rely on adult standardized patients (SPs) or simulated models, which may not be adequate for pediatric examinations.

Objectives: This study evaluates the feasibility and effectiveness of use of children as SPs in Objective Structured Clinical Examinations (OSCEs) to assess medical students' competency in pediatric otoscopy.

Article Synopsis
  • It involved a randomized controlled trial with 10 resident doctors, comparing traditional teaching methods with those incorporating otological endoscopy, assessing outcomes through exams and feedback.
  • Results showed that the experimental group scored significantly higher in theoretical and operational tests, indicating that teaching enhanced with endoscopic techniques improves learning and student confidence.

Gathering Validity Evidence for a Simulation-Based Test of Otoscopy Skills.

Ann Otol Rhinol Laryngol

February 2025

Department of Otorhinolaryngology, Head & Neck Surgery & Audiology, Rigshospitalet, Copenhagen, Denmark.


Background/objectives: New teaching methods are warranted to meet the demand for increased flexibility in medical education while making optimal use of the limited resources of educators. The COVID-19 pandemic forced universities to resort to online-only teaching, even for training of psychomotor skills. The objectives of this study were: (I) to investigate the performance of students without previous experience in ear, nose and throat (ENT) examination after completing an asynchronous online teaching course in an objective structured clinical examination (OSCE) and (II) to evaluate the degree of over- and underestimation of their abilities.

