Background: The outcome of an assessment is determined by the standard-setting method used. Standard setting is the process of deciding what level of performance is good enough to pass. A fixed cutoff score of 50% has commonly been used in Malaysian dental schools. This study aimed to compare the conventional (absolute), norm-referenced, and modified-Angoff standard-setting methods.
Methods: The norm-referenced method of standard setting was applied to the real scores of 40 final-year dental students on a multiple-choice question (MCQ) paper, a short-answer question (SAQ) paper, and an objective structured clinical examination (OSCE). A panel of 10 judges set the standard for the same paper using the modified-Angoff method in one sitting; to assess intra-rater (test-retest) reliability, one judge re-set the passing scores of 10 OSCE questions after 2 weeks. The grades and pass/fail rates derived from the absolute, norm-referenced, and modified-Angoff standards were compared, and the intra-rater and inter-rater reliabilities of the modified-Angoff method were assessed.
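To make the contrast between the three cutoff calculations concrete, here is a minimal Python sketch on simulated scores. The mean-minus-one-SD norm-referenced rule, the item count, and all numbers are illustrative assumptions; the abstract does not report the study's exact formulas or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration only: 40 candidates' percentage scores on the
# combined paper (the study's real data are not reproduced here).
scores = rng.normal(loc=65, scale=10, size=40).clip(0, 100)

# 1) Absolute (conventional) standard: fixed 50% cutoff, as in the study.
absolute_cutoff = 50.0

# 2) Norm-referenced standard: one common variant sets the cutoff relative
#    to the cohort, e.g. mean minus one standard deviation. The abstract
#    does not state the exact rule used, so this is an assumption.
norm_cutoff = scores.mean() - scores.std(ddof=1)

# 3) Modified Angoff: each judge estimates, per item, the probability that
#    a minimally competent candidate answers correctly; the cutoff is the
#    mean of these estimates across judges and items.
n_judges, n_items = 10, 50                    # panel size per the abstract; item count assumed
judge_estimates = rng.uniform(0.4, 0.8, size=(n_judges, n_items))
angoff_cutoff = judge_estimates.mean() * 100  # expressed as a percentage score

for name, cutoff in [("absolute", absolute_cutoff),
                     ("norm-referenced", norm_cutoff),
                     ("modified Angoff", angoff_cutoff)]:
    pass_rate = (scores >= cutoff).mean() * 100
    print(f"{name}: cutoff = {cutoff:.1f}%, pass rate = {pass_rate:.0f}%")
```

Because the three cutoffs differ, the same score distribution yields three different pass rates, which is the effect the study quantifies.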
Results: The passing rate for the absolute standard was 100% (40/40), for the norm-referenced method it was 62.5% (25/40), and for the modified-Angoff method it was 80% (32/40). The modified-Angoff method had good inter-rater reliability of 0.876 and excellent test-retest reliability of 0.941.
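The abstract reports the reliability coefficients without naming the statistics. Intraclass correlation is typical for Angoff panels; the self-contained sketch below instead uses Cronbach's alpha for inter-rater consistency and Pearson's r for test-retest, on simulated judge data, purely to illustrate how such coefficients are derived.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Angoff estimates: 10 OSCE items (rows) rated by 10 judges
# (columns). Judges see a shared "true" item difficulty plus noise.
item_difficulty = rng.uniform(0.4, 0.8, size=(10, 1))
ratings = item_difficulty + rng.normal(0, 0.05, size=(10, 10))

def cronbach_alpha(x):
    """x: items (rows) x judges (columns); consistency across judges."""
    k = x.shape[1]
    judge_vars = x.var(axis=0, ddof=1).sum()  # sum of per-judge variances
    total_var = x.sum(axis=1).var(ddof=1)     # variance of item totals
    return k / (k - 1) * (1 - judge_vars / total_var)

# Test-retest: the same judge rates the 10 items again after 2 weeks.
time1 = item_difficulty.ravel() + rng.normal(0, 0.03, size=10)
time2 = time1 + rng.normal(0, 0.03, size=10)
r = np.corrcoef(time1, time2)[0, 1]

print(f"inter-rater (Cronbach's alpha): {cronbach_alpha(ratings):.3f}")
print(f"test-retest (Pearson's r):      {r:.3f}")
```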
Conclusion: There were significant differences in the outcomes of these three standard-setting methods, as shown by the difference in the proportion of candidates who passed and failed the assessment. The modified-Angoff method was found to have good reliability for use with a professional qualifying dental examination.
DOI: http://dx.doi.org/10.1002/jdd.12600
BMC Med Educ
November 2024
Education, Learning and Assessment, Royal Australian College of Physicians, Executive General Manager, Sydney, NSW, Australia.
Background: The assessment of team performance within large-scale Interprofessional Learning (IPL) initiatives is an important but underexplored area. It is essential for demonstrating the effectiveness of collaborative learning outcomes in preparing students for professional practice. Using Kane's validity framework, we investigated whether peer assessment of student-produced videos depicting collaborative teamwork in an IPL activity was sufficiently valid for decision-making about team performance, and where the sources of error might lie to optimise future iterations of the assessment.
Curr Pharm Teach Learn
June 2024
University of Florida College of Pharmacy, 1225 Center Drive, Gainesville, FL 32610, United States of America. Electronic address:
Background And Purpose: To describe one institution's approach to transforming high-stakes objective structured clinical examinations (OSCEs) from norm-referenced to criterion-referenced standard setting, and to evaluate the impact of these changes on OSCE performance and pass rates.
Educational Activity And Setting: The OSCE writing team at the college selected a modified Angoff method appropriate for high-stakes assessments to replace the two standard deviation method previously used. Each member of the OSCE writing team independently reviewed the analytical checklist and calculated a passing score for active stations on OSCEs.
J Health Popul Nutr
December 2023
UNICEF, Data and Analytics Section, 3 UN Plaza, New York, NY, 10017, USA.
Background: Standards of early childhood development (ECD) are needed to determine whether children living in different contexts are developmentally on track. The Early Childhood Development Index 2030 (ECDI2030) is a population-level measure intended for use in household surveys to collect globally comparable data on one of the indicators chosen to monitor progress toward target 4.2 of the Sustainable Development Goals: the proportion of children aged 24-59 months who are developmentally on track in health, learning and psychosocial well-being.
BMC Med Educ
September 2023
Imperial College School of Medicine, Imperial College London, London, UK.
Background: Automated Item Generation (AIG) uses computer software to create multiple items from a single question model. There is currently a lack of data looking at whether item variants to a single question result in differences in student performance or human-derived standard setting. The purpose of this study was to use 50 Multiple Choice Questions (MCQs) as models to create four distinct tests which would be standard set and given to final year UK medical students, and then to compare the performance and standard setting data for each.
Br J Clin Pharmacol
October 2023
Faculty of Medicine and Health, Sydney Medical School, The University of Sydney, Sydney, New South Wales, Australia.
Aims: The UK Prescribing Safety Assessment was modified for use in Australia and New Zealand (ANZ) as the Prescribing Skills Assessment (PSA). We investigated the implementation, student performance and acceptability of the ANZ PSA for final-year medical students.
Methods: This study used a mixed-method approach involving student data (n = 6440) for 2017-2019 (PSA overall score and 8 domain subscores).