Introduction: Multiple-choice questions (MCQs) have been recognized as reliable assessment tools, and incorporating clinical scenarios in MCQ stems has enhanced their effectiveness in evaluating knowledge and understanding. Item analysis is used to assess the reliability and consistency of MCQs, indicating their suitability as an assessment tool. This study aims to ensure the competence of graduates in serving the community and to establish an examination bank for the surgery course.
Objective: This study aims to assess the quality and acceptability of MCQs in the surgery course at the University of Bisha College of Medicine (UBCOM).
Methods: A psychometric study evaluated the quality of MCQs used in surgery examinations from 2019 to 2023 at UBCOM in Saudi Arabia. The MCQs/items were analyzed and categorized by their difficulty index (DIF), discrimination index (DI), and distractor efficiency (DE). Fifth-year MBBS students undergo a rotation in the department and are assessed at the end of 12 weeks; the assessment includes 60 MCQs/items as well as written items. Data were collected and analyzed using SPSS version 24.
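For context, a minimal sketch of how these item-analysis indices are commonly computed is shown below. The formulas follow standard conventions (27% upper/lower groups for DI, a 5% response cutoff for a functional distractor); the exact definitions and thresholds used at UBCOM are not stated in the abstract, so the function and its parameters are illustrative assumptions.

```python
import numpy as np

def item_analysis(item_responses, correct_option, total_scores, group_fraction=0.27):
    """Illustrative item-analysis indices for a single MCQ.

    item_responses: chosen option per student (e.g. 'A'..'D')
    correct_option: the keyed answer for this item
    total_scores:   each student's overall test score, used to rank upper/lower groups
    group_fraction: share of students in each of the upper and lower groups (27% is conventional)
    """
    item_responses = np.asarray(item_responses)
    correct = item_responses == correct_option
    n = len(item_responses)

    # Difficulty index (DIF): percentage of all examinees answering correctly.
    dif = correct.mean() * 100

    # Discrimination index (DI): difference in correct answers between the
    # upper- and lower-scoring groups, divided by the group size.
    k = max(1, int(round(group_fraction * n)))
    order = np.argsort(total_scores)            # ascending by total score
    lower, upper = order[:k], order[-k:]
    di = (correct[upper].sum() - correct[lower].sum()) / k

    # Distractor efficiency (DE): a distractor counts as "functional" if chosen
    # by at least 5% of examinees; with 3 distractors per item, DE is the
    # functional share expressed as a percentage.
    distractors = set(item_responses) - {correct_option}
    functional = sum(1 for opt in distractors if (item_responses == opt).mean() >= 0.05)
    de = functional / 3 * 100

    return dif, di, de
```

Applying such a function per item across the 300 items and 189 examinees would yield the per-item DIF, DI, and DE distributions summarized in the Results.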
Results: A total of 189 students were examined across five test sessions, with 300 MCQ items. Student scores ranged from 28.33% to 90.0%, with an average score of 64.6% ± 4.35. The 300 MCQ items had a total of 900 distractors. The DIF was 75.3% for the items, and 63.3% of the items showed good discrimination. No item had a negative point-biserial correlation. The mean number of functional distractors per test item was 2.19 ± 1.007, with 34% of the items having three functional distractors.
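As an illustration of how the functional-distractor counts relate to DE, a common grading convention (assumed here; the abstract does not state the scheme used) assigns each of the three distractors an equal share of the item's DE:

```python
# Conventional DE grading for a 4-option MCQ (3 distractors): each functional
# distractor contributes one third of the item's DE. This mapping is an
# assumption for illustration, not taken from the paper.
DE_BY_FUNCTIONAL_COUNT = {0: 0.0, 1: 33.3, 2: 66.7, 3: 100.0}

def mean_de(functional_counts):
    """Test-level mean distractor efficiency from per-item functional counts."""
    return sum(DE_BY_FUNCTIONAL_COUNT[c] for c in functional_counts) / len(functional_counts)

# Hypothetical item pool: most items have 2-3 working distractors.
print(mean_de([3, 2, 3, 1, 2, 3, 0, 2]))   # approximately 66.7
```

Under this convention, the reported mean of 2.19 functional distractors per item would correspond to a mean DE of roughly 73%.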
Conclusion: The psychometric indices used to evaluate the MCQs in this study were encouraging, with acceptable DIF, distractor efficiencies, and item reliability. Providing robust faculty training and capacity-building is recommended to enhance item development skills.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785735 | PMC |
| http://dx.doi.org/10.7759/cureus.50441 | DOI Listing |