Purpose: Language comprehension in people with aphasia (PWA) is frequently evaluated using multiple-choice displays: PWA are asked to choose, from a display of images, the one that best corresponds to a verbal stimulus. When a nontarget image is selected, comprehension failure is assumed. However, stimulus-driven factors unrelated to linguistic comprehension may influence performance. In this study, we explored the influence of the physical image characteristics of multiple-choice displays on visual attention allocation by PWA.
Method: Eye fixations of 41 PWA were recorded while they viewed 40 multiple-choice image sets presented with and without verbal stimuli. Within each display, 3 images (the majority images) matched one another and 1 (the singleton image) differed from them in a single image characteristic. The mean proportion of fixation duration (PFD) allocated across the majority images was compared with the PFD allocated to the singleton image.
Results: PWA allocated significantly greater PFD to the singleton than to the majority images in both nonverbal and verbal conditions. Those with greater severity of comprehension deficits allocated greater PFD to nontarget singleton images in the verbal condition.
Conclusion: When using tasks that rely on multiple-choice displays and verbal stimuli, one cannot assume that verbal stimuli will override the effect of visual-stimulus characteristics.
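For intuition, here is a minimal sketch of how a PFD comparison of the kind described in the Method might be computed. The fixation records, image indices, and function name are hypothetical illustrations, not the study's analysis code:

```python
from statistics import mean

def pfd_by_image(fixations, image_ids):
    """Proportion of total fixation duration allocated to each image."""
    total = sum(d for _, d in fixations) or 1  # guard against empty input
    return {img: sum(d for i, d in fixations if i == img) / total
            for img in image_ids}

# Hypothetical display: images 0-2 form the majority set; image 3 is the singleton.
fixations = [(0, 180), (1, 220), (3, 610), (2, 150), (3, 340)]  # (image_id, duration_ms)
pfd = pfd_by_image(fixations, range(4))
majority_mean = mean(pfd[i] for i in (0, 1, 2))
print(f"majority mean PFD = {majority_mean:.3f}, singleton PFD = {pfd[3]:.3f}")
```

In this toy example the singleton draws a much larger share of fixation time (0.633 vs. a majority mean of 0.122), the pattern the study reports for PWA in both conditions.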
Source: PMC (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5755551) | DOI (http://dx.doi.org/10.1044/2017_JSLHR-L-16-0087)
PLoS One
January 2025
Faculty of Dentistry, PHENIKAA University, Hanoi, Vietnam.
Objectives: This study aims to evaluate the performance of the latest large language models (LLMs) in answering dental multiple-choice questions (MCQs), including both text-based and image-based questions.
Material And Methods: A total of 1490 MCQs from two board review books for the United States National Board Dental Examination were selected. This study evaluated six of the latest LLMs as of August 2024, including ChatGPT 4.
Brain Inj
January 2025
Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, Australia.
Introduction: Magnetic resonance imaging (MRI) has revolutionized our capacity to examine brain alterations in traumatic brain injury (TBI). However, little is known about the extent to which MRI techniques have been implemented in clinical practice for TBI, or about the associated obstacles.
Methods: A diverse set of health professionals completed 19 multiple-choice and free-text survey questions.
Curr Probl Diagn Radiol
January 2025
The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA.
The American Board of Radiology Core exam requires that trainees demonstrate knowledge of critical concepts across 12 domains spanning a range of imaging modalities and anatomic regions. Mobile apps have become popular components of medical and radiology education since the inception of smartphones. Numerous medical educational apps are accessible via smartphone devices and tablets, regardless of operating system, for medical training and learning purposes.
Cureus
December 2024
Medical Education, University of South Florida Morsani College of Medicine, Tampa, USA.
Background AI language models have been shown to achieve a passing score on certain imageless diagnostic tests of the USMLE. However, they have failed certain specialty-specific examinations. This suggests there may be a difference in AI ability by medical topic or question difficulty.
JMIR Med Educ
January 2025
Department of Ultrasound, Peking University First Hospital, 8 Xishiku Rd, Xicheng District, Beijing, 100034, China.
Background: Artificial intelligence advancements have enabled large language models to significantly impact radiology education and diagnostic accuracy.
Objective: This study evaluates the performance of mainstream large language models, including GPT-4, Claude, Bard, Tongyi Qianwen, and Gemini Pro, in radiology board exams.
Methods: A comparative analysis of 150 multiple-choice questions (without images) from radiology board exams was conducted.
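As a rough illustration of the scoring step such a comparison involves, the sketch below computes per-model accuracy from already-collected answers. The model names, answers, and answer key are hypothetical placeholders, not data from the study:

```python
def accuracy(predicted, answer_key):
    """Fraction of MCQ answers that match the answer key."""
    assert len(predicted) == len(answer_key)
    return sum(p == a for p, a in zip(predicted, answer_key)) / len(answer_key)

# Hypothetical answer key and model outputs (option letters), for illustration only.
answer_key = ["B", "D", "A", "C"]
model_answers = {
    "model_a": ["B", "D", "A", "A"],
    "model_b": ["B", "C", "A", "C"],
}
for name, preds in model_answers.items():
    print(f"{name}: {accuracy(preds, answer_key):.1%}")
```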