Medical Extended Reality for Radiology Education and Training.

J Am Coll Radiol

Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts; Executive Director, Medical Extended Reality Lab, Mass General Brigham, Boston, Massachusetts; Director of Interventional Radiology Research, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts.

Published: October 2024

Medical extended reality (MXR), encompassing augmented reality, virtual reality, and mixed reality, presents a novel paradigm in radiology training by offering immersive, interactive, and realistic learning experiences in health care. Although traditional educational tools in radiology remain essential, it is necessary to capitalize on the innovative and emerging educational applications of extended reality (XR) technologies. At the most basic level of learning anatomy, XR has been used extensively, with evidence of its superiority over conventional learning methods, especially in spatial understanding and recall. For imaging interpretation, XR has fostered the concept of virtual reading rooms by enabling collaborative learning environments and enhancing image analysis and understanding. Moreover, image-guided interventions in interventional radiology have witnessed an uptick in XR utilization, illustrating its effectiveness in procedural training and skill acquisition for medical students and residents in a safe, risk-free environment. However, several challenges and limitations remain for XR in radiology education, including technological, economic, and ergonomic challenges and integration into existing curricula. This review explores the transformative potential of MXR in radiology education and training, along with insights on the future of XR in the field, forecasting advancements in immersive simulations, artificial intelligence integration for personalized learning, and the potential of cloud-based XR platforms for remote and collaborative training. In summation, MXR's burgeoning role in reshaping radiology education offers a safer, more scalable, and more efficient training model that aligns with the dynamic health care landscape.


Source: http://dx.doi.org/10.1016/j.jacr.2024.05.006


Similar Publications

Purpose: To create three-dimensional (3D) anatomical models of diaphyseal fractures in dogs (3D AMDFD) and to evaluate the models from their radiographs.

Methods: The study consisted of six stages: preparation of a femur from a healthy dog cadaver; digitization of the bone with a 3D scanner to create a base model; 3D modeling based on the base model to reproduce five different types of diaphyseal fractures; printing the resulting models on a 3D printer with a thermoplastic material; insertion of neodymium magnets along the fracture line to allow assembly and disassembly of the parts; and radiography of the 3D AMDFD in lateromedial and craniocaudal positions.

Results: The base model and 3D AMDFD replicated bone structures with high precision, closely resembling the natural bone.


Good practices in artificial intelligence (AI) model validation are key to achieving trustworthy AI. Focusing on the cancer imaging domain, and of interest to both clinical and technical AI practitioners, this work discusses current gaps in AI validation strategies, examining practices that are common or variable across technical groups (TGs) and clinical groups (CGs). The work is based on a set of structured questions covering several AI validation topics, addressed to professionals working in AI for medical imaging.


Background: The large language model ChatGPT can now accept image input with the GPT4-vision (GPT4V) version. We aimed to compare the performance of GPT4V to pretrained U-Net and vision transformer (ViT) models for the identification of the progression of multiple sclerosis (MS) on magnetic resonance imaging (MRI).

Methods: Paired coregistered MR images with and without progression were provided as input to GPT4V in a zero-shot experiment to identify radiologic progression.

