ENTICE applied co-creative methodologies to build a robust creation pipeline for medical experiential content. The project developed and evaluated immersive learning resources and tools that support well-defined learning objectives through tangible and intangible resources (AR/VR/MR, 3D printing), which are in high demand in the fields of anatomy and surgery. This paper presents preliminary results from the evaluation of the learning resources and tools in three countries, together with the lessons learnt, with a view to improving the medical education process.


Source: http://dx.doi.org/10.3233/SHTI230167

Publication Analysis

Top Keywords

learning resources: 8
resources tools: 8
streamlining tangible: 4
tangible printed: 4
printed intangible: 4
intangible content: 4
content creation: 4
creation evaluation: 4
evaluation entice: 4
entice experience: 4

Similar Publications

Global Use, Adaptation, and Sharing of Massive Open Online Courses for Emergency Health on the OpenWHO Platform: Survey Study.

J Med Internet Res

January 2025

Learning and Capacity Development Unit, Health Emergencies Programme, World Health Organization, Geneva, Switzerland.

Background: The COVID-19 pandemic demonstrated the global need for accessible content to rapidly train health care workers during health emergencies. The massive open online course (MOOC) format is a broadly embraced strategy for the widespread dissemination of training. Yet barriers associated with technology access, language, and cultural context limit the use of MOOCs, particularly in lower-resource communities.


The Impact of Artificial Intelligence and Machine Learning in Organ Retrieval and Transplantation: A Comprehensive Review.

Curr Res Transl Med

January 2025

Department of Research and Innovation, Medway NHS Foundation Trust, Gillingham ME7 5NY, United Kingdom; Faculty of Medicine, Health and Social Care, Canterbury Christ Church University, United Kingdom.

This narrative review examines the transformative role of Artificial Intelligence (AI) and Machine Learning (ML) in organ retrieval and transplantation. AI and ML technologies enhance donor-recipient matching by integrating and analyzing complex datasets encompassing clinical, genetic, and demographic information, leading to more precise organ allocation and improved transplant success rates. In surgical planning, AI-driven image analysis automates organ segmentation, identifies critical anatomical features, and predicts surgical outcomes, aiding pre-operative planning and reducing intraoperative risks.
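As a concrete illustration of the donor-recipient matching idea described above, here is a minimal, hypothetical sketch: a gradient-boosted classifier trained on synthetic donor/recipient features to rank candidate pairs by predicted graft survival. The feature names, generative rule, and data are illustrative assumptions, not the models discussed in the review.

```python
# Minimal, hypothetical sketch of ML-based donor-recipient matching:
# a classifier scores candidate pairs by predicted graft survival.
# All feature names and data below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic clinical features for donor-recipient pairs.
X = np.column_stack([
    rng.normal(45, 15, n),   # donor age
    rng.normal(50, 15, n),   # recipient age
    rng.integers(0, 3, n),   # HLA mismatches (simplified 0-2 scale)
    rng.normal(8, 4, n),     # cold ischemia time (hours)
])
# Synthetic outcome: 1 = graft survival at one year (toy generative rule).
logit = 2.0 - 0.02 * X[:, 0] - 0.4 * X[:, 2] - 0.08 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# Rank held-out candidate pairs by predicted survival probability.
scores = model.predict_proba(X_te)[:, 1]
print("held-out accuracy:", model.score(X_te, y_te))
print("top-ranked pair index:", scores.argmax())
```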


Purpose: To explore perceptions of student learning in undergraduate nursing students who repeat the fundamentals nursing course and simultaneously take a support course.

Methods: This qualitative descriptive study was conducted at one private liberal arts college. It comprised interviews with six undergraduate baccalaureate nursing students who repeated the fundamentals course, examining their perceptions after completing it.


Background: We aimed to identify the central lifestyle, the most impactful among lifestyle factor clusters; the central health outcome, the most impactful among health outcome clusters; and the bridge lifestyle, the most strongly connected to health outcome clusters, across 29 countries to optimise resource allocation for local holistic health improvements.

Methods: From July 2020 to August 2021, we surveyed 16 461 adults across 29 countries who self-reported changes in 18 lifestyle factors and 13 health outcomes due to the pandemic. Three networks were generated by network analysis for each country: lifestyle, health outcome, and bridge networks.
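To make the network terminology concrete, here is a minimal sketch of how a "central" node (highest weighted degree) and a "bridge" node (strongest connection across clusters) might be identified with networkx. The node names and edge weights are invented for illustration, and the study's actual network-estimation procedure is not reproduced here.

```python
# Minimal sketch of identifying central and bridge nodes in a
# lifestyle/health-outcome network. Edge weights are invented.
import networkx as nx

G = nx.Graph()
# Lifestyle-lifestyle edges (weights ~ association strength).
G.add_edge("sleep", "exercise", weight=0.4)
G.add_edge("diet", "exercise", weight=0.3)
# Lifestyle-outcome edges cross the two clusters.
G.add_edge("sleep", "anxiety", weight=0.5)
G.add_edge("exercise", "fatigue", weight=0.2)

lifestyles = {"sleep", "exercise", "diet"}

# "Central" lifestyle: highest weighted degree (strength).
strength = dict(G.degree(weight="weight"))
central = max(lifestyles, key=lambda n: strength.get(n, 0.0))

# "Bridge" lifestyle: strongest total connection to outcome nodes.
def bridge_strength(node):
    return sum(d["weight"] for _, v, d in G.edges(node, data=True)
               if v not in lifestyles)

bridge = max(lifestyles, key=bridge_strength)
print("central lifestyle:", central, "| bridge lifestyle:", bridge)
```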


CLEFT: Language-Image Contrastive Learning with Efficient Large Language Model and Prompt Fine-Tuning.

Med Image Comput Comput Assist Interv

October 2024

Department of Biomedical Engineering, Yale University, New Haven, CT, USA.

Recent advancements in Contrastive Language-Image Pre-training (CLIP) have demonstrated notable success in self-supervised representation learning across various tasks. However, existing CLIP-like approaches often demand extensive GPU resources and prolonged training times, owing to the considerable size of the model and dataset, making them poorly suited to medical applications, where large datasets are not always available. Meanwhile, the language model prompts are mainly derived manually from labels tied to images, potentially overlooking the richness of information within training samples.
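For readers unfamiliar with the objective the abstract builds on, the following is a minimal PyTorch sketch of the standard symmetric image-text contrastive (InfoNCE) loss used by CLIP-style models. It illustrates the generic technique only, not CLEFT's specific architecture or prompt fine-tuning; the function name and batch shapes are assumptions for the example.

```python
# Minimal sketch of the symmetric CLIP-style contrastive loss.
# Illustrates the generic objective, not CLEFT's specific method.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # L2-normalize so dot products are cosine similarities.
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    # Pairwise similarity matrix for the batch, scaled by temperature.
    logits = img @ txt.t() / temperature
    # Matched image-text pairs lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: image-to-text and text-to-image.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Toy usage with random embeddings standing in for encoder outputs.
loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```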

