Integrating Artificial Intelligence (AI) Simulations Into Undergraduate Nursing Education: An Evolving AI Patient.

Nurs Educ Perspect

About the Authors: Chelsea Lebo, MSN, RN, MEDSURG-BC, CHSE, is simulation coordinator, Department of Nursing, The College of New Jersey, Ewing. Norma Brown, MSN, RN, CHSE, is emeritus staff, Department of Nursing, The College of New Jersey. For more information, contact Chelsea Lebo.

Published: December 2023

Using an evolving artificial intelligence (AI) virtual patient that ages with students as they progress through the nursing program is an innovative use of simulation. Students are introduced to the AI patient as sophomores, when they begin with basic patient interviewing and assessment skills. They revisit the AI patient as juniors and seniors in their medical-surgical courses, where they see the patient aging and developing complex medical conditions. As the AI patient and the student grow together, student competence increases. Students complete an evaluation at the conclusion of each simulation experience.

Source: http://dx.doi.org/10.1097/01.NEP.0000000000001081

Publication Analysis

Top Keywords

artificial intelligence (8); patient (7); integrating artificial (4); intelligence simulations (4); simulations undergraduate (4); undergraduate nursing (4); nursing education (4); education evolving (4); evolving patient (4); patient utilizing (4)

Similar Publications

Background: Superagers, older adults with exceptional cognitive abilities, show preserved brain structure compared to typical older adults. We investigated whether superagers have biologically younger brains based on their structural integrity.

Methods: A cohort of 153 older adults (aged 61-93) was recruited, with 63 classified as superagers based on superior episodic memory and 90 as typical older adults, of whom 64 were followed up after two years.

Systematic Review of Hybrid Vision Transformer Architectures for Radiological Image Analysis.

J Imaging Inform Med

January 2025

School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ, USA.

Vision transformers (ViTs) and convolutional neural networks (CNNs) each possess distinct strengths in medical imaging: ViTs excel at capturing long-range dependencies through self-attention, while CNNs are adept at extracting local features via spatial convolution filters. ViTs may struggle to capture the detailed local spatial information that is critical for tasks like anomaly detection in medical imaging, while shallow CNNs often fail to effectively abstract global context. This study aims to explore and evaluate hybrid architectures that integrate ViTs and CNNs to leverage their complementary strengths for enhanced performance in medical vision tasks such as segmentation, classification, reconstruction, and prediction.
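
As a rough illustration of the hybrid pattern such reviews examine, the sketch below is a hypothetical PyTorch example, not an architecture from this publication: a shallow CNN stem extracts local features, and a transformer encoder models global context over the resulting tokens. Module names, layer counts, and sizes are assumptions chosen only to keep the example small.

```python
# Minimal CNN + ViT hybrid sketch (PyTorch assumed; names and sizes are illustrative).
import torch
import torch.nn as nn

class ConvStem(nn.Module):
    """Shallow CNN that extracts local features and downsamples the image."""
    def __init__(self, in_ch=1, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):  # (B, C, H, W) -> (B, dim, H/8, W/8)
        return self.net(x)

class HybridViT(nn.Module):
    """Conv stem for local features, transformer encoder for global context."""
    def __init__(self, in_ch=1, dim=128, depth=4, heads=4, num_classes=2, img_size=224):
        super().__init__()
        self.stem = ConvStem(in_ch, dim)
        n_tokens = (img_size // 8) ** 2
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_tokens + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        feats = self.stem(x)                       # (B, dim, h, w)
        tokens = feats.flatten(2).transpose(1, 2)  # (B, h*w, dim): CNN features as tokens
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        encoded = self.encoder(tokens)             # self-attention mixes global context
        return self.head(encoded[:, 0])            # classify from the [CLS] token

# Example: a batch of single-channel 224x224 scans, binary classification.
logits = HybridViT()(torch.randn(2, 1, 224, 224))  # -> shape (2, 2)
```

The key design choice in this sketch is feeding CNN feature maps, rather than raw image patches, into the self-attention layers, which is one common way hybrid models try to combine local and global modeling.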

Rib pathology is uniquely difficult and time-consuming for radiologists to diagnose. AI can reduce radiologist workload and serve as a tool to improve diagnostic accuracy. To date, no reviews have synthesized data on AI-based identification of rib fractures, its diagnostic performance on X-ray and CT scans, or its comparison with physicians.

Rising computed tomography (CT) workloads require more efficient image interpretation methods. Digitally reconstructed radiographs (DRRs), generated from CT data, may enhance workflow efficiency by enabling faster radiological assessments. Various techniques exist for generating DRRs.
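
For intuition, one simple way to generate a DRR is a parallel-ray projection that integrates attenuation through the CT volume. The NumPy sketch below is an illustrative toy example, with a synthetic volume and arbitrary scaling constants; it is not a method evaluated in the publication.

```python
# Illustrative DRR sketch: a parallel-projection "ray sum" through a CT volume.
# The volume is synthetic and the scaling/windowing values are arbitrary assumptions.
import numpy as np

def simple_drr(ct_hu: np.ndarray, axis: int = 1, mu_water: float = 0.2) -> np.ndarray:
    """Project a CT volume (in Hounsfield units) into a 2-D radiograph-like image."""
    # Convert HU to approximate linear attenuation coefficients.
    mu = mu_water * (1.0 + ct_hu / 1000.0)
    mu = np.clip(mu, 0.0, None)
    # Integrate attenuation along the chosen axis (parallel rays).
    line_integral = mu.sum(axis=axis)
    # Beer-Lambert: convert to transmitted intensity, then invert for film-like contrast.
    drr = 1.0 - np.exp(-line_integral * 0.01)  # 0.01 ~ voxel spacing, arbitrary units
    # Normalize to [0, 1] for display.
    return (drr - drr.min()) / (drr.max() - drr.min() + 1e-8)

# Example with a synthetic 128^3 volume of soft tissue plus a dense "bone" block.
volume = np.full((128, 128, 128), 40.0)    # ~soft-tissue HU
volume[40:90, 60:70, 40:90] = 700.0        # ~bone HU
image = simple_drr(volume)                 # (128, 128) array, values in [0, 1]
```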

Multi-class Classification of Retinal Eye Diseases from Ophthalmoscopy Images Using Transfer Learning-Based Vision Transformers.

J Imaging Inform Med

January 2025

College of Engineering, Department of Computer Engineering, Koç University, Rumelifeneri Yolu, 34450, Sarıyer, Istanbul, Turkey.

This study explores a transfer learning approach with vision transformers (ViTs) and convolutional neural networks (CNNs) for classifying retinal diseases, specifically diabetic retinopathy, glaucoma, and cataracts, from ophthalmoscopy images. Using a balanced subset of 4217 images and ophthalmology-specific pretrained ViT backbones, this method demonstrates significant improvements in classification accuracy, offering potential for broader applications in medical imaging. Glaucoma, diabetic retinopathy, and cataracts are common eye diseases that can cause vision loss if not treated.
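
A minimal transfer-learning setup of this kind might look like the following sketch, which substitutes torchvision's ImageNet-pretrained ViT-B/16 for the ophthalmology-specific backbones the study describes; the class list, freezing policy, and hyperparameters are assumptions for illustration only.

```python
# Transfer-learning sketch: fine-tune a pretrained ViT head for multi-class retinal images.
# torchvision's ImageNet ViT-B/16 stands in for the ophthalmology-specific backbones
# described in the abstract; classes, freezing policy, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

CLASSES = ["normal", "diabetic_retinopathy", "glaucoma", "cataract"]

weights = ViT_B_16_Weights.IMAGENET1K_V1
model = vit_b_16(weights=weights)

# Freeze the backbone and replace the classification head for four retinal classes.
for p in model.parameters():
    p.requires_grad = False
model.heads.head = nn.Linear(model.heads.head.in_features, len(CLASSES))

optimizer = torch.optim.AdamW(model.heads.head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
preprocess = weights.transforms()  # resizing/normalization expected by the backbone

# One illustrative training step on a dummy batch of ophthalmoscopy images;
# real inputs would come from preprocess() applied to dataset images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(CLASSES), (8,))
logits = model(images)             # (8, 4)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```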
