A Multimodal Learning Approach in an Undergraduate Palliative Care Nursing Unit of Study.

J Nurs Educ

Susan Wakil School of Nursing and Midwifery, Sydney Nursing School, Faculty of Medicine and Health, The University of Sydney.

Published: November 2024


Source
DOI: http://dx.doi.org/10.3928/01484834-20240613-03

Publication Analysis

Top Keywords

multimodal learning (4)
learning approach (4)
approach undergraduate (4)
undergraduate palliative (4)
palliative care (4)
care nursing (4)
nursing unit (4)
unit study (4)
multimodal (1)
approach (1)

Similar Publications

A Comparison Study of Person Identification Using IR Array Sensors and LiDAR.

Sensors (Basel)

January 2025

Faculty of Science and Technology, Keio University, Yokohama 223-8522, Japan.

Person identification is a critical task in applications such as security and surveillance, requiring reliable systems that perform robustly under diverse conditions. This study evaluates Vision Transformer (ViT) and ResNet34 models across three modalities (RGB, thermal, and depth), using datasets collected with infrared array sensors and LiDAR sensors in controlled scenarios at varying resolutions (16 × 12 to 640 × 480), to explore their effectiveness in person identification. Preprocessing techniques, including YOLO-based cropping, were employed to improve subject isolation.


Multi-Task Federated Split Learning Across Multi-Modal Data with Privacy Preservation.

Sensors (Basel)

January 2025

State Key Laboratory of Intelligent Vehicle Safety Technology, Chongqing 401133, China.

With the advancement of federated learning (FL), there is a growing demand for schemes that support multi-task learning on multi-modal data while ensuring robust privacy protection, especially in applications like intelligent connected vehicles. Traditional FL schemes often struggle with the complexities introduced by multi-modal data and diverse task requirements, such as increased communication overhead and computational burdens. In this paper, we propose a novel privacy-preserving scheme for multi-task federated split learning across multi-modal data (MTFSLaMM).


EdgeNet: An End-to-End Deep Neural Network Pretrained with Synthetic Data for a Real-World Autonomous Driving Application.

Sensors (Basel)

December 2024

División de Sistemas e Ingeniería Electrónica (DSIE), Campus Muralla del Mar, s/n, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain.

This paper presents a novel end-to-end architecture based on edge detection for autonomous driving. The architecture has been designed to bridge the domain gap between synthetic and real-world images for end-to-end autonomous driving applications and includes custom edge detection layers before the EfficientNet convolutional module. To train the architecture, RGB and depth images were used together with inertial data as inputs to predict the driving speed and steering wheel angle.


Megavoltage computed tomography (MVCT) plays a crucial role in patient positioning and dose reconstruction during tomotherapy. However, due to the limited scan field of view (sFOV), the entire cross-section of certain patients may not be fully covered, resulting in projection data truncation. Truncation artifacts in MVCT can compromise registration accuracy with the planned kilovoltage computed tomography (KVCT) and hinder subsequent MVCT-based adaptive planning.


Accurate depth estimation is crucial for many fields, including robotics, navigation, and medical imaging. However, conventional depth sensors often produce low-resolution (LR) depth maps, making detailed scene perception challenging. To address this, enhancing LR depth maps to high-resolution (HR) ones has become essential, guided by HR-structured inputs like RGB or grayscale images.

