Aim: Driven by the COVID-19 pandemic and technological innovation, robots are playing an increasingly important role in nursing services. While previous studies have investigated nurses' negative attitudes towards robots, we lack an understanding of nurses' preferences regarding robot characteristics. Our aim was to explore how key robot features compare when weighed against one another.

Methods: A cross-sectional research design based on a conjoint analysis approach was used. The robot dimensions tested were: (1) communication; (2) look; (3) safety; (4) self-learning ability; and (5) interactive behaviour. Participants were asked to rank robot profile cards from most to least preferred.

Results: In order of importance, the robot's ability to learn ranked first, followed by behaviour, look, operating safety and communication. The most preferred robot combination was 'robot responds to commands only, looks like a machine, never misses target, runs programme only and behaves friendly'.
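The ranking-based conjoint procedure described in the Methods can be sketched as follows. This is a minimal illustration, not the study's actual analysis: the attribute levels paraphrase the study's five dimensions, but the part-worths, the simulated respondent ranking, and all numbers are invented for demonstration.

```python
import itertools
import numpy as np

# Five two-level attributes paraphrasing the study's dimensions
# (level 0 = first listed level, level 1 = second).
attributes = {
    "communication": ["commands only", "initiates conversation"],
    "look": ["machine-like", "human-like"],
    "safety": ["never misses target", "sometimes misses"],
    "self_learning": ["programme only", "self-learns"],
    "behaviour": ["friendly", "neutral"],
}

# Full-factorial design: all 32 profiles, dummy-coded (1 = second level).
profiles = np.array(list(itertools.product([0, 1], repeat=len(attributes))))

# Invented "true" part-worths for one hypothetical respondent (negative =
# dislikes the second level); used only to fabricate a ranking.
true_partworths = np.array([-0.1, -0.4, -0.2, -1.6, -0.8])
utility = profiles @ true_partworths

# The respondent ranks profiles from most (rank 1) to least preferred;
# convert ranks to a score where higher = more preferred.
ranks = np.argsort(np.argsort(-utility)) + 1
scores = len(profiles) + 1 - ranks

# Estimate part-worths by least squares: score ~ intercept + attribute dummies.
X = np.column_stack([np.ones(len(profiles)), profiles])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
partworths = coef[1:]

# Relative importance of each attribute = its utility range over the total
# (with two levels, the range is just the absolute part-worth).
ranges = np.abs(partworths)
importance = 100 * ranges / ranges.sum()

for name, imp in zip(attributes, importance):
    print(f"{name}: {imp:.1f}%")
```

With these invented part-worths, self-learning dominates the importance scores and communication matters least, mirroring the ordering reported in the Results; real conjoint studies estimate such part-worths from many respondents' rankings, typically over a fractional-factorial subset of profiles rather than the full design.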

Conclusions: Robot self-learning capacity was the least favoured feature among nurses, suggesting a potential fear of robots taking over core nursing competencies.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9748045
DOI: http://dx.doi.org/10.1002/nop2.1282

Publication Analysis

Top Keywords
robot features (8); robot (7); nurse preferences (4); preferences caring (4); robots (4); caring robots (4); robots conjoint (4); conjoint experiment (4); experiment explore (4); explore valued (4)

Similar Publications

Although the Transformer architecture has established itself as the de facto standard for natural language processing tasks, it still has few applications in computer vision. In vision, attention is either used in conjunction with convolutional networks or to replace individual components of convolutional networks while preserving the overall network design. Differences between the two domains, such as large variations in the scale of visual entities and the far higher granularity of pixels in images compared with words in text, make it difficult to transfer the Transformer from language to vision.


Vision transformer-based multimodal fusion network for classification of tumor malignancy on breast ultrasound: A retrospective multicenter study.

Int J Med Inform

January 2025

School of Computer Science and Engineering, Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan, PR China.

Background: In routine breast cancer diagnosis, precise discrimination between benign and malignant breast masses is of utmost importance. Notably, few prior investigations have concurrently explored the integration of imaging histology features, deep learning characteristics, and clinical parameters. The primary objective of this retrospective study was to develop a multimodal feature fusion model for predicting breast tumour malignancy from ultrasound images.


Metamaterials are pushing the limits of traditional materials and represent a fascinating frontier of scientific innovation. Mechanical metamaterials (MMs) are a category of metamaterials that display properties and performance that cannot be realized in conventional materials. Exploring their mechanical properties, along with various aspects of vibration and damping control, is becoming a crucial research area.


Cross-Modal Collaboration and Robust Feature Classifier for Open-Vocabulary 3D Object Detection.

Sensors (Basel)

January 2025

The 54th Research Institute, China Electronics Technology Group Corporation, College of Signal and Information Processing, Shijiazhuang 050081, China.

Multi-sensor fusion, such as LiDAR- and camera-based 3D object detection, is a key technology in autonomous driving and robotics. However, traditional 3D detection models are limited to recognizing predefined categories and struggle with unknown or novel objects. Given the complexity of real-world environments, research into open-vocabulary 3D object detection is essential.


The Application of an Intelligent -Harvesting Device Based on FES-YOLOv5s.

Sensors (Basel)

January 2025

Key Laboratory of Modern Agricultural Equipment, Ministry of Agriculture and Rural Affairs, Nanjing Institute of Agricultural Mechanization, Nanjing 210014, China.

To address several challenges, including low efficiency, significant damage, and high costs, associated with the manual harvesting of , in this study, a machine vision-based intelligent harvesting device was designed according to its agronomic characteristics and morphological features. This device mainly comprised a frame, camera, truss-type robotic arm, flexible manipulator, and control system. The FES-YOLOv5s deep learning target detection model was used to accurately identify and locate .

