Aim: Due to the COVID-19 pandemic and technological innovation, robots are playing an increasingly important role in nursing services. While previous studies have investigated nurses' negative attitudes towards robots, we lack an understanding of nurses' preferences regarding robot characteristics. Our aim was to explore how key robot features compare when weighed against one another.
Methods: A cross-sectional research design based on a conjoint analysis approach was used. The robot dimensions tested were: (1) communication; (2) look; (3) safety; (4) self-learning ability; and (5) interactive behaviour. Participants were asked to rank robot profile cards from most to least preferred.
Results: In order of importance, the robot's ability to learn ranked first, followed by behaviour, look, operating safety and communication. The most preferred robot combination was 'robot responds to commands only, looks like a machine, never misses target, runs programme only and behaves friendly'.
Conclusions: Robot self-learning capacity was least favoured by nurses, suggesting a potential fear of robots taking over core nursing competencies.
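For readers unfamiliar with conjoint analysis, the sketch below illustrates how part-worth utilities and attribute importances of the kind reported in Results can be estimated from ranked profile cards. The profile design, level coding, ranks and scikit-learn pipeline are hypothetical illustrations, not the study's data or analysis.

```python
# Illustrative sketch of a ranking-based conjoint analysis; the profile design,
# level coding and ranks are hypothetical, not the study's data.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Five binary attributes, coded 0/1 (0 = e.g. "responds to commands only",
# 1 = the alternative level). Eight hypothetical profile cards from a
# 2^(5-2) fractional factorial design.
attrs = ["communication", "look", "safety", "self_learning", "behaviour"]
profiles = pd.DataFrame([
    [0, 0, 0, 1, 1],
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 1],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 1, 1, 1, 1],
], columns=attrs)

# One respondent's ranking of the eight cards (1 = most preferred).
ranks = pd.Series([6, 1, 4, 7, 3, 5, 2, 8])

# Reverse ranks into preference scores and regress them on the attribute codes;
# each coefficient is the part-worth of switching that attribute to level 1.
scores = ranks.max() + 1 - ranks
model = LinearRegression().fit(profiles, scores)
part_worths = pd.Series(model.coef_, index=attrs)

# Relative importance = an attribute's part-worth range / sum of all ranges
# (with two levels per attribute the range is just the absolute part-worth).
importance = part_worths.abs() / part_worths.abs().sum()
print(part_worths.round(2))
print(importance.sort_values(ascending=False).round(2))
```

In a full analysis the part-worths would typically be estimated per respondent and the derived importances averaged before comparing attributes as in the Results above.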
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9748045 | PMC |
| http://dx.doi.org/10.1002/nop2.1282 | DOI Listing |
Sci Rep
January 2025
Department of Electrical Power, Adama Science and Technology University, Adama, 1888, Ethiopia.
Although the Transformer architecture has established itself as the standard for natural language processing tasks, its applications in computer vision remain limited. In vision, attention is either used in conjunction with convolutional networks or substituted for individual convolutional components while the overall network design is preserved. Differences between the two domains, such as the large variation in the scale of visual entities and the much finer granularity of pixels in images compared with words in text, make it difficult to transfer the Transformer from language to vision.
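As a concrete illustration of the granularity issue, the sketch below (PyTorch) shows the common way an image is split into patch tokens that a Transformer can attend over; it is not the architecture of the cited article, and the patch size, embedding dimension and head count are arbitrary choices.

```python
# Minimal, generic patch-embedding + self-attention sketch; illustrative only.
import torch
import torch.nn as nn

class PatchAttention(nn.Module):
    def __init__(self, patch_size=16, in_chans=3, dim=96, heads=3):
        super().__init__()
        # Non-overlapping patch embedding: a strided convolution maps each
        # patch_size x patch_size patch to one token of dimension `dim`.
        self.embed = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size)
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                           # x: (B, 3, H, W)
        tokens = self.embed(x)                      # (B, dim, H/ps, W/ps)
        tokens = tokens.flatten(2).transpose(1, 2)  # (B, N, dim), N = number of patches
        tokens = self.norm(tokens)
        out, _ = self.attn(tokens, tokens, tokens)  # global self-attention over patches
        return out

x = torch.randn(1, 3, 224, 224)
print(PatchAttention()(x).shape)                    # torch.Size([1, 196, 96])
```

Even with coarse 16x16 patches a 224x224 image already yields 196 tokens, and the cost of global self-attention grows quadratically with the token count, which is one reason pixel-level granularity is harder to handle than word-level granularity.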
Int J Med Inform
January 2025
School of Computer Science and Engineering, Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan, PR China. Electronic address:
Background: In routine breast cancer diagnosis, precise discrimination between benign and malignant breast masses is of utmost importance. Few prior investigations have concurrently explored the integration of imaging histology features, deep learning features, and clinical parameters. The primary objective of this retrospective study was to develop a multimodal feature fusion model for predicting breast tumor malignancy from ultrasound images.
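A minimal sketch of concatenation-based ("late") feature fusion for this kind of benign/malignant classification is shown below; the synthetic features, labels, dimensions and logistic-regression classifier are placeholders and do not reflect the cited study's model.

```python
# Illustrative late-fusion sketch with synthetic data; not the study's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
radiomics = rng.normal(size=(n, 30))    # handcrafted imaging features
deep_feats = rng.normal(size=(n, 128))  # deep features, e.g. from a CNN on ultrasound
clinical = rng.normal(size=(n, 5))      # clinical parameters (age, lesion size, ...)
y = rng.integers(0, 2, size=n)          # 0 = benign, 1 = malignant (synthetic labels)

# Fuse by concatenating the three feature blocks into one vector per patient.
X = np.concatenate([radiomics, deep_feats, clinical], axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

Concatenation is only one possible fusion strategy; the point of the sketch is simply that the three feature blocks end up in a single vector per patient before classification.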
Polymers (Basel)
January 2025
Department of Mechanical Engineering, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia.
Metamaterials are pushing the limits of traditional materials and represent a fascinating frontier of scientific innovation. Mechanical metamaterials (MMs) are a category of metamaterials that display properties and performance that cannot be realized in conventional materials. Exploring their mechanical properties and various aspects of vibration and damping control is becoming a crucial research area.
Sensors (Basel)
January 2025
The 54th Research Institute, China Electronics Technology Group Corporation, College of Signal and Information Processing, Shijiazhuang 050081, China.
Multi-sensor fusion, such as LiDAR- and camera-based 3D object detection, is a key technology in autonomous driving and robotics. However, traditional 3D detection models are limited to recognizing predefined categories and struggle with unknown or novel objects. Given the complexity of real-world environments, research into open-vocabulary 3D object detection is essential.
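The geometric core of LiDAR-camera fusion is projecting 3D points into the image so that image-derived information can be attached to them. The sketch below illustrates that step with a hypothetical pinhole intrinsic matrix and identity extrinsics; it is not the cited article's detection pipeline.

```python
# Project LiDAR points into an image with hypothetical calibration; illustrative only.
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K):
    """points: (N, 3) LiDAR xyz; T_cam_lidar: (4, 4) LiDAR->camera transform;
    K: (3, 3) camera intrinsics. Returns (M, 2) pixel coordinates and a mask."""
    pts_h = np.concatenate([points, np.ones((points.shape[0], 1))], axis=1)
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]   # points in the camera frame
    in_front = cam[:, 2] > 0                 # keep only points in front of the camera
    uvw = (K @ cam[in_front].T).T
    uv = uvw[:, :2] / uvw[:, 2:3]            # perspective division -> pixel coordinates
    return uv, in_front

# Hypothetical calibration: identity extrinsics and a simple pinhole intrinsic matrix.
K = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])
points = np.random.rand(1000, 3) * [40, 20, 10] - [0, 10, 0]  # synthetic point cloud
uv, mask = project_lidar_to_image(points, np.eye(4), K)
print(uv.shape)
```

Once each point has a pixel coordinate, image-space information can be attached to the 3D points; the projection itself is the same regardless of the detection method built on top of it.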
Sensors (Basel)
January 2025
Key Laboratory of Modern Agricultural Equipment, Ministry of Agriculture and Rural Affairs, Nanjing Institute of Agricultural Mechanization, Nanjing 210014, China.
To address several challenges associated with the manual harvesting of , including low efficiency, significant damage, and high costs, a machine vision-based intelligent harvesting device was designed in this study according to its agronomic characteristics and morphological features. The device mainly comprises a frame, a camera, a truss-type robotic arm, a flexible manipulator, and a control system. The FES-YOLOv5s deep learning target detection model was used to accurately identify and locate .
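For illustration only, the snippet below runs a stock YOLOv5s model via torch.hub as a stand-in for the study's custom FES-YOLOv5s (whose weights are not public) and extracts box centres that a harvesting controller could map to arm coordinates; the image path is hypothetical.

```python
# Detection step only, with stock YOLOv5s as a stand-in for FES-YOLOv5s.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
results = model("frame_from_harvester_camera.jpg")  # hypothetical camera frame

# Each detection row is [x1, y1, x2, y2, confidence, class]; the box centre in
# pixels would then be converted to a robotic-arm target via camera calibration.
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    print(f"detected class {int(cls)} at pixel ({cx:.0f}, {cy:.0f}), conf={conf:.2f}")
```

Mapping the pixel centre to an arm pose would additionally require the camera-to-arm calibration, which is outside this sketch.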