There is increasing interest in using social robots to assist older adults in their daily life activities. As social robots are designed to interact with older users, it becomes relevant to study these interactions through the lens of social cognition. Gaze following, the social ability to infer where other people are looking, deteriorates with age. Referential gaze from robots might therefore not be an effective social cue for indicating spatial locations to older users. In this study, we explored the performance of older adults, middle-aged adults, and younger controls in a task assisted by the referential gaze of a Pepper robot, and we examined age-related differences in task performance and in self-reported social perception of the robot. Our main findings show that the robot's referential gaze benefited task performance, although the magnitude of this facilitation was lower for older participants. Moreover, perceived anthropomorphism of the robot varied less as a result of its referential gaze in older adults. This research supports the view that social robots, even with limited gazing capabilities, can be perceived as social entities. It also suggests that robotic social cues, usually validated with young participants, may be less effective signals for older adults.
Supplementary Information: The online version contains supplementary material available at 10.1007/s12369-022-00926-6.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9510350
DOI: http://dx.doi.org/10.1007/s12369-022-00926-6
Behav Res Methods
December 2024
CAP Team, Centre de Recherche en Neurosciences de Lyon - INSERM U1028 - CNRS UMR 5292 - UCBL - UJM, 95 Boulevard Pinel, 69675, Bron, France.
Artificial intelligence techniques offer promising avenues for exploring human body features from videos, yet to date no freely accessible tool has reliably provided holistic, fine-grained behavioral analyses. To address this, we developed a machine learning tool based on a two-level approach: a first, lower level uses computer vision to extract fine-grained and comprehensive behavioral features such as skeleton and facial points, gaze, and action units; a second level applies machine learning classification, coupled with explainability for modularity, to determine which behavioral features are triggered by specific environments. To validate our tool, we filmed 16 participants across six conditions, which varied according to the presence of a person ("Pers"), a sound ("Snd"), or silence ("Rest"), and according to emotional level using self-referential ("Self") and control ("Ctrl") stimuli.
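The two-level architecture described in this abstract can be sketched in miniature. The following is a hypothetical illustration, not the authors' implementation: level 1 (feature extraction) is mocked with simple averages standing in for computer-vision outputs such as gaze and action units, and level 2 uses a toy nearest-centroid classifier standing in for the ML classification stage. All function names, condition labels, and numbers are illustrative assumptions.

```python
# Hypothetical sketch of a two-level behavioral-analysis pipeline.
# Level 1 (feature extraction) is mocked; a real tool would use
# computer vision to produce skeleton/gaze/action-unit signals.
# Level 2 is a toy nearest-centroid classifier, not the paper's model.
from statistics import mean

def extract_features(frame):
    """Level 1: reduce a raw 'frame' (here, a dict of mock signals)
    to a fixed-length feature vector."""
    return [mean(frame["gaze"]), mean(frame["action_units"])]

def fit_centroids(samples, labels):
    """Level 2 training: compute one centroid per condition label."""
    by_label = {}
    for vec, lab in zip(samples, labels):
        by_label.setdefault(lab, []).append(vec)
    return {lab: [mean(col) for col in zip(*vecs)]
            for lab, vecs in by_label.items()}

def classify(centroids, vec):
    """Level 2 inference: return the label of the nearest centroid."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(vec, c))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

# Toy data: two conditions ("Pers" vs "Rest") with different signal levels.
frames = [{"gaze": [0.9, 0.8], "action_units": [0.7, 0.6]},
          {"gaze": [0.1, 0.2], "action_units": [0.1, 0.0]}]
features = [extract_features(f) for f in frames]
centroids = fit_centroids(features, ["Pers", "Rest"])
print(classify(centroids, extract_features(
    {"gaze": [0.85, 0.9], "action_units": [0.65, 0.7]})))  # → Pers
```

The point of the two-level split, as the abstract describes it, is modularity: the feature extractor and the classifier can be swapped independently, and per-feature explainability can report which extracted features drive each condition's classification.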
Cortex
December 2024
Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, USA.
Background: Language is multimodal and situated in rich visual contexts. Language is also incremental, unfolding moment-to-moment in real time, yet few studies have examined how spoken language interacts with gesture and visual context during multimodal language processing. Gesture is a rich communication cue that is integrally related to speech and often depicts concrete referents from the visual world.
Anim Cogn
September 2024
Department of Behavioral & Cognitive Biology, University of Vienna, Vienna, 1030, Austria.
In human infants, the ability to show gaze alternations between an object of interest and another individual is considered fundamental to the development of complex social-cognitive abilities. Here we show that well-socialised dog puppies show gaze alternations in two contexts at an early age, 6-7 weeks.
Schizophr Bull
September 2024
Department of Psychology, Indiana University-Bloomington, Bloomington, IN, USA.
J Sports Sci
July 2024
Amsterdam Movement Sciences and Institute for Brain and Behavior Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, Netherlands.
The majority of a football referee's time is spent assessing open-play situations, yet little is known about how referees search for information during this uninterrupted play. The aim of the current study was to examine the exploratory gaze behaviour of elite and sub-elite football referees in open-play game situations.