Age-Related Differences in the Perception of Robotic Referential Gaze in Human-Robot Interaction.

Int J Soc Robot

Centre for Applied Autonomous Sensor Systems, Örebro University, Fakultetsgatan 1, Örebro, 702 81 Sweden.

Published: September 2022

Abstract: There is increased interest in using social robots to assist older adults in their daily life activities. As social robots are designed to interact with older users, it becomes relevant to study these interactions through the lens of social cognition. Gaze following, the social ability to infer where other people are looking, deteriorates with age. Therefore, referential gaze from robots might not be an effective social cue for indicating spatial locations to older users. In this study, we explored the performance of older adults, middle-aged adults, and younger controls in a task assisted by the referential gaze of a Pepper robot. We examined age-related differences in task performance and in self-reported social perception of the robot. Our main findings show that referential gaze from a robot benefited task performance, although the magnitude of this facilitation was lower for older participants. Moreover, perceived anthropomorphism of the robot varied less as a result of its referential gaze in older adults. This research supports the view that social robots, even if limited in their gazing capabilities, can be effectively perceived as social entities. It also suggests that robotic social cues, usually validated with young participants, might be less effective signals for older adults.

Supplementary Information: The online version contains supplementary material available at 10.1007/s12369-022-00926-6.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9510350
DOI: http://dx.doi.org/10.1007/s12369-022-00926-6

Publication Analysis

Top Keywords

referential gaze (20); social robots (12); older adults (12); social (9); age-related differences (8); older (8); older users (8); task performance (8); gaze (6); referential (5)

Similar Publications

Artificial intelligence techniques offer promising avenues for exploring human body features from videos, yet to date no freely accessible tool has reliably provided holistic and fine-grained behavioral analyses. To address this, we developed a machine learning tool based on a two-level approach: a first, lower level uses computer vision to extract fine-grained and comprehensive behavioral features such as skeleton and facial points, gaze, and action units; a second level applies machine learning classification, coupled with explainability and providing modularity, to determine which behavioral features are triggered by specific environments. To validate our tool, we filmed 16 participants across six conditions, varying according to the presence of a person ("Pers"), a sound ("Snd"), or silence ("Rest"), and according to emotional level using self-referential ("Self") and control ("Ctrl") stimuli.
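The second level of the pipeline described above can be sketched as a classifier with an explainability signal on top. The sketch below is a minimal illustration, not the authors' implementation: it uses synthetic numbers in place of the computer-vision features from level one (the feature names and data are assumptions), and scikit-learn feature importances as a stand-in for the explainability component.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Level 1 stand-in: per-frame behavioral features that a computer-vision
# stage would extract (e.g., gaze angles, action-unit intensities).
# Here they are synthetic for illustration only.
n_frames, n_features = 600, 8
X = rng.normal(size=(n_frames, n_features))

# Synthetic condition labels: pretend feature 0 is the one modulated
# by the experimental environment.
y = (X[:, 0] + 0.5 * rng.normal(size=n_frames) > 0).astype(int)

# Level 2: classification plus a simple explainability signal
# (feature importances) indicating which features a condition triggers.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = clf.feature_importances_
top_feature = int(np.argmax(importances))
print(top_feature)  # in this synthetic setup, feature 0 dominates
```

A real deployment would replace the synthetic matrix with per-frame features from a pose/face-tracking stage and could swap the importance measure for a richer explainability method.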


Background: Language is multimodal and situated in rich visual contexts. Language is also incremental, unfolding moment-to-moment in real time, yet few studies have examined how spoken language interacts with gesture and visual context during multimodal language processing. Gesture is a rich communication cue that is integrally related to speech and often depicts concrete referents from the visual world.


In human infants, the ability to show gaze alternations between an object of interest and another individual is considered fundamental to the development of complex social-cognitive abilities. Here we show that well-socialised dog puppies show gaze alternations in two contexts at an early age, 6-7 weeks. Thus, 69.

Article Synopsis
  • Individuals with schizophrenia (SZ) and bipolar disorder (BD) experience disruptions in how they perceive eye contact, impacting their symptoms and social interactions, but the underlying reasons for these issues are not well understood.
  • The study analyzed the gaze perception behaviors of SZ patients, BD patients, and healthy controls using mathematical modeling to highlight key cognitive processes, revealing that SZ and BD showed less efficient evidence accumulation and unique perceptual biases.
  • The results suggest that impaired gaze perception could be linked to cognitive functioning and symptoms, especially in SZ where a cautious response strategy may affect reaction times, indicating that biases relate to severity of hallucinations and delusions.

How do referees visually explore? An in-situ examination of the referential head and eye movements of football referees.

J Sports Sci

July 2024

Amsterdam Movement Sciences and Institute for Brain and Behavior Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, Netherlands.

The majority of a football referee's time is spent assessing open-play situations, yet little is known about how referees search for information during this uninterrupted play. The aim of the current study was to examine the exploratory gaze behaviour of elite and sub-elite football referees in open-play game situations. Four elite (i.

