Child-Robot Interaction (CRI) has received increasing attention in research and applications. This work proposes a system for emotion recognition in children that records facial images with both visual (RGB: red, green, blue) and infrared thermal imaging (IRTI) cameras. The Viola-Jones algorithm is applied to the color images to detect facial regions of interest (ROIs), which are then transferred to the thermal camera plane by applying a homography matrix obtained through calibration of the camera system. As a novelty, we propose computing an error probability for each ROI located on the thermal images, using a reference frame manually annotated by a trained expert, in order to select the ROI best placed according to the expert's criteria. This selected ROI is then used to relocate the remaining ROIs, increasing their concordance with the manual reference annotations. Afterwards, feature extraction, dimensionality reduction through Principal Component Analysis (PCA), and pattern classification by Linear Discriminant Analysis (LDA) are applied to infer emotions. The results show that our approach to ROI location tracks facial landmarks with significantly lower errors than the traditional Viola-Jones algorithm. These ROIs proved relevant for recognizing five emotions, specifically disgust, fear, happiness, sadness, and surprise, with our PCA- and LDA-based recognition system achieving mean accuracy (ACC) and Kappa values of 85.75% and 81.84%, respectively. As a second stage, the proposed recognition system was trained on a dataset of thermal images collected from 28 typically developing children, in order to infer one of the five basic emotions (disgust, fear, happiness, sadness, and surprise) during child-robot interaction. The results show that our system can be integrated into a social robot to infer child emotions during child-robot interaction.
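The ROI transfer step described in the abstract can be sketched as follows. This is an illustrative NumPy-only sketch, not the authors' implementation: it assumes the face ROI has already been detected on the RGB image (e.g., by Viola-Jones) and that the 3x3 homography `H` has been estimated during calibration of the camera pair. The function name and the axis-aligned `(x, y, w, h)` ROI convention are assumptions for illustration.

```python
import numpy as np

def transfer_roi(H, roi):
    """Map an axis-aligned ROI (x, y, w, h) from the RGB image plane to the
    thermal image plane via a 3x3 homography H, returning the axis-aligned
    bounding box of the four projected corners."""
    x, y, w, h = roi
    corners = np.array([[x, y], [x + w, y], [x, y + h], [x + w, y + h]], float)
    pts = np.hstack([corners, np.ones((4, 1))])   # homogeneous coordinates
    proj = (H @ pts.T).T
    proj = proj[:, :2] / proj[:, 2:3]             # dehomogenize
    x0, y0 = proj.min(axis=0)
    x1, y1 = proj.max(axis=0)
    return (x0, y0, x1 - x0, y1 - y0)
```

With the identity homography the ROI is unchanged, and a pure translation homography shifts it rigidly; in the general calibrated case, taking the bounding box of the projected corners keeps the transferred ROI axis-aligned on the thermal image.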
Source: PMC (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6650968) | DOI (http://dx.doi.org/10.3390/s19132844)
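The PCA-plus-LDA classification stage of the abstract can be sketched end to end with NumPy. This is a hedged illustration, not the paper's implementation: the feature dimensionality, component counts, and the synthetic stand-in data for thermal-ROI feature vectors are all hypothetical, and classification in the LDA subspace is done by nearest class mean for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for thermal-ROI feature vectors: 5 emotion classes,
# 40 samples each, 50-dimensional features (all values hypothetical).
n_classes, n_per, dim = 5, 40, 50
means = rng.normal(0, 5, size=(n_classes, dim))
X = np.vstack([rng.normal(means[c], 1.0, size=(n_per, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per)

# --- PCA: project centered data onto the top principal components via SVD ---
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T                                 # keep 10 components

# --- LDA: maximize between-class over within-class scatter ---
overall = Z.mean(axis=0)
Sw = np.zeros((10, 10))                            # within-class scatter
Sb = np.zeros((10, 10))                            # between-class scatter
for c in range(n_classes):
    Zc = Z[y == c]
    mc = Zc.mean(axis=0)
    Sw += (Zc - mc).T @ (Zc - mc)
    d = (mc - overall)[:, None]
    Sb += len(Zc) * (d @ d.T)

# Discriminant directions: top eigenvectors of Sw^-1 Sb (at most C-1 of them)
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(evals.real)[::-1][:n_classes - 1]
W = np.real(evecs[:, order])

# Classify by nearest class mean in the LDA subspace
proj = Z @ W
class_means = np.array([proj[y == c].mean(axis=0) for c in range(n_classes)])
pred = np.argmin(((proj[:, None, :] - class_means[None]) ** 2).sum(-1), axis=1)
acc = (pred == y).mean()
```

On well-separated synthetic clusters like these the training accuracy is near perfect; the point of the sketch is the order of operations (center, PCA-reduce, fit scatter matrices, project, classify), which matches the pipeline the abstract describes.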
Sensors (Basel)
December 2024
Department of Information Convergence Engineering, Pusan National University, Busan 46241, Republic of Korea.
Dialogue systems must understand children's utterance intentions by considering their unique linguistic characteristics, such as syntactic incompleteness, pronunciation inaccuracies, and creative expressions, to enable natural conversational engagement in child-robot interactions. Because of these distinctive features, even state-of-the-art large language models (LLMs) for language understanding and contextual awareness cannot comprehend children's intent as accurately as humans can. An LLM-based dialogue system should therefore learn how humans understand children's speech in order to improve its intention-reasoning performance in verbal interactions with children.
Front Robot AI
December 2024
Embodied Social Agents Lab (ESAL), Department of Electrical Engineering and Computer Science (EECS), KTH Royal Institute of Technology, Stockholm, Sweden.
Creativity is an important skill that is known to plummet when children start school education, which limits their freedom of expression and their imagination. On the other hand, research has shown that integrating social robots into educational settings has the potential to maximize children's learning outcomes. Therefore, our aim in this work was to investigate how to stimulate children's creativity through child-robot interaction.
JASA Express Lett
November 2024
Department of Electrical and Computer Engineering, University of California Los Angeles, Los Angeles, California 90095, USA.
This paper describes an original dataset of children's speech, collected through the use of JIBO, a social robot. The dataset encompasses recordings from 110 children, aged 4-7 years, who participated in a letter- and digit-identification task and in extended oral discourse tasks requiring explanation skills, totaling 21 h of session data. Spanning a 2-year collection period, the dataset also contains a longitudinal component, with a subset of participants returning for repeat recordings.
IEEE J Transl Eng Health Med
September 2024
Collaborative Robotics and Intelligent Systems (CoRIS) Institute, Oregon State University Corvallis OR 97331 USA.
Children worldwide are becoming increasingly inactive, leading to significant wellness challenges. Initial findings from our research team indicate that robots could provide a more effective approach than other age-appropriate toys for encouraging physical activity in children. However, this past work relied either on interactions with groups of children (making it challenging to isolate the specific factors that influenced activity levels) or on a preliminary version of the present study's results (which centered on a single, more exploratory method for assessing child movement).
Front Robot AI
July 2024
Service Design Major, Graduate School of Industrial Arts, University of Hongik, Seoul, Republic of Korea.
This home-robot-based child activity service aims to cultivate children's social-emotional skills. The design theme was developed through interviews with child development experts and parents. The service comprises 50 play activities and 70 conversations.