Advances in computer vision and machine learning algorithms have enabled researchers to extract facial expression data from face video recordings with greater ease and speed than standard manual coding methods, dramatically accelerating the pace of facial expression research. However, recording facial expressions in laboratory settings remains challenging: conventional video recording setups using webcams, tripod-mounted cameras, or pan-tilt-zoom cameras force compromises among cost, reliability, and flexibility. As an alternative, we propose a mobile head-mounted camera that can be easily constructed from our open-source instructions and blueprints at a fraction of the cost of conventional setups. The head-mounted camera framework is supported by the open-source Python toolbox FaceSync, which provides an automated method for synchronizing videos. We present four proof-of-concept studies demonstrating the benefits of this recording system for reliably measuring and analyzing facial expressions in diverse experimental setups, including group interaction experiments.
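
For readers curious about the synchronization step, the sketch below illustrates the general audio cross-correlation approach on which automated video alignment of this kind relies. It is a minimal illustration under stated assumptions, not the FaceSync toolbox's actual API: the function name, file names, and the assumption that each camera's audio has already been extracted to WAV (e.g., with ffmpeg) at a common sample rate are all hypothetical.

    # Minimal sketch of audio-based video synchronization via cross-correlation.
    # Illustrative only: function and file names are assumptions, not the
    # FaceSync API. Assumes each camera's audio track was already extracted
    # to a WAV file (e.g., with ffmpeg) at the same sample rate.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    def find_offset_seconds(reference_wav, target_wav):
        """Return the offset (in seconds) at which shared audio events occur
        in the target recording relative to the reference recording."""
        rate_ref, ref = wavfile.read(reference_wav)
        rate_tgt, tgt = wavfile.read(target_wav)
        assert rate_ref == rate_tgt, "Recordings must share a sample rate."

        # Use one (mono) channel and remove the DC offset from each signal.
        ref = (ref[:, 0] if ref.ndim > 1 else ref).astype(float)
        tgt = (tgt[:, 0] if tgt.ndim > 1 else tgt).astype(float)
        ref -= ref.mean()
        tgt -= tgt.mean()

        # Cross-correlate via FFT; the peak location gives the lag in samples.
        corr = fftconvolve(tgt, ref[::-1], mode="full")
        lag_samples = int(np.argmax(corr)) - (len(ref) - 1)
        return lag_samples / rate_ref

    # A positive offset means the same event appears later in camera B's
    # recording, so trimming that many seconds from its start aligns the videos.
    offset = find_offset_seconds("camera_A.wav", "camera_B.wav")
    print(f"Shared events occur {offset:.3f} s later in camera_B.wav")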

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7059847
DOI: http://dx.doi.org/10.12688/f1000research.18187.1
