Publications by authors named "Marian S Bartlett"

Facial actions are key elements of non-verbal behavior. Perceivers' reactions to others' facial expressions often represent a match or mirroring of the expression observed.

Objective: To compare clinical rating scales of blepharospasm severity with involuntary eye closures measured automatically from patient videos using contemporary facial expression software.

Methods: We evaluated video recordings of a standardized clinical examination from 50 patients with blepharospasm in the Dystonia Coalition's Natural History and Biorepository study. Eye closures were measured on a frame-by-frame basis with software known as the Computer Expression Recognition Toolbox (CERT).
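
To make the frame-by-frame measurement concrete, here is a minimal sketch of how per-frame eye-closure scores (such as CERT's eye-closure output) could be summarized; the threshold, frame rate, and synthetic scores are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def eye_closure_stats(closure_scores, threshold=0.0, fps=30.0):
    """Summarize eye closures from per-frame closure scores.

    threshold and fps are illustrative assumptions.
    """
    closed = np.asarray(closure_scores) > threshold  # frames scored as closed
    onsets = np.diff(closed.astype(int)) == 1        # rising edges = new closures
    n_events = int(onsets.sum()) + int(closed[0])    # count runs of closed frames
    minutes = len(closed) / fps / 60.0
    return {"fraction_closed": float(closed.mean()),
            "closures_per_minute": n_events / minutes}

# Toy usage: 30 s of synthetic frame scores at 30 fps.
scores = np.sin(np.linspace(0, 20, 900))
print(eye_closure_stats(scores))
```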

The current study used computer vision technology to examine the nonverbal facial expressions of children (6-11 years old) telling antisocial and prosocial lies. Children in the antisocial lying group completed a temptation-resistance paradigm in which they were asked not to peek at a gift being wrapped for them. All children peeked at the gift and subsequently lied about their behavior.

Background: Current pain assessment methods in youth are suboptimal and vulnerable to bias and underrecognition of clinical pain. Facial expressions are a sensitive, specific biomarker of the presence and severity of pain, and computer vision (CV) and machine-learning (ML) techniques enable reliable, valid measurement of pain-related facial expressions from video. We developed and evaluated a CVML approach to measure pain-related facial expressions for automated pain assessment in youth.
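
A minimal sketch of the general CVML recipe, assuming each video has been reduced to a fixed-length vector of facial-expression features (e.g., per-action-unit summary statistics) with a pain/no-pain label; the random features, labels, and logistic-regression model below are placeholders, not the paper's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))    # placeholder: e.g., mean/max of AU channels per video
y = rng.integers(0, 2, size=200)  # placeholder pain / no-pain labels
clf = LogisticRegression(max_iter=1000)
print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```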

Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges for automatic facial expression recognition (AFER) research. Previous pain vs. no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence as a whole, but the presence or absence of the target expression in a given frame is unknown, and (2) the time point and duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) in which each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence-level ground truth.
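
The bag-of-segments idea can be sketched in a few lines: each video is a bag of segment feature vectors carrying a single sequence-level label, and the bag is scored by max-pooling per-segment scores, which also localizes the most expression-like segment. The linear scorer and random data here are illustrative assumptions, not the MS-MIL implementation.

```python
import numpy as np

def bag_score(segments, w, b=0.0):
    """Score a bag as the max over per-segment linear scores."""
    scores = segments @ w + b                  # one score per segment
    return scores.max(), int(scores.argmax())  # bag score, peak segment index

rng = np.random.default_rng(1)
bag = rng.normal(size=(12, 8))  # a video as 12 segments with 8-D features
w = rng.normal(size=8)          # placeholder learned weights
score, peak = bag_score(bag, w)
print(f"bag score {score:.2f}, peak segment {peak}")
```

Max-pooling means a bag is labeled positive as soon as one segment scores highly, which matches weak labels that only say whether pain occurred somewhere in the sequence.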

The spontaneous mimicry of others' emotional facial expressions constitutes a rudimentary form of empathy and facilitates social understanding. Here, we show that human participants spontaneously match facial expressions of an android physically present in the room with them. This mimicry occurs even though these participants find the android unsettling and are fully aware that it lacks intentionality.

Children with autism spectrum disorder (ASD) show deficits in their ability to produce facial expressions. In this study, a group of children with ASD and IQ-matched, typically developing (TD) children were trained to produce "happy" and "angry" expressions with the FaceMaze computer game. FaceMaze uses an automated computer recognition system that analyzes the child's facial expression in real time.
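
The real-time loop behind such a game can be sketched as follows; read_expression_scores is a hypothetical stand-in for the recognition system, and the threshold and polling rate are assumptions for illustration.

```python
import random
import time

def read_expression_scores():
    """Hypothetical stand-in for a real-time expression recognizer."""
    return {"happy": random.random(), "angry": random.random()}

def play_round(target="happy", threshold=0.8, frames=100):
    held = 0
    for _ in range(frames):
        if read_expression_scores()[target] > threshold:
            held += 1    # credit frames where the target expression is produced
        time.sleep(0.01) # ~100 Hz polling in this toy loop
    return held

print(play_round())
```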

In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1-3]. Two motor pathways control facial movement [4-7]: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced.

Several lines of evidence suggest that the image statistics of the environment shape visual abilities. To date, the image statistics of natural scenes and faces have been well characterized using Fourier analysis. We employed Fourier analysis to characterize images of signs in American Sign Language (ASL).
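
As a concrete illustration, an image's amplitude spectrum and its radial average can be computed with a discrete Fourier transform; the random image below is a placeholder for an actual sign image.

```python
import numpy as np

image = np.random.default_rng(2).random((256, 256))     # placeholder grayscale image
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))  # centered amplitude spectrum

# Radially average amplitude as a function of spatial frequency.
yy, xx = np.indices(spectrum.shape)
r = np.hypot(yy - 128, xx - 128).astype(int)
radial_mean = np.bincount(r.ravel(), weights=spectrum.ravel()) / np.bincount(r.ravel())
print(radial_mean[:10])
```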

The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts.
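
To make "component actions" concrete, a single FACS event can be represented as a set of action units with A-E intensity grades (the AU numbers and grades follow published FACS conventions; the data structure itself is an illustrative assumption):

```python
from dataclasses import dataclass

@dataclass
class ActionUnit:
    au: int         # FACS action unit number, e.g. 12 = lip corner puller
    intensity: str  # intensity grade A (trace) through E (maximum)

# A Duchenne smile coded as cheek raiser (AU 6) plus lip corner puller (AU 12).
duchenne_smile = [ActionUnit(6, "C"), ActionUnit(12, "D")]
print(duchenne_smile)
```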
