The two-process theory of face processing: modifications based on two decades of data from infants and adults.

Neurosci Biobehav Rev

Neurocognitive Development Lab, Faculty of Psychology, University of Warsaw, Stawki 5/7, 00-183 Warsaw, Poland.

Published: March 2015

Johnson and Morton (1991. Biology and Cognitive Development: The Case of Face Recognition. Blackwell, Oxford) used Gabriel Horn's work on the filial imprinting model to inspire a two-process theory of the development of face processing in humans. In this paper we review evidence accrued over the past two decades from infants and adults, and from other primates, that informs this two-process model. While work with newborns and infants has been broadly consistent with predictions from the model, further refinements and questions have been raised. With regard to adults, we discuss more recent evidence on the extension of the model to eye contact detection, and to subcortical face processing, reviewing functional imaging and patient studies. We conclude with discussion of outstanding caveats and future directions of research in this field.

DOI: http://dx.doi.org/10.1016/j.neubiorev.2014.10.009


Similar Publications

Emotion processing is an integral part of everyday life. The basic neural circuits involved in emotion perception are becoming clear, though the cognitive processing of emotion remains under investigation. Using stereo-electroencephalography, with its high temporal and spatial resolution, this study aims to decipher the neural pathway responsible for discriminating low-arousal from high-arousal emotions.


The role of dynamic shape cues in the recognition of emotion from naturalistic body motion.

Atten Percept Psychophys

January 2025

Department of Psychology, Rutgers University - New Brunswick, 152 Frelinghuysen Rd, Piscataway, NJ, 08854, USA.

Human observers can often judge emotional or affective states from bodily motion, even in the absence of facial information, but the mechanisms underlying this inference are not completely understood. Important clues come from the literature on "biological motion" using point-light displays (PLDs), which convey human action, and possibly emotion, apparently on the basis of body movements alone. However, most studies have used simplified and often exaggerated displays chosen to convey emotions as clearly as possible.


Neural mechanisms underlying the interactive exchange of facial emotional expressions.

Soc Cogn Affect Neurosci

January 2025

Department of Psychology, Clinical Psychology and Psychotherapy, Regensburg University.

Facial emotional expressions are crucial in face-to-face social interactions, and recent findings have highlighted their interactive nature. However, the underlying neural mechanisms remain unclear. This EEG study investigated whether the interactive exchange of facial expressions modulates socio-emotional processing.


Representation models and processing operators for quantum informational multi-media.

PLoS One

January 2025

College of Information Science and Technology & College of Artificial Intelligence, Nanjing Forestry University, Nanjing, China.

To enhance the efficacy of multimedia quantum processing and diminish processing overhead, an advanced multimedia quantum representation model and quantum video display framework are devised. A range of framework processing operators are also developed, including an image color compensation operator, a bit plane inversion operator, and a frame displacement operator. In addition, to address image security issues, two quantum image operations have been proposed: color transformation operation and pixel blending operation.


Facial expression recognition faces significant challenges due to factors such as face similarity, image quality, and age variation. Although various end-to-end Convolutional Neural Network (CNN) architectures have achieved good classification results on facial expression recognition tasks, these architectures share a common drawback: when extracting expression features from an image, a convolutional kernel can compute correlations only among elements within a localized region. This makes it difficult for the network to capture relationships among all the elements that make up a complete expression.

