Stimulus memory (e.g., encoding and recognition) is known to be influenced by emotion. With regard to face memory, event-related potential (ERP) studies have shown that the encoding of emotional faces is influenced by the emotion of a concomitant context when the contextual stimuli are presented in the visual modality. Behavioral studies have also investigated the effect of contextual emotion on the subsequent recognition of neutral faces. However, to our knowledge, no study has examined context effects on face encoding and recognition when the contextual stimuli are presented in another sensory modality (e.g., the auditory modality), and the neural mechanisms underlying context effects on the recognition of emotional faces remain largely unknown. The present study therefore used vocal expressions as contexts to investigate whether contextual emotion influences ERP responses during face encoding and recognition. Participants were asked to memorize angry and neutral faces that were presented together with either angry or neutral vocal expressions. Subsequently, they performed an old/new recognition task in which only faces were presented. In the encoding phase, angry vocal expressions, compared with neutral ones, led to smaller P1 and N170 responses to both angry and neutral faces; for angry faces, however, late positive potential (LPP) responses were larger in the angry-voice condition. In the recognition phase, N170 responses were larger for neutral-encoded faces that had been encoded with angry rather than neutral vocal expressions, and an angry vocal expression at encoding increased FN400 and LPP responses to both neutral-encoded and angry-encoded faces when the test faces showed the encoded expression. These findings indicate that the contextual emotion of vocal expressions influences neural responses during face encoding and subsequent recognition.
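To make the reported design concrete, the sketch below shows how mean ERP amplitudes for the components named in the abstract (P1, N170, FN400, LPP) could be extracted for the 2 (face emotion) x 2 (voice context) encoding conditions using MNE-Python. This is a minimal illustration, not the authors' pipeline: the file name, event codes, electrode sites, and time windows are assumptions chosen only for demonstration and may differ from those used in the study.

```python
# Minimal, illustrative sketch (assumed file name, event codes, channels, and windows).
import mne

raw = mne.io.read_raw_fif("sub-01_task-encoding_eeg.fif", preload=True)  # hypothetical recording
raw.filter(l_freq=0.1, h_freq=30.0)  # typical ERP band-pass; the paper's filter settings may differ

events = mne.find_events(raw)
# Assumed event coding: face emotion crossed with voice context during encoding.
event_id = {
    "angry_face/angry_voice": 11, "angry_face/neutral_voice": 12,
    "neutral_face/angry_voice": 21, "neutral_face/neutral_voice": 22,
}
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=1.0,
                    baseline=(-0.2, 0.0), preload=True)

# Illustrative component windows (in seconds) and electrode sites.
components = {
    "P1":    (0.08, 0.13, ["O1", "O2", "Oz"]),
    "N170":  (0.14, 0.20, ["P7", "P8"]),
    "FN400": (0.30, 0.50, ["Fz", "FCz", "Cz"]),
    "LPP":   (0.50, 0.80, ["Pz", "CPz", "Cz"]),
}

for name, (t_start, t_end, picks) in components.items():
    for cond in event_id:
        evoked = epochs[cond].average(picks=picks)
        # Mean amplitude over the component window, converted to microvolts.
        amp = evoked.copy().crop(t_start, t_end).data.mean() * 1e6
        print(f"{name:6s} {cond:28s} {amp:6.2f} uV")
```

A condition-by-component table of mean amplitudes like the one printed here is the kind of summary that would then feed the encoding-phase and recognition-phase comparisons described above.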

Source: http://dx.doi.org/10.1016/j.neuropsychologia.2019.107147
