Perceptual and acoustic differences between authentic and acted nonverbal emotional vocalizations.

Q J Exp Psychol (Hove)

Institute of Cognitive Neuroscience, University College London, London, UK.

Published: March 2018

Most research on nonverbal emotional vocalizations is based on actor portrayals, but how similar are these to the vocalizations produced spontaneously in everyday life? Perceptual and acoustic differences have been discovered between spontaneous and volitional laughs, but little is known about other emotions. We compared 362 acted vocalizations from seven corpora with 427 authentic vocalizations using acoustic analysis, and 278 vocalizations (139 authentic and 139 acted) were also tested in a forced-choice authenticity detection task (N = 154 listeners). Target emotions were achievement, amusement, anger, disgust, fear, pain, pleasure, and sadness. Listeners distinguished between authentic and acted vocalizations with above-chance accuracy for all emotions (overall accuracy 65%). Accuracy was highest for vocalizations of achievement, anger, fear, and pleasure, which also showed the largest acoustic differences. In contrast, both perceptual and acoustic differences between authentic and acted vocalizations of amusement, disgust, and sadness were relatively small. Acoustic predictors of authenticity included higher and more variable pitch, lower harmonicity, and less regular temporal structure. The existence of perceptual and acoustic differences between authentic and acted vocalizations for all analysed emotions suggests that it may be useful to include spontaneous expressions in datasets for psychological research and affective computing.
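The acoustic predictors named in the abstract (pitch level, pitch variability, harmonicity) are standard voice measures. Purely as an illustration, here is a minimal Python sketch of how such features might be extracted, assuming the parselmouth wrapper for Praat and a placeholder file name; this is not the authors' analysis pipeline.

    import numpy as np
    import parselmouth  # Python interface to Praat (assumed installed)

    def voice_features(path):
        """Summarize pitch and harmonicity for one vocalization (illustrative)."""
        snd = parselmouth.Sound(path)

        # F0 contour; Praat codes unvoiced frames as 0 Hz, so drop them
        f0 = snd.to_pitch().selected_array['frequency']
        f0 = f0[f0 > 0]

        # Harmonics-to-noise ratio in dB; unvoiced frames are coded as -200
        hnr = snd.to_harmonicity().values
        hnr = hnr[hnr != -200]

        return {
            'mean_f0_hz': float(np.mean(f0)) if f0.size else float('nan'),
            'sd_f0_hz': float(np.std(f0)) if f0.size else float('nan'),
            'mean_hnr_db': float(np.mean(hnr)) if hnr.size else float('nan'),
        }

    # Hypothetical usage; in the study, higher and more variable pitch and
    # lower harmonicity were associated with authentic vocalizations.
    # print(voice_features("vocalization.wav"))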

Source: http://dx.doi.org/10.1080/17470218.2016.1270976

Similar Publications

Assessment of Speech Processing and Listening Effort Associated With Speech-on-Speech Masking Using the Visual World Paradigm and Pupillometry.

Trends Hear

January 2025

Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Cologne, Germany.

Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a prerequisite for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal.

The extraction and analysis of pitch underpin speech and music recognition, sound segregation, and other auditory tasks. Perceptually, pitch can be represented as a helix composed of two factors: height monotonically aligns with frequency, while chroma cyclically repeats at doubled frequencies. Although the early perceptual and neurophysiological mechanisms for extracting pitch from acoustic signals have been extensively investigated, the equally essential subsequent stages that bridge to high-level auditory cognition remain less well understood.

This study investigates the acoustic cues listeners use to differentiate checked syllables and tones from unchecked ones. In Xiapu Min, checked and unchecked syllables and tones differ in f0, glottalization, and duration, whereas these differences are reduced in their sandhi forms. In citation forms, listeners use all three cues but rely most on duration.

Background: Compensatory errors are a common component of the articulation disorders identified by speech pathologists in patients with cleft palate (CP). This study aimed to evaluate the effect of a new mixed articulation therapy on the perceptual and acoustic features of these errors.

Methods: A single-case experimental (ABA) design was used.

Vocal Instabilities in Untrained Female Singers.

J Voice

January 2025

Department of Communication Sciences and Disorders, Bowling Green State University, Bowling Green, OH.

Objectives: This study aimed to identify voice instabilities across registration shifts produced by untrained female singers and describe them relative to changes in fundamental frequency, airflow, intensity, inferred adduction, and acoustic spectra.

Study Design: Multisignal descriptive study.

Methods: Five untrained female singers sang up to 30 repetitions of octave scales.
