Most research on nonverbal emotional vocalizations is based on actor portrayals, but how similar are they to the vocalizations produced spontaneously in everyday life? Perceptual and acoustic differences have been discovered between spontaneous and volitional laughs, but little is known about other emotions. We compared 362 acted vocalizations from seven corpora with 427 authentic vocalizations using acoustic analysis, and 278 vocalizations (139 authentic and 139 acted) were also tested in a forced-choice authenticity detection task (N = 154 listeners). Target emotions were achievement, amusement, anger, disgust, fear, pain, pleasure, and sadness. Listeners distinguished between authentic and acted vocalizations with above-chance accuracy across all emotions (overall accuracy 65%). Accuracy was highest for vocalizations of achievement, anger, fear, and pleasure, which also displayed the largest differences in acoustic characteristics. In contrast, both perceptual and acoustic differences between authentic and acted vocalizations of amusement, disgust, and sadness were relatively small. Acoustic predictors of authenticity included higher and more variable pitch, lower harmonicity, and less regular temporal structure. The existence of perceptual and acoustic differences between authentic and acted vocalizations for all analysed emotions suggests that it may be useful to include spontaneous expressions in datasets for psychological research and affective computing.
Source: http://dx.doi.org/10.1080/17470218.2016.1270976
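As a rough illustration of how acoustic predictors like those named in the abstract (pitch level, pitch variability, harmonicity) can be measured, here is a minimal sketch using the open-source parselmouth wrapper around Praat. The function name, file path, and unvoiced-frame handling are illustrative assumptions, not the authors' actual analysis pipeline.

```python
import numpy as np
import parselmouth  # Python wrapper around Praat's acoustic analyses


def vocalization_features(wav_path):
    """Rough per-file summary: pitch level/variability and harmonicity."""
    snd = parselmouth.Sound(wav_path)
    # Pitch track in Hz; Praat returns 0 for unvoiced frames, so drop them.
    f0 = snd.to_pitch().selected_array["frequency"]
    f0 = f0[f0 > 0]
    # Harmonics-to-noise ratio track in dB; Praat marks unvoiced frames as -200.
    hnr = snd.to_harmonicity().values.flatten()
    hnr = hnr[hnr > -200]
    return {
        "f0_mean_hz": float(np.mean(f0)) if f0.size else float("nan"),
        "f0_sd_hz": float(np.std(f0)) if f0.size else float("nan"),
        "hnr_mean_db": float(np.mean(hnr)) if hnr.size else float("nan"),
    }


# Following the pattern reported in the abstract, higher f0_mean_hz and f0_sd_hz
# together with lower hnr_mean_db would point toward an authentic rather than
# an acted vocalization (a heuristic reading, not a classifier).
# print(vocalization_features("recording.wav"))
```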
Trends Hear
January 2025
Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Cologne, Germany.
Speech-on-speech masking is a common and challenging situation in everyday verbal communication. Segregating competing auditory streams is a prerequisite for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal.
J Neurosci
January 2025
Department of Psychology, Chinese University of Hong Kong, Hong Kong SAR, China
The extraction and analysis of pitch underpin speech and music recognition, sound segregation, and other auditory tasks. Perceptually, pitch can be represented as a helix with two components: height, which increases monotonically with frequency, and chroma, which repeats cyclically with each doubling of frequency. Although the early perceptual and neurophysiological mechanisms for extracting pitch from acoustic signals have been extensively investigated, the equally essential subsequent stages that bridge to high-level auditory cognition remain less well understood.
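As a small illustration of the height/chroma decomposition described in this abstract, the hypothetical sketch below maps a frequency onto helix coordinates; the reference frequency (middle C) and function name are arbitrary choices, not taken from the study.

```python
import math


def pitch_helix(freq_hz, ref_hz=261.63):  # ref_hz: middle C, an arbitrary anchor
    """Decompose a frequency into helix coordinates: height (octaves above the
    reference) and chroma (position within the octave, in [0, 1))."""
    height = math.log2(freq_hz / ref_hz)  # rises monotonically with frequency
    chroma = height % 1.0                 # repeats with every doubling (octave)
    return height, chroma


# 440 Hz and 880 Hz differ by exactly one unit of height but share the same chroma.
# print(pitch_helix(440.0), pitch_helix(880.0))
```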
J Acoust Soc Am
January 2025
Second High School Attached to Beijing Normal University, Beijing 100088, China.
This study investigates the acoustic cues listeners use to differentiate checked syllables and tones from unchecked ones. In Xiapu Min, checked and unchecked syllables and tones differ in f0, glottalization, and duration, whereas these differences are reduced in their sandhi forms. In citation forms, listeners utilize all three cues, relying most on duration.
Med J Islam Repub Iran
October 2024
Plastic and Reconstructive Surgery, Hazrat Fatemeh Hospital, School of Medicine, Iran University of Medical Sciences, Tehran, Iran.
Background: Compensatory errors are a typical component of the articulation disorder identified by speech pathologists in patients with cleft palate (CP). This study aimed to evaluate the effect of a new mixed articulation therapy on the perceptual and acoustic features of these errors.
Methods: A single-case experimental (ABA) design was used in this study.
J Voice
January 2025
Department of Communication Sciences and Disorders, Bowling Green State University, Bowling Green, OH.
Objectives: This study aimed to identify voice instabilities across registration shifts produced by untrained female singers and describe them relative to changes in fundamental frequency, airflow, intensity, inferred adduction, and acoustic spectra.
Study Design: Multisignal descriptive study.
Methods: Five untrained female singers sang up to 30 repetitions of octave scales.