Impairments in listening tasks that require subjects to match affective-prosodic speech utterances with appropriate facial expressions have been reported after both left- and right-hemisphere damage. In the present study, both left- and right-hemisphere-damaged patients were found to perform poorly compared to a nondamaged control group on a typical affective-prosodic listening task using four emotional types (happy, sad, angry, surprised). To determine if the two brain-damaged groups were exhibiting a similar pattern of performance with respect to their use of acoustic cues, the 16 stimulus utterances were analyzed acoustically, and the results were incorporated into an analysis of the errors made by the patients. A discriminant function analysis using acoustic cues alone indicated that fundamental frequency (F0) variability, mean F0, and syllable durations most successfully distinguished the four emotional sentence types. A similar analysis that incorporated the misclassifications made by the patients revealed that the left-hemisphere-damaged and right-hemisphere-damaged groups were utilizing these acoustic cues differently. The results of this and other studies suggest that rather than being lateralized to a single cerebral hemisphere in a fashion analogous to language, prosodic processes are made up of multiple skills and functions distributed across cerebral systems.
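As a rough illustration of the kind of discriminant function analysis described in the abstract, the sketch below fits a linear discriminant model to hypothetical per-utterance acoustic measures (F0 variability, mean F0, mean syllable duration). The feature values and the scikit-learn workflow are illustrative assumptions, not the study's actual data or procedure.

```python
# Minimal sketch of a discriminant function analysis on acoustic cues.
# All feature values below are hypothetical, for illustration only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical per-utterance acoustic measures:
# [F0 variability (Hz, SD), mean F0 (Hz), mean syllable duration (ms)]
# (the study used 16 utterances, four per emotion; two per emotion shown here)
X = np.array([
    [48.0, 215.0, 185.0], [52.0, 222.0, 178.0],   # "happy" utterances
    [14.0, 148.0, 265.0], [17.0, 152.0, 255.0],   # "sad"
    [36.0, 185.0, 165.0], [40.0, 192.0, 172.0],   # "angry"
    [62.0, 235.0, 205.0], [58.0, 228.0, 198.0],   # "surprised"
])
y = np.array(["happy", "happy", "sad", "sad",
              "angry", "angry", "surprised", "surprised"])

# Fit discriminant functions on the acoustic cues alone and check how well
# they separate the four emotional sentence types.
lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print("training accuracy:", lda.score(X, y))
print("predicted emotion:", lda.predict([[20.0, 150.0, 250.0]]))  # a new utterance
```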

Source: http://dx.doi.org/10.1044/jshr.3505.963

Similar Publications

Beta oscillations predict the envelope sharpness in a rhythmic beat sequence.
Sci Rep, January 2025. RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Forskningsveien 3A, Oslo, 0373, Norway.

Periodic sensory inputs entrain oscillatory brain activity, reflecting a neural mechanism that might be fundamental to temporal prediction and perception. Most environmental rhythms and patterns in human behavior, such as walking, dancing, and speech, do not, however, display strict isochrony but are instead quasi-periodic. Research has shown that neural tracking of speech is driven by modulations of the amplitude envelope, especially via sharp acoustic edges, which serve as prominent temporal landmarks.
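A minimal sketch of what "sharp acoustic edges" in an amplitude envelope can mean, assuming a Hilbert-envelope approach on a synthetic beat signal (not necessarily the authors' pipeline): extract the envelope, smooth it, and mark the points where it rises most steeply.

```python
# Illustrative only: locate sharp acoustic edges as peaks in the rate of
# change of a smoothed amplitude envelope.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, find_peaks

fs = 16000                                    # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic quasi-periodic "beat" signal: amplitude-modulated tone bursts
signal = np.sin(2 * np.pi * 440 * t) * (np.sin(2 * np.pi * 2 * t) ** 8)

# Amplitude envelope via the analytic signal, then low-pass smoothed (< 30 Hz)
envelope = np.abs(hilbert(signal))
b, a = butter(4, 30 / (fs / 2), btype="low")
envelope = filtfilt(b, a, envelope)

# "Sharp edges" = local maxima of the envelope's first derivative
edge_strength = np.gradient(envelope, 1.0 / fs)
peaks, _ = find_peaks(edge_strength, height=edge_strength.max() * 0.5)
print("edge times (s):", t[peaks])
```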

Integrating visual features has been proven effective for deep learning-based speech quality enhancement, particularly in highly noisy environments. However, these models may suffer from redundant information, resulting in performance deterioration when the signal-to-noise ratio (SNR) is relatively high. Real-world noisy scenarios typically exhibit widely varying noise levels.

Social vocalizations contain cues that reflect the motivational state of a vocalizing animal. Once perceived, these cues may in turn affect the internal state and behavioral responses of listening animals. Using the CBA/CaJ mouse model of acoustic communication, this study examined acoustic cues that signal intensity in male-female interactions, then compared behavioral responses to intense mating vocal sequences with those from another intense behavioral context, restraint.

Perceptual learning of modulation filtered speech.
J Exp Psychol Hum Percept Perform, January 2025. School of Psychology, University of Sussex.

Human listeners have a remarkable capacity to adapt to severe distortions of the speech signal. Previous work indicates that perceptual learning of degraded speech reflects changes to sublexical representations, though the precise format of these representations has not yet been established. Inspired by the neurophysiology of auditory cortex, we hypothesized that perceptual learning involves changes to perceptual representations that are tuned to acoustic modulations of the speech signal.
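As a simplified, single-band illustration of what modulation filtering of speech can involve (actual stimuli are typically built with a multi-band filterbank; the function name and cutoff below are assumptions), the sketch low-pass filters a signal's temporal envelope and reimposes it on the temporal fine structure.

```python
# Illustrative single-band "modulation filtering": keep only slow amplitude
# modulations of a signal while preserving its fine structure.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def modulation_lowpass(x, fs, cutoff_hz=8.0):
    """Retain amplitude modulations below cutoff_hz; reimpose on fine structure."""
    analytic = hilbert(x)
    envelope = np.abs(analytic)                  # temporal envelope
    fine = np.cos(np.angle(analytic))            # temporal fine structure
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    slow_env = np.maximum(filtfilt(b, a, envelope), 0.0)
    return slow_env * fine

fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 300 * t) * (1 + 0.8 * np.sin(2 * np.pi * 20 * t))
y = modulation_lowpass(x, fs)                    # 20 Hz modulation largely removed
```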

While animals readily adjust their behavior to adapt to relevant changes in the environment, the neural pathways enabling these changes remain largely unknown. Here, using multiphoton imaging, we investigate whether feedback from the piriform cortex to the olfactory bulb supports such behavioral flexibility. To this end, we engage head-fixed male mice in a multimodal rule-reversal task guided by olfactory and auditory cues.
