Being able to accurately perceive the emotion expressed by others' facial or verbal expressions is critical to successful social interaction. However, only a few studies have examined multimodal interactions in speech emotion, and findings on speech emotion perception remain inconsistent. It is still unclear how the human brain perceives speech emotion of different valences from multimodal stimuli. In this paper, we conducted a functional magnetic resonance imaging (fMRI) study with an event-related design, using dynamic facial expressions and emotional speech to convey different emotions, in order to explore the perceptual mechanism of speech emotion in the audio-visual modality. Representational similarity analysis (RSA), whole-brain searchlight analysis, and a conjunction analysis of emotion were used to characterize the representation of speech emotion from different perspectives. Notably, we propose a weighted RSA approach that evaluates the contribution of each candidate model to the best-fitting model, providing a complement to standard RSA. The weighted RSA results showed that the fitted models outperformed all candidate models and that the weights could be used to explain the representations of the regions of interest (ROIs). The bilateral amygdala was associated with the processing of both positive and negative, but not neutral, emotion. The results also indicate that the left posterior insula and the left anterior superior temporal gyrus (STG) play important roles in the perception of multimodal speech emotion.
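As a rough illustration of the weighted-RSA idea (not the authors' implementation), the sketch below fits an ROI's neural representational dissimilarity matrix (RDM) as a non-negative weighted combination of candidate model RDMs, so the fitted weights index each candidate's contribution. The non-negative least-squares fit, the Spearman evaluation, and all data are illustrative assumptions.

```python
# Minimal weighted-RSA sketch: model a neural RDM as a non-negative weighted
# sum of candidate model RDMs. Assumed approach, not the paper's actual code.
import numpy as np
from scipy.optimize import nnls
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

def weighted_rsa(neural_rdm, candidate_rdms):
    """Fit an (n_cond x n_cond) neural RDM with non-negative candidate weights.

    Returns the weight vector and the Spearman correlation between the
    fitted (weighted-sum) RDM and the neural RDM.
    """
    # Work on the vectorized upper triangle of each RDM.
    y = squareform(neural_rdm, checks=False)
    X = np.column_stack([squareform(m, checks=False) for m in candidate_rdms])
    weights, _ = nnls(X, y)          # non-negative least-squares fit
    fitted = X @ weights             # best-fitting combined model RDM
    rho, _ = spearmanr(fitted, y)    # model-to-neural similarity
    return weights, rho

# Hypothetical usage: three candidate models (e.g., emotion category, valence,
# modality) for one ROI's 6-condition RDM, built from synthetic data.
rng = np.random.default_rng(0)
candidates = [squareform(rng.random(15)) for _ in range(3)]
neural = 0.6 * candidates[0] + 0.3 * candidates[1] + 0.1 * rng.random((6, 6))
neural = (neural + neural.T) / 2
np.fill_diagonal(neural, 0)
w, rho = weighted_rsa(neural, candidates)
print("weights:", w.round(2), "fit rho:", round(rho, 2))
```

Under this formulation, the fitted combination can never fit the training RDM worse than any single rescaled candidate, since each candidate alone lies in the feasible set of the least-squares problem; this is consistent with the reported finding that the fitted models were superior to all candidate models.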
DOI: http://dx.doi.org/10.1016/j.neuroscience.2021.06.002
J Acoust Soc Am
January 2025
Leiden University Centre for Linguistics, Leiden University, Leiden, The Netherlands.
Previous studies suggested that pitch characteristics of lexical tones in Standard Chinese influence various sensory perceptions, but whether they iconically bias emotional experience remained unclear. We analyzed the arousal and valence ratings of bi-syllabic words in two corpora (Study 1) and conducted an affect rating experiment using a carefully designed corpus of bi-syllabic words (Study 2). Two-alternative forced-choice tasks further tested the robustness of lexical tones' affective iconicity in an auditory nonce word context (Study 3).
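A minimal sketch of the kind of analyses described above: grouping valence ratings by lexical tone and testing two-alternative forced-choice (2AFC) responses against chance. The synthetic data, column names, and choice of tests are assumptions for illustration, not the study's actual pipeline.

```python
# Illustrative analysis sketch with synthetic data (assumed, not the study's).
import numpy as np
import pandas as pd
from scipy.stats import f_oneway, binomtest

# Hypothetical ratings table: one row per bi-syllabic word, with its
# first-syllable tone (1-4) and a valence rating.
rng = np.random.default_rng(1)
ratings = pd.DataFrame({
    "tone": rng.integers(1, 5, 400),
    "valence": rng.normal(5, 1.5, 400),
})
by_tone = [g["valence"].to_numpy() for _, g in ratings.groupby("tone")]
print(f_oneway(*by_tone))      # do the four tones differ in mean valence?

# 2AFC robustness check: n_pos = trials on which a given tone's nonce word
# was chosen as "more positive"; chance level is 0.5.
n_pos, n_trials = 132, 200     # hypothetical counts
print(binomtest(n_pos, n_trials, p=0.5))
```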
Q J Exp Psychol (Hove)
January 2025
Department of Otorhinolaryngology / Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.
This study aims to provide a comprehensive picture of auditory emotion perception in cochlear implant (CI) users by (1) investigating emotion categorization in both vocal (pseudo-speech) and musical domains, and (2) examining how individual differences in residual acoustic hearing, sensitivity to voice cues (voice pitch, vocal tract length), and quality of life (QoL) are associated with vocal emotion perception and, going a step further, with musical emotion perception. In 28 adult CI users, with or without self-reported acoustic hearing, sensitivity (d') scores for emotion categorization varied widely across participants, in line with previous research. However, within participants, the d' scores for vocal and musical emotion categorization were significantly correlated, indicating similar processing of auditory emotional cues across the pseudo-speech and music domains and the robustness of the tests.
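For readers unfamiliar with the sensitivity measure, the sketch below shows one standard way to compute d' from hit and false-alarm counts and to correlate per-participant vocal and musical d' scores. The log-linear correction and all counts are illustrative assumptions, not the study's data.

```python
# Standard d' computation with a log-linear correction; synthetic data.
import numpy as np
from scipy.stats import norm, pearsonr

def d_prime(hits, misses, fas, crs):
    """d' from hit/false-alarm counts; the +0.5/+1.0 log-linear correction
    avoids infinite z-scores when a rate is exactly 0 or 1."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (fas + 0.5) / (fas + crs + 1.0)
    return norm.ppf(hr) - norm.ppf(far)

# Hypothetical per-participant counts (hits, misses, false alarms, correct
# rejections) for the vocal task, and d' scores for the musical task.
vocal_counts = [(40, 10, 12, 38), (35, 15, 20, 30), (45, 5, 8, 42),
                (30, 20, 18, 32), (42, 8, 10, 40)]
vocal = np.array([d_prime(*c) for c in vocal_counts])
musical = np.array([1.1, 0.7, 1.6, 0.5, 1.4])
r, p = pearsonr(vocal, musical)
print(f"vocal-musical d' correlation: r={r:.2f}, p={p:.3f}")
```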
J Commun Disord
January 2025
School of Foreign Studies, China University of Petroleum (East China), Qingdao, China.
Introduction: It is still under debate whether and how semantic content modulates emotional prosody perception in children with autism spectrum disorder (ASD). The current study investigated this issue in two experiments by systematically manipulating semantic information in Chinese disyllabic words.
Method: The present study explored the potential modulation of semantic content complexity on emotional prosody perception in Mandarin-speaking children with ASD.
The Problem: People use social media platforms to chat, search for and share information, express their opinions, and connect with others. But these platforms also facilitate the posting of divisive, harmful, and hateful messages targeting groups and individuals based on their race, religion, gender, sexual orientation, or political views. Hate content is a problem not only on the Internet but also in traditional media, especially in places where the Internet is not widely available or in rural areas.
PLoS One
January 2025
Computer Engineering, CCSIT, King Faisal University, Al Hufuf, Kingdom of Saudi Arabia.
The health of poultry flocks is crucial to sustainable farming. Recent advances in machine learning and speech analysis have opened up opportunities for real-time monitoring of flock behavior and health. However, there has been little research on using Tiny Machine Learning (TinyML) for continuous vocalization monitoring in poultry.
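A minimal sketch of what such a TinyML vocalization-monitoring pipeline could look like: compact audio features feed a deliberately small network, which is then converted to TensorFlow Lite for microcontroller deployment. The feature shapes, labels, architecture, and file names are assumptions for illustration, not the study's system.

```python
# Illustrative TinyML-style pipeline: tiny classifier over audio features,
# exported to a TensorFlow Lite flatbuffer. All data here is synthetic.
import numpy as np
import tensorflow as tf

n_clips, n_mfcc, n_frames = 500, 13, 40   # hypothetical dataset dimensions
X = np.random.rand(n_clips, n_frames, n_mfcc).astype("float32")  # stand-in MFCCs
y = np.random.randint(0, 2, n_clips)      # 0 = normal call, 1 = distress call

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_frames, n_mfcc)),
    tf.keras.layers.Conv1D(8, 3, activation="relu"),  # tiny footprint on purpose
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=3, verbose=0)

# Convert to a .tflite flatbuffer small enough for a microcontroller runtime.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
open("flock_monitor.tflite", "wb").write(tflite_model)
```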