The neural mechanisms involved in processing vocalizations and music were compared in order to identify possible similarities in the encoding of their emotional content. Stimuli were positive and negative emotional vocalizations (e.g. laughing, crying) and violin musical stimuli digitally extracted from them, sharing the same melodic profile and main pitch/frequency characteristics. Participants listened to the vocalizations or music while detecting rare auditory targets (bird tweeting or piano arpeggios). EEG was recorded from 128 sites, and the P2, N400 and late positivity (LP) ERP components were analysed. The P2 peaked earlier in response to vocalizations, and its amplitude was larger to positive than negative stimuli. The N400 was greater to negative than positive stimuli. The LP was greater to vocalizations than music and to positive than negative stimuli. Source modelling using swLORETA suggested that, among N400 generators, the left middle temporal gyrus and the right uncus responded to both music and vocalizations, and more to negative than positive stimuli. The right parahippocampal region of the limbic lobe and the right cingulate cortex were active during music listening, while the left superior temporal cortex responded only to human vocalizations. Negative stimuli always activated the right middle temporal gyrus, whereas positively valenced stimuli always activated the inferior frontal cortex. The processing of emotional vocalizations and music thus seemed to involve common neural mechanisms. Musical notation derived from the acoustic signals showed that emotionally negative stimuli tended to be in a minor key and positive stimuli in a major key, shedding some light on the brain's ability to understand music.
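
A minimal sketch of the kind of analysis the last sentence implies: extract the fundamental-frequency contour of a recording and estimate whether its pitch content better fits a major or a minor key. This is an illustrative reconstruction only, assuming the librosa library; the file name, the 80-1000 Hz pitch range and the Krumhansl-Kessler profile comparison are assumptions, not the authors' actual stimulus-extraction or notation pipeline.

import numpy as np
import librosa

# Krumhansl-Kessler tone profiles for major and minor keys (perceptual ratings).
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

def estimate_mode(path):
    """Guess 'major' or 'minor' from a recording's pitch-class distribution."""
    y, sr = librosa.load(path, sr=None, mono=True)
    # pYIN pitch tracking; the vocal range here is an assumption.
    f0, _, _ = librosa.pyin(y, fmin=80.0, fmax=1000.0, sr=sr)
    midi = librosa.hz_to_midi(f0[np.isfinite(f0)])  # voiced frames only
    # Histogram of pitch classes (C=0 ... B=11) across all voiced frames.
    hist = np.bincount(np.round(midi).astype(int) % 12, minlength=12).astype(float)
    # Correlate the histogram with each profile at every possible tonic.
    score = lambda prof: max(np.corrcoef(np.roll(prof, k), hist)[0, 1]
                             for k in range(12))
    return "major" if score(MAJOR) >= score(MINOR) else "minor"

# Usage (hypothetical file): estimate_mode("vocalization.wav")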

Source
http://dx.doi.org/10.1111/ejn.14650

Publication Analysis

Top Keywords

Keyword                     Count
vocalizations music         16
negative stimuli            16
neural mechanisms           12
positive negative           12
positive stimuli            12
stimuli                     10
vocalizations                9
music                        8
music vocalizations          8
emotional vocalizations      8

Similar Publications

The extraction and analysis of pitch underpin speech and music recognition, sound segregation, and other auditory tasks. Perceptually, pitch can be represented as a helix composed of two factors: height, which increases monotonically with frequency, and chroma, which repeats cyclically with each doubling of frequency. Although the early perceptual and neurophysiological mechanisms for extracting pitch from acoustic signals have been extensively investigated, the equally essential subsequent stages that bridge to high-level auditory cognition remain less well understood.
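
As a concrete illustration of that helix factorisation, here is a minimal sketch (not code from the cited study) mapping frequencies to helix coordinates: height is log-frequency, and the chroma angle wraps once per octave, so octave-related tones land directly above one another. The C4 reference frequency is an arbitrary assumed anchor.

import numpy as np

def helix_coords(freq_hz, ref_hz=261.63):  # ref_hz: C4, an assumed anchor
    h = np.log2(np.asarray(freq_hz, dtype=float) / ref_hz)  # height: 1 unit/octave
    theta = 2 * np.pi * (h % 1.0)                           # chroma angle
    return np.cos(theta), np.sin(theta), h                  # (x, y, z) on the helix

# A4 (440 Hz) and A5 (880 Hz) share the same (x, y) chroma position,
# but A5 sits exactly one height unit higher on the z axis.
x1, y1, z1 = helix_coords(440.0)
x2, y2, z2 = helix_coords(880.0)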

Music pre-processing is becoming a recognized area of research, with the goal of making music more accessible to listeners with a hearing impairment. Our previous study showed that hearing-impaired listeners preferred spectrally manipulated multi-track mixes. Nevertheless, the acoustical basis of mixing for hearing-impaired listeners remains poorly understood.
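
For intuition only, a minimal sketch of one crude spectral manipulation of a multi-track mix in the spirit of what the paragraph describes: each track's high frequencies are boosted before summing. The brighten function, cutoff and gain are illustrative assumptions using scipy.signal, not the processing evaluated in the study.

import numpy as np
from scipy.signal import butter, lfilter

def brighten(track, sr, cutoff_hz=4000.0, gain_db=6.0):
    """Emphasise highs by adding a gain-scaled high-passed copy of the track."""
    b, a = butter(2, cutoff_hz / (sr / 2), btype="high")
    highs = lfilter(b, a, track)
    return track + (10 ** (gain_db / 20) - 1) * highs

def mix(tracks, sr):
    """Sum equally long mono tracks after spectral manipulation, then peak-normalise."""
    processed = [brighten(t, sr) for t in tracks]
    out = np.sum(processed, axis=0)
    return out / max(1.0, np.max(np.abs(out)))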

Introduction: Vocal distortion, also known as a scream or growl, is used worldwide as an essential singing technique, especially in rock and metal, and as a traditional vocal style in Mongolian singing. However, the production mechanism of vocal distortion is not yet clearly understood, owing to limited research on the behavior of the larynx, which is the source of the distorted voice.

Objectives: This study used high-speed digital imaging (HSDI) to observe the larynges of professional singers with exceptional singing skills and to determine the laryngeal dynamics involved in producing various vocal distortions.

Objective: To investigate the impact of music on patient tolerance during office-based laryngeal surgery (OBLS).

Methods: All patients undergoing OBLS between February 2024 and June 2024 were invited to participate in this study. They were divided into two subgroups: those with music playing in the background during surgery and those without.

Musical interactions between caregivers and their infants typically rely on a limited repertoire of live vocal songs and recorded music. Research suggests that these well-known songs are especially effective at eliciting engaged behaviors from infants in controlled settings, but how infants respond to familiar music with their caregivers in their everyday environment remains unclear. The current study used an online questionnaire to quantify how often and why caregivers present certain songs and musical recordings to their infants.
