Voice actors show enhanced neural tracking of pitch, prosody perception, and music perception.

Cortex

School of Psychological Sciences, Birkbeck, University of London, London, UK. Electronic address:

Published: September 2024

Experiences with sound that make strong demands on the precision of perception, such as musical training and experience speaking a tone language, can enhance auditory neural encoding. Are high demands on the precision of perception necessary for training to drive auditory neural plasticity? Voice actors are an ideal subject population for answering this question. Voice acting requires exaggerating prosodic cues to convey emotion, character, and linguistic structure, drawing upon attention to sound, memory for sound features, and accurate sound production, but not fine perceptual precision. Here we assessed neural encoding of pitch using the frequency-following response (FFR), as well as prosody, music, and sound perception, in voice actors and a matched group of non-actors. We find that the consistency of neural sound encoding, prosody perception, and musical phrase perception are all enhanced in voice actors, suggesting that a range of neural and behavioural auditory processing enhancements can result from training which lacks fine perceptual precision. However, fine discrimination was not enhanced in voice actors but was linked to degree of musical experience, suggesting that low-level auditory processing can only be enhanced by demanding perceptual training. These findings suggest that training which taxes attention, memory, and production but is not perceptually taxing may be a way to boost neural encoding of sound and auditory pattern detection in individuals with poor auditory skills.
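"Consistency of neural sound encoding" in FFR work is commonly quantified by correlating averages computed from different subsets of the recorded sweeps. The sketch below illustrates one such split-half measure on simulated data; it is a generic example under assumed trial counts, sampling rate, and stimulus parameters, not the authors' analysis pipeline, which the abstract does not specify.

    import numpy as np

    # Generic split-half consistency measure for a frequency-following response (FFR).
    # All numbers below (sampling rate, sweep count, 100 Hz periodicity) are
    # illustrative assumptions, and the "trials" array is simulated signal plus noise.
    rng = np.random.default_rng(42)

    fs = 16000                                   # sampling rate in Hz (assumed)
    n_trials, n_samples = 2000, int(0.25 * fs)   # 2000 sweeps of a 250 ms response
    t = np.arange(n_samples) / fs

    signal = 0.5 * np.sin(2 * np.pi * 100 * t)   # 100 Hz periodicity in the response
    trials = signal + rng.standard_normal((n_trials, n_samples))

    def split_half_consistency(trials, n_splits=100):
        """Mean Pearson correlation between averages of random half-splits of the sweeps."""
        r_values = []
        half = len(trials) // 2
        for _ in range(n_splits):
            order = rng.permutation(len(trials))
            avg_a = trials[order[:half]].mean(axis=0)
            avg_b = trials[order[half:]].mean(axis=0)
            r_values.append(np.corrcoef(avg_a, avg_b)[0, 1])
        return float(np.mean(r_values))

    print(f"split-half response consistency r = {split_half_consistency(trials):.2f}")

Higher correlations indicate a more repeatable neural response across sweeps; a group comparison would then test whether this kind of consistency metric differs between voice actors and non-actors.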

Source
http://dx.doi.org/10.1016/j.cortex.2024.06.016

Publication Analysis

Top Keywords: voice actors (20); neural encoding (12); prosody perception (8); demands precision (8); precision perception (8); perception musical (8); auditory neural (8); fine perceptual (8); perceptual precision (8); enhanced voice (8)

Similar Publications

Frequently, we perceive emotional information through multiple channels (e.g., face, voice, posture).


IndoWaveSentiment: Indonesian audio dataset for emotion classification.

Data Brief

December 2024

Informatics Department, Universitas Hasanuddin, Poros Malino Street Km 6, Gowa, South Sulawesi, Indonesia.

Voice is one of the media through which humans communicate and interact. Emotions conveyed through the voice, such as laughter or tears, can communicate messages more quickly than spoken or written language. In sentiment analysis, this emotional component is crucial for reflecting human perceptions and opinions.
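As a minimal sketch of how an audio emotion dataset like this might be used, the snippet below extracts mean MFCC features with librosa and fits a small scikit-learn classifier. The file names and labels are hypothetical placeholders, and the workflow is an assumption for illustration rather than anything described in the dataset paper.

    import numpy as np
    import librosa                                  # audio loading and feature extraction
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    # Hypothetical clip list and emotion labels; replace with the dataset's actual index.
    files = ["clip_001.wav", "clip_002.wav", "clip_003.wav", "clip_004.wav"]
    labels = ["happy", "sad", "angry", "neutral"]

    def mfcc_features(path, n_mfcc=13):
        """Load one clip and summarize it as its mean MFCC vector."""
        y, sr = librosa.load(path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return mfcc.mean(axis=1)

    X = np.array([mfcc_features(f) for f in files])
    y = np.array(labels)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))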


Finite-time optimal control for MMCPS via a novel preassigned-time performance approach.

Neural Netw

February 2025

College of Control Science and Engineering, Bohai University, Jinzhou 121013, Liaoning, China. Electronic address:

Article Synopsis
  • This paper focuses on optimizing the stabilization of a macro-micro positioning stage (MMCPS) using a dynamic model rooted in Newton's second law, addressing the control of positioning errors to ensure effective collaboration between its components.
  • The authors implement a reinforcement learning approach, using actor-critic neural networks (the basic actor-critic structure is sketched below), to enhance controller performance while managing the forces exerted by voice coil motors (VCM) and piezoelectric actuators for vibration reduction.
  • A novel performance function is introduced to keep the system's axis displacements within a predetermined range and time frame; the system's stability is proven and simulation results validate the proposed algorithm.
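To make the actor-critic idea in the synopsis concrete, here is a toy one-step actor-critic loop that learns a feedback gain for driving a scalar positioning error to zero. The dynamics, reward, features, and learning rates are all assumptions chosen for illustration; this is not the controller, plant model, or preassigned-time performance function from the paper.

    import numpy as np

    # Toy one-step actor-critic with linear function approximation.
    # Illustrative only: the real MMCPS controller uses neural networks and a
    # coupled VCM/piezoelectric plant model, none of which is reproduced here.
    rng = np.random.default_rng(0)

    dt = 0.05          # integration step (assumed)
    gamma = 0.98       # discount factor
    alpha_v = 0.05     # critic learning rate
    alpha_pi = 0.01    # actor learning rate
    sigma = 0.2        # exploration noise (std of the Gaussian policy)

    w_v = np.zeros(2)  # critic weights over features [x, x^2]
    theta = 0.0        # actor parameter: mean action u = -theta * x

    def features(x):
        return np.array([x, x * x])

    for episode in range(200):
        x = rng.uniform(-1.0, 1.0)                       # initial positioning error
        for step in range(100):
            u_mean = -theta * x
            u = u_mean + sigma * rng.standard_normal()   # exploratory action
            x_next = x + dt * u                          # toy first-order dynamics
            r = -(x_next ** 2) - 0.01 * u ** 2           # penalize error and effort

            # TD error from the critic's current value estimates
            delta = r + gamma * (w_v @ features(x_next)) - w_v @ features(x)

            # Critic: semi-gradient TD(0) update
            w_v += alpha_v * delta * features(x)

            # Actor: policy-gradient step for the Gaussian policy,
            # d log pi(u|x) / d theta = (u - u_mean) * (-x) / sigma**2
            theta += alpha_pi * delta * (u - u_mean) * (-x) / sigma ** 2

            x = x_next

    print(f"learned feedback gain theta = {theta:.2f}")

The TD error delta drives both updates: the critic refines its value estimate, and the actor shifts its mean action toward whatever exploration happened to improve on that estimate; this is the basic structure that the paper implements with neural-network function approximators.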

Aim: To compare the perspectives of nurses, long-stay immigrants and cultural mediators on intercultural communication in care encounters.

Design: Qualitative secondary analysis of data obtained in two primary studies.

Methods: Two data sets from two primary studies on nurses and long-stay immigrants (comprising, in total, two focus groups and 15 in-depth interviews) were merged.


An integrated empirical and computational study to decipher help-seeking behaviors and vocal stigma.

Commun Med (Lond)

November 2024

School of Communication Sciences and Disorders, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada.

Background: Professional voice users often experience stigma associated with voice disorders and are reluctant to seek medical help. This study deployed empirical and computational tools to (1) quantify the experience of vocal stigma and help-seeking behaviors in performers; and (2) predict their modulations with peer influences in social networks.

Methods: Experience of vocal stigma and information-motivation-behavioral (IMB) skills were prospectively profiled using online surveys from a total of 403 Canadians (200 singers and actors and 203 controls).
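The abstract does not detail the computational model, but as a generic, hypothetical illustration of simulating peer influence on help-seeking in a social network, the sketch below runs a simple DeGroot-style averaging process over a random graph. The network size, tie probability, and update rule are assumptions for demonstration, not the authors' model.

    import numpy as np

    # Hypothetical peer-influence simulation: each performer holds a help-seeking
    # intention in [0, 1] and partially adopts the mean intention of network peers
    # at each step. Purely illustrative; not the model used in the study.
    rng = np.random.default_rng(7)

    n = 50                                # number of performers (assumed)
    p_edge = 0.1                          # probability of a tie between two performers
    upper = np.triu(rng.random((n, n)) < p_edge, 1)
    adj = upper | upper.T                 # symmetric adjacency matrix, no self-ties

    intention = rng.uniform(0.0, 1.0, n)  # initial help-seeking intentions
    susceptibility = 0.3                  # weight given to peers each step (assumed)

    for step in range(20):
        peer_sum = adj @ intention
        peer_count = adj.sum(axis=1)
        peer_mean = np.where(peer_count > 0, peer_sum / np.maximum(peer_count, 1), intention)
        intention = (1 - susceptibility) * intention + susceptibility * peer_mean

    print(f"mean intention after influence: {intention.mean():.2f}")
    print(f"spread of intentions (std):     {intention.std():.2f}")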

