Captioning is the process of transcribing speech and acoustic information into text so that deaf and hard-of-hearing people can access the auditory track of audiovisual media. Beyond verbal transcription, captions include information such as sound effects, speaker identification, and music tags. However, captioning covers only a limited portion of the acoustic information in the soundtrack, so a substantial amount of emotional information is lost when viewers rely solely on standards-compliant captions. In this article we show, by means of behavioral and EEG measurements, that emotional information conveyed by the sounds and music the creator used in an audiovisual work is perceived differently by a normal-hearing group and a hearing-impaired group when standard captioning is applied. Audio and captions activate similar processing areas in their respective groups, although not with the same intensity. Moreover, captions require greater activation of voluntary attentional circuits, as well as of language-related areas. Captions that transcribe musical information increase attentional activity rather than emotional processing.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7040021
DOI: http://dx.doi.org/10.3389/fnint.2020.00001
iScience
January 2025
Montreal Centre for Brain, Music and Sound (BRAMS), Montreal, QC, Canada.
People synchronize their movements more easily to rhythms with tempi close to their preferred motor rate than to faster or slower ones. More efficient coupling at one's preferred rate, compared to faster or slower rates, should be associated with lower cognitive demands and better attentional entrainment, as predicted by dynamical-systems theories of perception and action. We show that synchronizing finger taps to metronomes at tempi outside one's preferred rate evokes larger pupil sizes, a proxy for noradrenergic attention, relative to passive listening.
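As a rough illustration of the kind of analysis such tapping data invites, the following minimal Python sketch computes mean tap-metronome asynchrony and circular vector strength. The arrays `tap_times` and `click_times` are assumed onset-time inputs for illustration only; none of this code comes from the study itself.

```python
# Hypothetical sketch: quantifying finger-tap synchronization to a metronome.
# tap_times and click_times are assumed NumPy arrays of onset times (seconds).
import numpy as np

def synchronization_metrics(tap_times, click_times):
    """Mean asynchrony and circular vector strength of taps vs. metronome."""
    period = np.median(np.diff(click_times))        # metronome inter-onset interval
    # Match each tap to its nearest metronome click.
    idx = np.searchsorted(click_times, tap_times)
    idx = np.clip(idx, 1, len(click_times) - 1)
    prev, nxt = click_times[idx - 1], click_times[idx]
    nearest = np.where(tap_times - prev < nxt - tap_times, prev, nxt)
    asynchrony = tap_times - nearest                 # negative = tap anticipates click
    # Express asynchronies as phases and compute vector strength (0 = random, 1 = perfect).
    phases = 2 * np.pi * asynchrony / period
    vector_strength = np.abs(np.mean(np.exp(1j * phases)))
    return asynchrony.mean(), vector_strength
```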
Cogn Neurodyn
December 2025
School of Integrated Circuits, Shandong University, 1500 Shunhua Road, Jinan, Shandong 250101, China.
Pitch plays an essential role in music perception and forms the fundamental component of melodic interpretation. However, objectively detecting and decoding brain responses to musical pitch across subjects remains to be explored. In this study, we employed electroencephalography (EEG) as an objective measure of the neural responses underlying musical pitch perception.
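One plausible way to test cross-subject decoding of this kind is leave-one-subject-out classification, sketched below. The feature matrix `X`, pitch labels `y`, and `subjects` grouping vector are hypothetical inputs, not the study's actual pipeline.

```python
# Hypothetical sketch: leave-one-subject-out decoding of pitch from EEG features.
# X: (n_trials, n_features), y: pitch labels, subjects: subject ID per trial.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cross_subject_pitch_decoding(X, y, subjects):
    """Accuracy of a linear decoder trained on all subjects but one, per fold."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
    return scores.mean(), scores.std()
```

Above-chance accuracy on the held-out subject would indicate that the pitch-related EEG responses generalize across individuals rather than reflecting subject-specific patterns.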
Hum Brain Mapp
January 2025
Montreal Neurological Institute, McGill University, Montréal, Quebec, Canada.
Perception and production of music and speech rely on auditory-motor coupling, a mechanism that has been linked to temporally precise oscillatory coupling between auditory and motor regions of the human brain, particularly in the beta frequency band. Recently, brain-imaging studies using magnetoencephalography (MEG) have also shown that accurate auditory temporal predictions depend specifically on phase coherence between auditory and motor cortical regions. However, it is not yet clear whether this tight oscillatory phase coupling is an intrinsic feature of the auditory-motor loop or whether it is elicited only by task demands.
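A common way to quantify the oscillatory phase coupling described here is the phase-locking value (PLV); the sketch below computes it for two band-passed signals. The channel names, sampling rate, and beta-band edges (15-30 Hz) are assumptions for illustration, not details taken from the study.

```python
# Hypothetical sketch: beta-band phase-locking value between an "auditory" and
# a "motor" channel. Both inputs are assumed 1-D signals sampled at fs Hz.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_plv(auditory, motor, fs, band=(15.0, 30.0)):
    """PLV (0 = no phase coupling, 1 = constant phase lag) in the given band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_a = np.angle(hilbert(filtfilt(b, a, auditory)))
    phase_m = np.angle(hilbert(filtfilt(b, a, motor)))
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_m))))
```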
J Neurosci
January 2025
Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland, 20742.
Hearing is an active process in which listeners must detect and identify sounds, segregate and discriminate stimulus features, and extract their behavioral relevance. Adaptive changes in sound detection can emerge rapidly, during sudden shifts in acoustic or environmental context, or more slowly as a result of practice. Although we know that context- and learning-dependent changes in the sensitivity of auditory cortical (ACX) neurons support many aspects of perceptual plasticity, the contribution of subcortical auditory regions to this process is less understood.
Comput Biol Med
December 2024
École de technologie supérieure, 1100 Notre-Dame St W, Montreal, H3C 1K3, Quebec, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), 527 Rue Sherbrooke O #8, Montréal, QC H3A 1E3, Canada.
Background: Although stress plays a key role in tinnitus and decreased sound tolerance, conventional hearing devices used to manage these conditions are not currently capable of monitoring the wearer's stress level. The aim of this study was to assess the feasibility of stress monitoring with an in-ear device.
Method: In-ear heartbeat sounds and clinical-grade electrocardiography (ECG) signals were simultaneously recorded while 30 healthy young adults underwent a stress protocol.
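Stress estimation from cardiac signals typically starts from a heart-rate-variability metric. As a minimal sketch of one standard metric (the R-peak input `r_peak_times` is a hypothetical array; the study's actual processing pipeline is not described here), RMSSD can be computed as:

```python
# Hypothetical sketch: RMSSD, a standard heart-rate-variability metric, from
# R-peak times detected in ECG or in-ear heartbeat sounds (times in seconds).
import numpy as np

def rmssd(r_peak_times):
    """Root mean square of successive RR-interval differences, in milliseconds."""
    rr_ms = np.diff(r_peak_times) * 1000.0   # RR intervals in ms
    return np.sqrt(np.mean(np.diff(rr_ms) ** 2))
```

Lower RMSSD generally reflects reduced parasympathetic activity, which is why HRV metrics of this kind are a typical input to stress monitoring.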