Recent electrophysiological evidence suggests rapid acquisition of novel speaker representations during intentional voice learning. We investigated effects of learning intention on voice recognition, using a variant of the directed forgetting paradigm. In an old/new recognition task following voice learning, we compared performance and event-related brain potentials (ERPs) for studied voices, half of which participants had been prompted to remember (to-be-remembered, TBR) and half to forget (to-be-forgotten, TBF). Furthermore, to assess incidental encoding of episodic information, participants indicated for each recognized test voice the ear of presentation during study. During study, TBR voices elicited more positive ERPs than TBF voices (from ∼250 ms), possibly reflecting deeper voice encoding. In parallel, subsequent recognition performance was higher for TBR than for TBF voices. Importantly, above-chance recognition in both learning conditions nevertheless suggested a degree of non-intentional voice learning. In a surprise episodic memory test for voice location, above-chance performance was observed for TBR voices only, suggesting that episodic memory for ear of presentation depended on intentional voice encoding. At test, a left posterior ERP OLD/NEW effect for both TBR and TBF voices (from ∼500 ms) reflected recognition of studied voices under both encoding conditions. By contrast, a right frontal ERP OLD/NEW effect for TBF voices only (from ∼800 ms) possibly reflected additional elaborative retrieval processes. Overall, we show that ERPs are sensitive (1) to strategic voice encoding during study (from ∼250 ms) and (2) to voice recognition at test (from ∼500 ms), with the specific pattern of ERP OLD/NEW effects partly depending on previous encoding intention.
DOI: http://dx.doi.org/10.1016/j.brainres.2019.01.028
Neuroimage
January 2025
Department of Computer Science, University of Innsbruck, Technikerstrasse 21a, Innsbruck, 6020, Austria.
The objective of this study is to assess the potential of a transformer-based deep learning approach applied to event-related brain potentials (ERPs) derived from electroencephalographic (EEG) data. Traditional methods involve averaging the EEG signal of multiple trials to extract valuable neural signals from the high noise content of EEG data. However, this averaging technique may conceal relevant information.
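The trial-averaging baseline this study contrasts with can be sketched in a few lines. The sketch below uses simulated data (a hypothetical waveform buried in noise; the array shapes and noise level are illustrative assumptions, not values from the study):

```python
import numpy as np

# Simulated single-trial EEG: 50 trials, 32 channels, 500 time samples.
# A fixed time-locked "signal" is buried in trial-to-trial noise.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 500))           # hypothetical ERP waveform
trials = signal + rng.normal(scale=3.0, size=(50, 32, 500))

# Classical ERP extraction: average across trials so that uncorrelated
# noise cancels while the time-locked response remains.
erp = trials.mean(axis=0)                                  # shape: (32, 500)

# Averaging raises the signal-to-noise ratio roughly by sqrt(n_trials),
# but any information carried by trial-to-trial variability is discarded
# in the process, which is the limitation the abstract points to.
print(erp.shape)
```

A single-trial model (such as the transformer-based approach described above) would instead consume `trials` directly, without the averaging step.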
Sensors (Basel)
January 2025
School of Computer Science and Informatics, Cardiff University, Cardiff CF24 3AA, UK.
Elephant sound identification is crucial in wildlife conservation and ecological research. Identifying elephant vocalizations provides insights into their behavior, social dynamics, and emotional expression, informing conservation efforts. This study addresses elephant sound classification using raw audio processing.
Life (Basel)
December 2024
Neuromodulation Center and Center for Clinical Research Learning, Spaulding Rehabilitation Hospital, Massachusetts General Hospital, Harvard Medical School, 1575 Cambridge Street, Cambridge, MA 02115, USA.
Background: This study aimed to explore potential associations between voice metrics of patients with Parkinson's disease (PD) and their motor symptoms.
Methods: Motor and vocal data, including the Unified Parkinson's Disease Rating Scale part III (UPDRS-III), harmonics-to-noise ratio (HNR), jitter, shimmer, and smoothed cepstral peak prominence (CPPS), were analyzed through exploratory correlations followed by univariate linear regression analyses. We employed these four voice metrics as independent variables and the total and sub-scores of the UPDRS-III as dependent variables.
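A univariate regression of the kind described above can be sketched with ordinary least squares. The data below are entirely hypothetical (illustrative jitter values and UPDRS-III totals, not the study's measurements):

```python
import numpy as np

# Hypothetical data: jitter (%) for 12 patients and their UPDRS-III totals.
jitter = np.array([0.4, 0.6, 0.5, 0.9, 1.1, 0.7, 1.3, 0.8, 1.0, 1.2, 0.6, 1.4])
updrs_iii = np.array([18, 24, 21, 33, 38, 26, 45, 30, 36, 41, 23, 48])

# Exploratory correlation step: Pearson's r between metric and score.
r = np.corrcoef(jitter, updrs_iii)[0, 1]

# Univariate linear regression: UPDRS-III = b0 + b1 * jitter,
# fitted by ordinary least squares via the design matrix [1, jitter].
X = np.column_stack([np.ones_like(jitter), jitter])
(b0, b1), *_ = np.linalg.lstsq(X, updrs_iii, rcond=None)

print(f"r={r:.2f}, slope={b1:.1f}, intercept={b0:.1f}")
```

In the study each of the four voice metrics would be regressed in turn against the UPDRS-III total and sub-scores; the sketch shows one such metric-score pair.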
Acad Emerg Med
January 2025
Department of Critical Care Medicine, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA.
Background: Prehospital emergencies require providers to rapidly identify patients' medical condition and determine treatment needs. We tested whether medics' initial, written impressions of patient condition contain information that can help identify patients who require prehospital lifesaving interventions (LSI) prior to or during transport.
Methods: We analyzed free-text medic impressions of prehospital patients encountered at the scene of an accident or injury, using data from STAT MedEvac air medical transport service from 2012 to 2021.
PLoS One
January 2025
Department of Teacher Education, University of Jyväskylä, Jyväskylä, Finland.
The aim of the study was to determine whether certain meaningful moments in the learning process are noticeable through features of voice, and how acoustic voice analyses can be utilized in learning research. The material consisted of recordings of nine university students as they completed tasks on direct electric circuits as part of their physics teacher education course. Prosodic features of voice were investigated: fundamental frequency (F0), sound pressure level (SPL), acoustic voice quality measured by long-term average spectrum (LTAS), and pausing.
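Two of the measures named above, F0 and signal level, can be estimated from an audio frame with standard signal processing. The sketch below uses a synthetic tone as a stand-in for a voiced speech frame (the 220 Hz frequency and amplitude are assumptions for illustration; real prosodic analysis would use dedicated tools such as Praat):

```python
import numpy as np

sr = 16000
t = np.arange(sr // 2) / sr                       # 0.5 s frame
# Hypothetical voiced segment: a 220 Hz tone standing in for speech.
voiced = 0.3 * np.sin(2 * np.pi * 220 * t)

# Level: RMS energy of the frame, expressed in dB relative to full scale
# (a calibrated SPL measurement would reference 20 µPa instead).
rms = np.sqrt(np.mean(voiced ** 2))
spl_db = 20 * np.log10(rms / 1.0)

# Fundamental frequency (F0) via autocorrelation: the lag of the
# strongest peak within the plausible pitch range corresponds to
# one pitch period.
ac = np.correlate(voiced, voiced, mode="full")[len(voiced) - 1:]
lo, hi = sr // 500, sr // 50                      # search 50-500 Hz
lag = lo + np.argmax(ac[lo:hi])
f0 = sr / lag

print(f"F0 = {f0:.1f} Hz, level = {spl_db:.1f} dBFS")
```

LTAS and pause detection would build on the same primitives: spectra averaged over the whole recording, and runs of low-energy frames, respectively.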