Audiologists use speech discrimination testing to diagnose hearing loss and plan treatment. Assessing speech discrimination usually requires subjective responses from the patient. A method based on event-related potentials (ERPs) recorded with electroencephalography (EEG) could instead provide an objective measure of speech discrimination. In this work, we proposed a visual-ERP-based method that assesses speech discrimination using pictures representing word meaning. The method was implemented with three strategies, each with a different number of pictures and a different test sequence. Machine learning was used to classify the task conditions from features extracted from the EEG signals. The results of the proposed method were compared with those of a similar visual-ERP-based method using letters and of a method based on the auditory mismatch negativity (MMN) component. The P3 component and the late positive potential (LPP) were observed with the two visual-ERP-based methods, whereas the MMN was observed with the MMN-based method. Two of the three strategies of the proposed method, along with the MMN-based method, achieved approximately 80% average classification accuracy using a combination of a support vector machine (SVM) and common spatial patterns (CSP). These methods could potentially serve as pre-screening tools, making speech discrimination assessment more accessible, particularly in areas with a shortage of audiologists.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9002564 | PMC |
| http://dx.doi.org/10.3390/s22072702 | DOI Listing |
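The abstract's classification step pairs common spatial patterns with an SVM. Below is a rough sketch of that kind of pipeline, not the authors' implementation: the epoch dimensions, CSP settings, and RBF kernel are assumptions, and synthetic data stands in for real EEG epochs.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for segmented EEG: 120 epochs, 32 channels, 1 s at 250 Hz.
n_epochs, n_channels, n_times = 120, 32, 250
X = rng.standard_normal((n_epochs, n_channels, n_times))
y = rng.integers(0, 2, n_epochs)  # two task conditions (labels are illustrative)

# Give the classes different spatial variance so CSP has something to find.
X[y == 1, :4, :] *= 1.5

clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),  # spatial filters -> log-variance features
    ("svm", SVC(kernel="rbf", C=1.0)),       # binary condition classifier
])

scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```

In practice, `X` would hold bandpass-filtered EEG epochs time-locked to the stimuli, and the labels would mark the task conditions being discriminated.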
Sci Rep
January 2025
RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Forskningsveien 3A, Oslo, 0373, Norway.
Periodic sensory inputs entrain oscillatory brain activity, reflecting a neural mechanism that may be fundamental to temporal prediction and perception. Most environmental rhythms and patterns in human behavior, such as walking, dancing, and speech, are not strictly isochronous, however, but quasi-periodic. Research has shown that neural tracking of speech is driven by modulations of the amplitude envelope, especially via sharp acoustic edges, which serve as prominent temporal landmarks.
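One common way to operationalize the amplitude envelope and its "sharp acoustic edges" is a Hilbert-transform envelope followed by peak-picking on its rate of rise. A minimal sketch along those lines follows; the sampling rate, low-pass cutoff, and peak threshold are illustrative choices, not values from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks, hilbert

fs = 16000  # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)

# Toy quasi-periodic signal: noise bursts with non-isochronous onsets.
onsets = (0.20, 0.55, 1.00, 1.35, 1.80)
amplitude = sum(np.exp(-((t - o) ** 2) / (2 * 0.02**2)) for o in onsets)
signal = rng.standard_normal(t.size) * amplitude

# Amplitude envelope via the Hilbert transform, smoothed with a 10 Hz low-pass.
envelope = np.abs(hilbert(signal))
b, a = butter(4, 10 / (fs / 2), btype="low")
envelope = filtfilt(b, a, envelope)

# Mark "edges" as steep rises: peaks in the envelope's rate of change.
rate = np.gradient(envelope, 1 / fs)
peaks, _ = find_peaks(rate, height=0.5 * rate.max(), distance=int(0.1 * fs))
print("Detected edge times (s):", np.round(t[peaks], 2))
```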
Sci Rep
January 2025
Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA.
Auditory perception requires categorizing sound sequences, such as speech or music, into classes, such as syllables or notes. Auditory categorization depends not only on the acoustic waveform but also on variability and uncertainty in how the listener perceives the sound, including sensory and stimulus uncertainty, the listener's estimate of the sound's relevance to the task, and their ability to learn the past statistics of the acoustic environment. Although these factors have been studied in isolation, whether and how they interact to shape categorization remains unknown.
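A toy Bayesian observer makes the interplay of these factors concrete: sensory uncertainty widens the likelihoods, while learned category statistics enter as the prior. This is only an illustrative model, not the study's; every number below is assumed.

```python
import numpy as np
from scipy.stats import norm

# Learned category statistics for a 1-D acoustic feature (all values assumed).
mu_a, mu_b = 1.0, 2.0   # category means, arbitrary units
sigma_cat = 0.3         # spread of each learned category
prior_a = 0.7           # past statistics of the environment favor category A

sigma_sens = 0.4        # sensory/stimulus uncertainty in the observation
x = 1.6                 # noisy feature value of the current sound

# The effective likelihood spread folds sensory noise into the category spread.
sigma_eff = np.hypot(sigma_cat, sigma_sens)
like_a = norm.pdf(x, mu_a, sigma_eff)
like_b = norm.pdf(x, mu_b, sigma_eff)

# Posterior for category A combines the likelihoods with the learned prior.
post_a = like_a * prior_a / (like_a * prior_a + like_b * (1 - prior_a))
print(f"P(category A | x) = {post_a:.2f}")
```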
PLoS One
January 2025
Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom.
Background: Cochlear implants (CI) with off-the-ear (OTE) and behind-the-ear (BTE) speech processors differ in user experience and audiological performance, impacting speech perception, comfort, and satisfaction.
Objectives: This systematic review explores audiological outcomes (speech perception in quiet and noise) and non-audiological factors (device handling, comfort, cosmetics, overall satisfaction) of OTE and BTE speech processors in CI recipients.
Methods: We conducted a systematic review following PRISMA-S guidelines, examining Medline, Embase, Cochrane Library, Scopus, and ProQuest Dissertations and Theses.
Prior research has indicated that musicians show an auditory processing advantage in the phonemic processing of language. The aim of the current study was to elucidate when in the auditory cortical processing stream this advantage emerges in a cocktail-party-like environment. Participants (n = 34) were aged 18-35 years and classified as either musicians (10+ years of experience) or nonmusicians (no formal training).
Int J Audiol
January 2025
Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA.
Objectives: An improvement in speech perception is a major, well-documented benefit of cochlear implantation (CI) and is commonly discussed with CI candidates to set expectations. However, speech perception outcomes vary widely. We evaluated the accuracy of clinical predictions of post-CI speech perception scores.