Purpose: New perspectives on rehabilitation options for inner ear malformations continue to be explored in the literature. This study investigated the cognitive, language, and motor skills of auditory brainstem implant (ABI) users in unimodal and bimodal groups.
Methods: The motor competency of the participants was assessed with the Bruininks-Oseretsky Motor Proficiency Test-2 Short Form (BOT2 SF). Language performance was evaluated with the Test of Early Language Development-3 and the Speech Intelligibility Rating. Word identification and sentence recognition tests and the Categories of Auditory Performance scale were used to assess auditory perception skills. To examine cognitive performance, the Cancellation Test and the Gesell Copy Form were administered. All tests were conducted in a quiet environment without distractions.
Results: The participants were divided into two groups: (1) 17 children in the unimodal group and (2) 11 children in the bimodal group (who used a cochlear implant on one ear and an ABI on the other). There were significant correlations between the chronological age of the participants and the BOT2 SF total score, cancellation tests, auditory perception tests, and language performance. Similarly, there were significant correlations between the duration of ABI use and the auditory perception tests, language performance, the cancellation test, and some BOT2 SF subtests (r = -0.47 to -0.60, p < .001). There was no significant difference between the unimodal and bimodal groups in any task (p > .05). However, there were moderate-to-strong correlations among the auditory perception tests, cancellation test, language test, and BOT2 SF total score and subtests (r = 0.40 to 0.55, p < .05).
Conclusion: Although there were no significant differences between the bimodal and unimodal groups, a holistic approach should be used in the assessment process, recognizing that hearing and balance issues can have broader impacts on a person's physical, emotional, social, and psychological well-being.
Level of Evidence: Level 4.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10601589
DOI: http://dx.doi.org/10.1002/lio2.1153
J Neurosci
January 2025
Oregon Hearing Research Center, Oregon Health and Science University, Portland, OR 97239, USA
In everyday hearing, listeners face the challenge of understanding behaviorally relevant foreground stimuli (speech, vocalizations) in complex backgrounds (environmental, mechanical noise). Prior studies have shown that high-order areas of human auditory cortex (AC) pre-attentively form an enhanced representation of foreground stimuli in the presence of background noise. This enhancement requires identifying and grouping the features that comprise the background so they can be removed from the foreground representation.
While research on auditory attention in complex acoustical environments is a thriving field, experimental studies thus far have typically treated participants as passive listeners. The present study, which combined real-time covert loudness manipulations and online probe detection, investigates for the first time, to our knowledge, the effects of acoustic salience on auditory attention during live interactions, using musical improvisation as an experimental paradigm. We found that musicians were more likely to pay attention to a given co-performer when that performer was made to sound louder or softer; that this salience effect was not due to the local variations introduced by our manipulations but was more likely driven by the longer-term context; and that improvisers tended to be more strongly and more stably coupled when a musician was made more salient.
PLoS Biol
January 2025
Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands.
Studies of perception have long shown that the brain adds information to its sensory analysis of the physical environment. A touchstone example for humans is language use: to comprehend a physical signal like speech, the brain must add linguistic knowledge, including syntax. Yet, syntactic rules and representations are widely assumed to be atemporal.
J Speech Lang Hear Res
January 2025
Department of Special Education, Central China Normal University, Wuhan.
Purpose: This cross-sectional study explored how the speechreading ability of adults with hearing impairment (HI) in China would affect their perception of the four Mandarin Chinese lexical tones: high (Tone 1), rising (Tone 2), falling-rising (Tone 3), and falling (Tone 4). We predicted that higher speechreading ability would result in better tone performance and that accuracy would vary among individual tones.
Method: A total of 136 young adults with HI (ages 18-25 years) in China participated in the study and completed Chinese speechreading and tone awareness tests.
Elife
January 2025
State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University & IDG/McGovern Institute for Brain Research, Beijing, China.
Speech comprehension involves the dynamic interplay of multiple cognitive processes, from basic sound perception to linguistic encoding, and finally to complex semantic-conceptual interpretations. How the brain handles these diverse streams of information processing remains poorly understood. Applying Hidden Markov Modeling to fMRI data obtained during spoken narrative comprehension, we reveal that whole-brain networks predominantly oscillate within a tripartite latent state space.