Bilaterally implanted cochlear implant (CI) users do not consistently have access to interaural time differences (ITDs). ITDs are crucial for restoring the ability to localize sounds and understand speech in noisy environments. Lack of access to ITDs is partly due to lack of communication between clinical processors across the ears and partly because processors must use relatively high rates of stimulation to encode envelope information. Speech understanding is best at higher stimulation rates, but sensitivity to ITDs in the timing of pulses is best at low stimulation rates. We implemented a practical "mixed rate" strategy that encodes ITD information using a low stimulation rate on some channels and speech information using high rates on the remaining channels. The strategy was tested using a bilaterally synchronized research processor, the CCi-MOBILE. Nine bilaterally implanted CI users were tested on speech understanding and were asked to judge the location of a sound based on ITDs encoded using this strategy. Performance was similar in both tasks between the control strategy and the new strategy. We discuss the benefits and drawbacks of the sound coding strategy and provide guidelines for utilizing synchronized processors for developing strategies.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11012985
DOI: http://dx.doi.org/10.3390/jcm13071917
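The mixed-rate idea described in the abstract above can be sketched as a per-channel stimulation-rate map. The channel count, pulse rates, and choice of low-rate (ITD-carrying) channels below are illustrative assumptions, not the authors' parameters:

```python
# Illustrative sketch of a "mixed rate" channel assignment: a few channels
# carry ITD cues at a low pulse rate while the remaining channels keep a
# high rate for envelope/speech coding. All numbers here are assumptions.

def assign_rates(n_channels, low_rate_channels, low_rate_pps=100, high_rate_pps=1000):
    """Return a per-channel stimulation-rate map in pulses per second (pps)."""
    rates = {}
    for ch in range(n_channels):
        # Low-rate channels are reserved for pulse-timing ITD cues;
        # the rest run fast enough to encode the speech envelope.
        rates[ch] = low_rate_pps if ch in low_rate_channels else high_rate_pps
    return rates

# Example: 12 channels, with the two lowest-numbered (hypothetically apical)
# channels dedicated to low-rate ITD encoding.
rate_map = assign_rates(12, low_rate_channels={0, 1})
```

The trade-off the abstract describes falls out directly: only the channels in `low_rate_channels` sacrifice envelope-coding rate for ITD sensitivity.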
BMC Palliat Care
January 2025
Kerry Specialist Palliative Care Service, University Hospital Kerry, Tralee, Co. Kerry, Ireland.
Background: The prevalence of dry mouth in the palliative care population is well documented, and it increases with polypharmacy, radiotherapy, and systemic conditions. Saliva's role as a lubricant for the mouth and throat has implications for swallowing, chewing, and speech. The literature on the experience of xerostomia (the perceived feeling of dry mouth) in palliative care is scarce.
Comput Biol Med
January 2025
Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Science, 830011, Urumqi, China; University of Chinese Academy of Sciences, 100049, Beijing, China; Xinjiang Laboratory of Minority Speech and Language Information Processing, 830011, Urumqi, China. Electronic address:
N6-methyladenosine (m6A) plays a crucial role in enriching RNA functional and genetic information, and the identification of m6A modification sites is therefore an important task for advancing the understanding of RNA epigenetics. Current studies mainly concentrate on capturing the short-range dependencies between adjacent nucleotides in RNA sequences, while ignoring the impact of long-range dependencies between non-adjacent nucleotides on learning high-quality representations of RNA sequences. In this work, we propose an end-to-end prediction model, called m6ASLD, to improve the identification accuracy of m6A modification sites by capturing both the short-range and long-range dependencies of nucleotides.
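The distinction between short-range and long-range nucleotide dependencies can be illustrated with a toy feature extractor. The dinucleotide counts and fixed-offset pair counts below are a deliberate simplification for illustration, not the proposed model's architecture, which learns such dependencies end to end:

```python
# Toy illustration (an assumption, not the published model): short-range
# structure as adjacent dinucleotide counts vs. long-range structure as
# co-occurrence counts of nucleotides a fixed large offset apart.

from collections import Counter

def short_range_features(seq):
    """Dinucleotide counts: dependencies between adjacent nucleotides."""
    return Counter(seq[i:i + 2] for i in range(len(seq) - 1))

def long_range_features(seq, offset=10):
    """Pair counts between nucleotides `offset` positions apart."""
    return Counter(seq[i] + seq[i + offset] for i in range(len(seq) - offset))

# Hypothetical RNA fragment used only to exercise the two extractors.
rna = "AUGGCAUCGAUGGCAUCGAU"
short = short_range_features(rna)
long_ = long_range_features(rna, offset=10)
```

A model that only sees `short`-style context around each candidate site is blind to the regularities that `long_`-style features capture, which is the gap the abstract describes.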
J Voice
January 2025
School of Behavioral and Brain Sciences, Department of Speech, Language, and Hearing, Callier Center for Communication Disorders, University of Texas at Dallas, Richardson, TX; Department of Otolaryngology - Head and Neck Surgery, University of Texas Southwestern Medical Center, Dallas, TX. Electronic address:
Introduction: Patients with primary muscle tension dysphonia (pMTD) commonly report symptoms of vocal effort, fatigue, discomfort, odynophonia, and aberrant vocal quality (e.g., vocal strain, hoarseness). However, the voice symptoms most salient to pMTD have not been identified. Furthermore, how standard vocal fatigue and vocal tract discomfort indices that capture persistent symptoms-like the Vocal Fatigue Index (VFI) and Vocal Tract Discomfort Scale (VTDS)-relate to acute symptoms experienced at the time of the voice evaluation is unclear.
Psychon Bull Rev
January 2025
Experimental Psychology, University College London, London, UK.
Hand movements frequently occur with speech. The extent to which the memories that guide co-speech hand movements are tied to the speech they occur with is unclear. Here, we paired the acquisition of a new hand movement with speech.
eNeuro
January 2025
Neurophysiology of Everyday Life Group, Department of Psychology, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
A comprehensive analysis of everyday sound perception can be achieved using electroencephalography (EEG) with the concurrent acquisition of information about the environment. While extensive research has been dedicated to speech perception, the complexities of auditory perception within everyday environments, specifically the types of information and the key features to extract, remain less explored. Our study aims to systematically investigate the relevance of different feature categories: discrete sound-identity markers, general cognitive state information, and acoustic representations, including discrete sound onset, the envelope, and the mel-spectrogram.
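Among the acoustic representations the abstract lists, the amplitude envelope is the simplest to compute. The rectify-and-smooth approach below is a minimal sketch under assumed parameters, not the study's actual preprocessing pipeline (a mel-spectrogram would require a proper DSP library):

```python
# Minimal envelope sketch (assumed parameters, not the study's pipeline):
# full-wave rectification followed by a causal moving-average smoother.

import math

def envelope(signal, window=8):
    """Rectify the signal, then smooth with a moving average of `window` samples."""
    rectified = [abs(x) for x in signal]
    out = []
    for i in range(len(rectified)):
        lo = max(0, i - window + 1)  # causal window, shorter at the start
        out.append(sum(rectified[lo:i + 1]) / (i + 1 - lo))
    return out

# Example: a fast carrier with a slow amplitude modulation; the envelope
# tracks the slow modulation rather than the carrier oscillation.
sig = [math.sin(2 * math.pi * 0.25 * n) * (1 + 0.5 * math.sin(2 * math.pi * 0.01 * n))
       for n in range(200)]
env = envelope(sig)
```

In EEG analyses like the one described, such an envelope is typically the regressor correlated against the neural response, sample by sample.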