Increased reliance on top-down information to compensate for reduced bottom-up use of acoustic cues in dyslexia.

Psychon Bull Rev

Department of Special Education, University of Haifa, Mount Carmel, 31905, Haifa, Israel.

Published: February 2022

Speech recognition is a complex human behavior in the course of which listeners must integrate the detailed phonetic information present in the acoustic signal with their general linguistic knowledge. It is commonly assumed that this process occurs effortlessly for most people, but it is still unclear whether this also holds true in the case of developmental dyslexia (DD), a condition characterized by perceptual deficits. In the present study, we used a dual-task setting to test the assumption that speech recognition is effortful for people with DD. In particular, we tested the Ganong effect (i.e., lexical bias on phoneme identification) while participants performed a secondary task of either low or high cognitive demand. We presumed that reduced efficiency in perceptual processing in DD would manifest in greater modulation of primary-task performance by cognitive load. Results revealed that this was indeed the case. We found a larger Ganong effect in the DD group under high than under low cognitive load, and this modulation was larger than it was for typically developed (TD) readers. Furthermore, phoneme categorization was less precise in the DD group than in the TD group. These findings suggest that individuals with DD show increased reliance on top-down, lexically mediated perception processes, possibly as a compensatory mechanism for reduced efficiency in the bottom-up use of acoustic cues. This indicates an imbalance between bottom-up and top-down processes in the speech recognition of individuals with DD.
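As a point of reference for how these measures are typically quantified, the sketch below is a minimal, hypothetical illustration rather than the authors' analysis code (the continuum values, contexts, and parameters are made up): it fits logistic identification curves to simulated responses along an acoustic continuum in two lexical contexts, reads off categorization precision from the slope, and measures the Ganong effect as the shift in category boundary between contexts.

```python
# Hypothetical sketch of how a Ganong effect is commonly quantified.
# Not the authors' code: it fits logistic psychometric functions to simulated
# phoneme-identification data and compares category boundaries across contexts.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def logistic(x, boundary, slope):
    """Probability of a /g/ response as a function of the acoustic cue (e.g., VOT in ms)."""
    return 1.0 / (1.0 + np.exp(slope * (x - boundary)))

# A 7-step acoustic continuum (arbitrary voice-onset-time values, ms).
continuum = np.linspace(10, 70, 7)

# Simulated "true" parameters: a word-/g/ context pulls the boundary toward
# more /g/ responses; a word-/k/ context pulls it the other way (illustrative only).
true_params = {"word-g context": (45.0, 0.25), "word-k context": (35.0, 0.25)}
n_trials = 40

boundaries = {}
for context, (b, s) in true_params.items():
    p = logistic(continuum, b, s)
    responses = rng.binomial(n_trials, p) / n_trials          # observed proportions
    (b_hat, s_hat), _ = curve_fit(logistic, continuum, responses,
                                  p0=[40.0, 0.2], maxfev=10000)
    boundaries[context] = b_hat
    print(f"{context}: boundary ~ {b_hat:.1f} ms, slope ~ {s_hat:.2f} "
          f"(steeper slope = more precise categorization)")

# The Ganong effect is the boundary shift induced by lexical context.
ganong = boundaries["word-g context"] - boundaries["word-k context"]
print(f"Ganong effect (boundary shift): {ganong:.1f} ms")
```

In an analysis of this kind, a larger boundary shift indicates stronger lexical (top-down) bias, while a shallower slope indicates less precise (bottom-up) phoneme categorization.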


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8858289
DOI: http://dx.doi.org/10.3758/s13423-021-01996-9

Publication Analysis

Top Keywords

speech recognition (12), increased reliance (8), reliance top-down (8), bottom-up acoustic (8), acoustic cues (8), reduced efficiency (8), cognitive load (8), top-down compensate (4), compensate reduced (4), reduced bottom-up (4)

Similar Publications

Artificial intelligence (AI) scribe applications in the healthcare community are in the early adoption phase and offer unprecedented efficiency for medical documentation. They typically use an application programming interface with a large language model (LLM), for example, Generative Pre-trained Transformer 4 (GPT-4). They apply automatic speech recognition to the physician-patient interaction and generate a full medical note for the encounter, together with a draft follow-up e-mail for the patient and, often, recommendations, all within seconds or minutes.
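As a rough illustration of the pipeline described above, and not any particular vendor's implementation, the following sketch strings together two placeholder components, transcribe() and llm_complete(), to show the basic ASR-then-LLM flow; the function names, prompt wording, and output fields are assumptions for illustration only.

```python
# Hypothetical sketch of a generic AI-scribe pipeline: speech-to-text on the
# physician-patient conversation, then LLM prompts that draft a medical note
# and a follow-up e-mail. Component names and prompt wording are illustrative.
from dataclasses import dataclass

@dataclass
class ScribeOutput:
    medical_note: str
    followup_email: str

def transcribe(audio_path: str) -> str:
    """Placeholder for an automatic speech recognition call (e.g., a hosted ASR service)."""
    raise NotImplementedError("plug in your ASR service here")

def llm_complete(prompt: str) -> str:
    """Placeholder for a completion/chat call to a large language model API."""
    raise NotImplementedError("plug in your LLM service here")

def run_scribe(audio_path: str) -> ScribeOutput:
    transcript = transcribe(audio_path)
    note = llm_complete(
        "Summarize the following clinical encounter as a structured medical note "
        "(history, exam, assessment, plan):\n\n" + transcript
    )
    email = llm_complete(
        "Draft a brief, patient-friendly follow-up e-mail based on this note:\n\n" + note
    )
    return ScribeOutput(medical_note=note, followup_email=email)
```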


Polariton lattices as binarized neuromorphic networks.

Light Sci Appl

January 2025

Spin-Optics laboratory, St. Petersburg State University, St. Petersburg, 198504, Russia.

We introduce a novel neuromorphic network architecture based on a lattice of exciton-polariton condensates, intricately interconnected and energized through nonresonant optical pumping. The network employs a binary framework, where each neuron, facilitated by the spatial coherence of pairwise coupled condensates, performs binary operations. This coherence, emerging from the ballistic propagation of polaritons, ensures efficient, network-wide communication.


When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal. CI listening is often simulated through noise-vocoding.
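Noise-vocoding itself is a standard signal-processing manipulation, and the sketch below is a minimal illustration rather than any specific published vocoder: the waveform is split into a few band-pass channels, each channel's temporal envelope is extracted and used to modulate band-limited noise, and the modulated channels are summed. The band count, filter order, and frequency range are assumptions chosen only for readability.

```python
# Minimal noise-vocoder sketch (illustrative, not a specific published vocoder):
# split the signal into band-pass channels, extract each channel's envelope,
# use it to modulate band-limited noise, and sum the channels.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_bands=4, f_lo=100.0, f_hi=7000.0, seed=0):
    rng = np.random.default_rng(seed)
    # Logarithmically spaced band edges between f_lo and f_hi.
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)                                # analysis band
        envelope = np.abs(hilbert(band))                               # temporal envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))   # band-limited noise
        out += envelope * carrier                                      # envelope-modulated noise
    return out / (np.max(np.abs(out)) + 1e-12)                         # simple normalization

# Example: vocode one second of a synthetic, speech-like tone complex at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
speechlike = 0.6 * np.sin(2 * np.pi * 150 * t) + 0.4 * np.sin(2 * np.pi * 450 * t)
vocoded = noise_vocode(speechlike, fs)
```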


Background: Pediatric cochlear implant (CI) recipients with cochlear malformations face challenges due to variable speech recognition outcomes.

Aims/objectives: This study assesses the predictive value of intraoperative electrically evoked compound action potential (eCAP) thresholds, residual hearing, age at implantation, Intelligence Quotient (IQ), and malformation type for speech recognition outcomes.

Material And Methods: A prospective cohort of 52 children (aged 1-4 years) with cochlear malformations who underwent CI between 2016 and 2024 was analyzed.


Objectives: This study was designed to (1) compare preactivation and postactivation performance with a cochlear implant for children with functional preoperative low-frequency hearing, (2) compare outcomes of electric-acoustic stimulation (EAS) versus electric-only stimulation (ES) for children with versus without hearing preservation to understand the benefits of low-frequency acoustic cues, and (3) investigate the relationship between postoperative acoustic hearing thresholds and performance.

Design: This was a prospective, 12-month, between-subjects trial including 24 pediatric cochlear implant recipients with preoperative low-frequency functional hearing. Participants ranged in age from 5 to 17 years.

