The question of whether bilingualism leads to advantages or disadvantages in linguistic abilities has been debated for many years. It remains unclear whether growing up with one versus two languages is related to differences in the ability to process speech in background noise. We present findings from a word recognition task and a word learning task with monolingual and bilingual adults. Bilinguals appear to be less accurate than monolinguals at identifying familiar words in the presence of white noise. However, the bilingual "disadvantage" observed during word recognition was not present when listeners were asked to acquire novel word-object relations trained either in noise or in quiet. This work suggests that linguistic experience and the demands associated with the type of task both shape listeners' ability to process speech in noise.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6861599 | PMC
http://dx.doi.org/10.1177/0023830919846158 | DOI Listing
J Cogn Neurosci
January 2025
Universidade de Lisboa, Lisbon, Portugal.
Behavioral research has shown that inconsistency in spelling-to-sound mappings slows visual word recognition and word naming. However, the time course of this effect remains underexplored. To address this, we asked skilled adult readers to perform a 1-back repetition detection task that did not explicitly involve phonological coding, in which we manipulated lexicality (high-frequency words vs. …).
J Psycholinguist Res
January 2025
Department of Chinese Language Studies, Centre for Research on Chinese Language and Education, The Education University of Hong Kong, Tai Po, N.T., Hong Kong.
Word recognition is a fundamental reading skill that relies on various linguistic and cognitive abilities. While executive functions (EF) have gained attention for their importance in developing literacy skills, their interaction with domain-specific skills in facilitating reading among different learner groups remains understudied. This study examines the relationship between EF, orthographic awareness, morphological awareness, and Chinese word recognition in 204 Chinese as a second language (CSL) students and 419 native Chinese primary students.
eNeuro
January 2025
Hearing Technology @ WAVES, Department of Information Technology, Ghent University, Technologiepark 216, 9052 Zwijnaarde, Belgium.
Speech intelligibility declines with age and with sensorineural hearing loss (SNHL). However, it remains unclear whether cochlear synaptopathy (CS), a recently discovered form of SNHL, contributes significantly to this decline. CS refers to damage to the auditory-nerve synapses that innervate the inner hair cells, and there is currently no established diagnostic test for it.
Cogn Neuropsychol
January 2025
Department of Psychological Sciences, Rice University, Houston, Texas, USA.
Many aspects of human performance require producing sequences of items in serial order. The current study takes a multiple-case approach to investigate whether the system responsible for serial order is shared across cognitive domains, focusing on working memory (WM) and word production. Serial order performance in three individuals with post-stroke language and verbal WM disorders (hereafter persons with aphasia, PWAs) was assessed using recognition and recall tasks for verbal and visuospatial WM, as well as error analyses in spoken and written production tasks, to determine whether there was a tendency to produce the correct phonemes/letters in the wrong order.
Ear Hear
December 2024
Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA.
Objectives: To investigate the influence of frequency-specific audibility on audiovisual benefit in children, this study examined the impact of high- and low-pass acoustic filtering on auditory-only and audiovisual word and sentence recognition in children with typical hearing. Previous studies show that visual speech provides greater access to consonant place of articulation than to other consonant features, and that low-pass filtering has a strong impact on the perception of acoustic consonant place of articulation. This suggests visual speech may be particularly useful when acoustic speech is low-pass filtered, because it provides complementary information about consonant place of articulation.