Background: Within cohorts of children with autism spectrum disorder (ASD), language ability varies considerably. Historically, children with ASD were believed either to have delayed articulation and phonology skills or to excel in those areas relative to other language domains. Very little is known about speech sound ability in relation to language ability and non-verbal ability in Swedish preschool children with ASD.
Aim: The current study aimed to describe language variation in a group of 4-6-year-old children with ASD, focusing on in-depth analyses of speech sound error patterns with and without non-phonological language disorder and concomitant non-verbal delays.
Method & Procedures: We examined and analysed the speech sound skills (including consonant inventory, percentage of correct consonants and speech sound error patterns) in relation to receptive language skills in a sample of preschool children who had screened positive for ASD in a population-based screening at 2.5 years of age. Seventy-three children diagnosed with ASD participated and were divided into subgroups based on their receptive language (i.e., non-phonological language) and non-verbal abilities.
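The percentage of correct consonants (PCC) mentioned above is a standard speech-sound metric: the number of consonants a child produces correctly, divided by the number of consonants in the target words, times 100. As an illustrative sketch only (not the study's actual scoring procedure), assuming a simple one-to-one alignment between target and produced consonants:

```python
def percent_consonants_correct(produced, target):
    """Compute PCC: correctly produced consonants / total target
    consonants * 100, given aligned consonant sequences."""
    if not target:
        raise ValueError("target consonant sequence is empty")
    correct = sum(p == t for p, t in zip(produced, target))
    return 100.0 * correct / len(target)

# Example: target word contains /k p t n/; the child fronts /k/ to [t]
pcc = percent_consonants_correct(["t", "p", "t", "n"], ["k", "p", "t", "n"])
print(pcc)  # 75.0
```

In clinical practice, PCC is computed over a whole speech sample rather than a single word, and alignment of produced to target consonants is done by a trained transcriber; the function above only illustrates the arithmetic.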
Outcomes & Results: The subgroup division revealed that 29 children (40%) had language delay/disorder without concurrent non-verbal general cognitive delay (ALD), 27 children (37%) had language delay/disorder with non-verbal general cognitive delay (AGD), and 17 children (23%) had language and non-verbal abilities within the normal range (ALN). Results revealed that children with ALD and children with AGD both had atypical speech sound error patterns significantly more often than the children with ALN.
Conclusions & Implications: This study showed that many children who had screened positive for ASD before age 3 years, with or without non-verbal general cognitive delays, had deficits in language as well as in speech sound ability. However, individual differences were considerable. Our results point to speech sound error patterns as a potential clinical marker for language problems (disorder/delay) in preschool children with ASD.
What This Paper Adds:
What is already known on the subject: Children with autism spectrum disorder (ASD) have deficits in social communication, restricted interests and repetitive behaviour. They show considerable variation in both receptive and expressive language abilities. Previously, articulation and phonology were viewed as either delayed in children with ASD or superior compared with other (non-phonological) language domains.
What this paper adds to existing knowledge: Children with ASD and language disorders also have problems with speech sound error patterns.
What are the potential or actual clinical implications of this work? About 75% of children with ASD experience language delays/disorders, as well as speech sound problems related to speech sound error patterns. Understanding and acknowledging these phonological patterns and their implications can help in the diagnosis and treatment of speech sound disorders in children with ASD. Direct intervention targeting phonology might lead to language gains, but more research is needed.
DOI: http://dx.doi.org/10.1111/1460-6984.13099