An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: if speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability as well as average between-phoneme contrasts. Results show not only that individuals vary in their production and perceptual abilities, but that better discriminators have more distinctive vowel production targets (that is, targets with less within-phoneme variability and greater between-phoneme distances), confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
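The two production metrics described above can be illustrated with a small sketch. This is a hypothetical example with simulated F1/F2 formant data; the vowel set, token counts, and exact metric definitions are assumptions for demonstration, not the study's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated F1/F2 formant tokens (Hz) for three vowels, 20 tokens each.
# Centroid values are rough textbook figures, used only for illustration.
centroids = {"i": (300, 2300), "a": (750, 1200), "u": (320, 800)}
tokens = {v: rng.normal(c, scale=40, size=(20, 2)) for v, c in centroids.items()}

def within_phoneme_variability(points):
    """Mean Euclidean distance of a vowel's tokens to their own centroid."""
    c = points.mean(axis=0)
    return float(np.linalg.norm(points - c, axis=1).mean())

def between_phoneme_distance(tok):
    """Mean pairwise Euclidean distance between vowel centroids."""
    cs = np.array([p.mean(axis=0) for p in tok.values()])
    pairs = [np.linalg.norm(cs[i] - cs[j])
             for i in range(len(cs)) for j in range(i + 1, len(cs))]
    return float(np.mean(pairs))

wpv = {v: within_phoneme_variability(p) for v, p in tokens.items()}
bpd = between_phoneme_distance(tokens)
```

Under the study's hypothesis, better discriminators would show smaller `wpv` values and a larger `bpd` in their own productions.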
DOI: 10.1121/1.5006899
Dev Sci, March 2025
Department of Pediatrics and Adolescent Medicine, Comprehensive Center for Pediatrics, Medical University of Vienna, Vienna, Austria.
Newborns are able to neurally discriminate between speech and nonspeech right after birth. To date it remains unknown whether this early speech discrimination and the underlying neural language network is associated with later language development. Preterm-born children are an interesting cohort to investigate this relationship, as previous studies have shown that preterm-born neonates exhibit alterations of speech processing and have a greater risk of later language deficits.
iScience, January 2025
Vivian L. Smith Department of Neurosurgery, McGovern Medical School at UT Health Houston, Houston, TX 77030, United States of America.
Speech production engages a distributed network of cortical and subcortical brain regions. The supplementary motor area (SMA) has long been thought to be a key hub in coordinating across these regions to initiate voluntary movements, including speech. We analyzed direct intracranial recordings from 115 patients with epilepsy as they articulated a single word in a subset of trials from a picture-naming task.
BMJ Open, December 2024
Clinical Sciences, Murdoch Children's Research Institute, Melbourne, Victoria, Australia.
Introduction: Infants born very preterm (VPT, <32 weeks' gestation) are at increased risk for neurodevelopmental impairments including motor, cognitive and behavioural delay. Parents of infants born VPT also have poorer mental health outcomes compared with parents of infants born at term. We have developed an intervention programme called TEDI-Prem (Telehealth for Early Developmental Intervention in babies born very preterm) based on previous research.
BMC Health Serv Res, January 2025
Department of Speech and Language Pathology, School of Rehabilitation Sciences, Hamadan University of Medical Sciences, Hamadan, Iran.
Introduction: Communication disorders are among the most common disorders; if not treated in childhood, they can cause many social, educational, and psychological problems in adulthood. One technology that can be helpful for these disorders is mobile health (m-Health) technology. This study aims to examine attitudes toward and willingness to use this technology, and to compare its advantages and challenges with those of face-to-face treatment from the perspective of patients.
Ann N Y Acad Sci, January 2025
Hainan Institute, Zhejiang University, Sanya, China.
In this paper, we introduce FUSION-ANN, a novel artificial neural network (ANN) designed for acoustic emission (AE) signal classification. FUSION-ANN comprises four distinct ANN branches, each housing an independent multilayer perceptron. We extract denoised speech-recognition features such as linear predictive coding (LPC), Mel-frequency cepstral coefficients (MFCC), and gammatone cepstral coefficients (GTCC) to represent AE signals.
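The branch-and-fuse idea described above can be sketched in a few lines. This is a minimal illustration with random weights, assuming each of four feature types (e.g., LPC, MFCC, GTCC, plus one more) feeds its own small MLP whose outputs are concatenated for a final classification layer; the layer sizes and fusion rule are assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

def mlp_branch(x, w1, w2):
    """One independent two-layer perceptron branch."""
    return relu(relu(x @ w1) @ w2)

# Assumed dimensions: 13-dim features per branch, 4 output classes.
n_feat, hidden, branch_out, n_classes = 13, 32, 16, 4

# Four branches, each with its own randomly initialized weights.
branches = [(rng.normal(size=(n_feat, hidden)) * 0.1,
             rng.normal(size=(hidden, branch_out)) * 0.1)
            for _ in range(4)]
w_head = rng.normal(size=(4 * branch_out, n_classes)) * 0.1

# One AE signal represented by four feature vectors, one per branch.
features = [rng.normal(size=(1, n_feat)) for _ in range(4)]
fused = np.concatenate([mlp_branch(f, w1, w2)
                        for f, (w1, w2) in zip(features, branches)],
                       axis=1)
logits = fused @ w_head  # one score per class
```

Keeping the branches independent until the fusion layer lets each feature type learn its own intermediate representation before the classifier combines them.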