Acoustic speech output results from the coordinated articulation of dozens of muscles, bones and cartilages of the vocal mechanism. While we commonly take the fluency and speed of our speech production for granted, the neural mechanisms facilitating the requisite muscular control are not completely understood. Previous neuroimaging and electrophysiology studies of speech sensorimotor control have typically concentrated on speech sounds (i.e., phonemes, syllables and words) in isolation; sentence-length investigations have largely been used to inform coincident linguistic processing. In this study, we examined the neural representations of segmental features (place and manner of articulation, and voicing status) in the context of fluent, continuous speech production. We used recordings from the cortical surface [electrocorticography (ECoG)] to simultaneously evaluate the spatial topography and temporal dynamics of the neural correlates of speech articulation that may mediate the generation of hypothesized gestural or articulatory scores. We found that the representation of place of articulation involved broad networks of brain regions during all phases of speech production: preparation, execution and monitoring. In contrast, manner of articulation and voicing status were dominated by auditory cortical responses after speech had been initiated. These results provide new insights into the articulatory and auditory processes underlying speech production in terms of their motor requirements and acoustic correlates.
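As an illustration of the kind of feature-based analysis the abstract describes, the sketch below groups phoneme-aligned neural activity by segmental feature (place, manner, voicing) and averages within each class. This is a minimal Python sketch under stated assumptions, not the authors' pipeline: the feature table, the synthetic "high-gamma" array, and all variable names are hypothetical stand-ins.

```python
# Minimal sketch (not the authors' pipeline): group phoneme-aligned ECoG
# high-gamma traces by segmental feature and average within each class.
# The feature table and synthetic data below are illustrative assumptions.
import numpy as np

# Hypothetical articulatory feature table: phoneme -> (place, manner, voicing)
FEATURES = {
    "p": ("labial",  "stop",      "voiceless"),
    "b": ("labial",  "stop",      "voiced"),
    "t": ("coronal", "stop",      "voiceless"),
    "d": ("coronal", "stop",      "voiced"),
    "s": ("coronal", "fricative", "voiceless"),
    "g": ("dorsal",  "stop",      "voiced"),
}

rng = np.random.default_rng(0)
n_trials, n_electrodes, n_samples = 60, 16, 200          # synthetic sizes
hg = rng.standard_normal((n_trials, n_electrodes, n_samples))  # stand-in high-gamma z-scores
phonemes = rng.choice(list(FEATURES), size=n_trials)           # per-trial phoneme labels

def feature_averages(hg, phonemes, dim):
    """Average electrode-by-time activity within each value of one
    segmental dimension (0 = place, 1 = manner, 2 = voicing)."""
    out = {}
    for value in {FEATURES[p][dim] for p in phonemes}:
        mask = np.array([FEATURES[p][dim] == value for p in phonemes])
        out[value] = hg[mask].mean(axis=0)   # (electrodes, time) map per class
    return out

place_maps = feature_averages(hg, phonemes, dim=0)
for value, avg in place_maps.items():
    print(value, avg.shape)   # one spatiotemporal map per place-of-articulation class
```

Comparing such per-class maps across electrodes and time is one simple way to ask where and when a segmental feature is represented, though the actual study would involve substantially more preprocessing and statistics.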
Full text: PMC (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4338752) | DOI (http://dx.doi.org/10.3389/fnhum.2015.00097)
Dev Sci
March 2025
Department of Pediatrics and Adolescent Medicine, Comprehensive Center for Pediatrics, Medical University of Vienna, Vienna, Austria.
Newborns are able to neurally discriminate between speech and nonspeech right after birth. To date, it remains unknown whether this early speech discrimination and the underlying neural language network are associated with later language development. Preterm-born children are an interesting cohort in which to investigate this relationship, as previous studies have shown that preterm-born neonates exhibit alterations in speech processing and carry a greater risk of later language deficits.
iScience
January 2025
Vivian L. Smith Department of Neurosurgery, McGovern Medical School at UT Health Houston, Houston, TX 77030, United States of America.
Speech production engages a distributed network of cortical and subcortical brain regions. The supplementary motor area (SMA) has long been thought to be a key hub that coordinates activity across these regions to initiate voluntary movements, including speech. We analyzed direct intracranial recordings from 115 patients with epilepsy as they articulated a single word in a subset of trials from a picture-naming task.
BMJ Open
December 2024
Clinical Sciences, Murdoch Children's Research Institute, Melbourne, Victoria, Australia.
Introduction: Infants born very preterm (VPT, <32 weeks' gestation) are at increased risk for neurodevelopmental impairments, including motor, cognitive and behavioural delay. Parents of infants born VPT also have poorer mental health outcomes compared with parents of infants born at term. We have developed an intervention programme called TEDI-Prem (Telehealth for Early Developmental Intervention in babies born very preterm) based on previous research.
BMC Health Serv Res
January 2025
Department of Speech and Language Pathology, School of Rehabilitation Sciences, Hamadan University of Medical Sciences, Hamadan, Iran.
Introduction: Communication disorders are among the most common disorders and, if not treated in childhood, can cause many social, educational, and psychological problems in adulthood. One technology that can be helpful for these disorders is mobile health (m-Health) technology. This study examines patients' attitudes toward, and willingness to use, this technology, and compares its advantages and challenges with those of face-to-face treatment from the patients' perspective.
Ann N Y Acad Sci
January 2025
Hainan Institute, Zhejiang University, Sanya, China.
In this paper, we introduce FUSION-ANN, a novel artificial neural network (ANN) designed for acoustic emission (AE) signal classification. FUSION-ANN comprises four distinct ANN branches, each housing an independent multilayer perceptron. To represent AE signals, we extract denoised features commonly used in speech recognition, such as linear predictive coding (LPC) coefficients, Mel-frequency cepstral coefficients (MFCCs), and gammatone cepstral coefficients (GTCCs).
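To make the feature-extraction step concrete, here is a minimal sketch of computing two of the named feature families (MFCCs and LPC coefficients) for a 1-D signal with librosa. This is illustrative only: the synthetic signal is a stand-in for a denoised AE burst, the parameter choices are assumptions, and the paper's gammatone (GTCC) features are not reproduced, as librosa does not provide them directly.

```python
# Illustrative sketch only: MFCC and LPC features for a 1-D signal.
# Not the FUSION-ANN pipeline; signal and parameters are assumptions.
import numpy as np
import librosa

sr = 16000
signal = np.random.default_rng(1).standard_normal(sr)  # stand-in for a denoised AE burst

mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)  # (13, n_frames) cepstral features
lpc = librosa.lpc(signal, order=12)                      # 13 linear-prediction coefficients

# A flat vector like this could feed one branch of an ANN classifier.
features = np.concatenate([mfcc.mean(axis=1), lpc])
print(features.shape)
```

In a multi-branch design like the one the abstract describes, each feature family would presumably feed its own perceptron branch before fusion, though the exact architecture is specified only in the full paper.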