Electrocorticographic representations of segmental features in continuous speech.

Front Hum Neurosci

National Center for Adaptive Neurotechnologies, Wadsworth Center, New York State Department of Health, Albany, NY, USA; Department of Neurology, Albany Medical College, Albany, NY, USA.

Published: March 2015

Acoustic speech output results from the coordinated articulation of dozens of muscles, bones and cartilages of the vocal mechanism. While we commonly take the fluency and speed of our speech production for granted, the neural mechanisms facilitating the requisite muscular control are not completely understood. Previous neuroimaging and electrophysiology studies of speech sensorimotor control have typically concentrated on speech sounds (i.e., phonemes, syllables and words) in isolation; sentence-length investigations have largely been used to inform coincident linguistic processing. In this study, we examined the neural representations of segmental features (place and manner of articulation, and voicing status) in the context of fluent, continuous speech production. We used recordings from the cortical surface [electrocorticography (ECoG)] to simultaneously evaluate the spatial topography and temporal dynamics of the neural correlates of speech articulation that may mediate the generation of hypothesized gestural or articulatory scores. We found that the representation of place of articulation involved broad networks of brain regions during all phases of speech production: preparation, execution and monitoring. In contrast, manner of articulation and voicing status were dominated by auditory cortical responses after speech had been initiated. These results provide new insight into the articulatory and auditory processes underlying speech production in terms of their motor requirements and acoustic correlates.
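For readers who want a concrete picture of this kind of analysis, the following Python sketch is illustrative only and is not the authors' pipeline. It groups phoneme onsets by one segmental feature (place, manner, or voicing) and averages a high-gamma ECoG band-power array around each onset; the feature table, the high_gamma array, and the onset lists are all hypothetical stand-ins for the kinds of data such a study would use.

    import numpy as np

    # Hypothetical segmental-feature table: each phoneme is labeled by
    # place of articulation, manner of articulation, and voicing status.
    SEGMENTAL_FEATURES = {
        "p": ("bilabial", "stop", "voiceless"),
        "b": ("bilabial", "stop", "voiced"),
        "s": ("alveolar", "fricative", "voiceless"),
        "z": ("alveolar", "fricative", "voiced"),
        "m": ("bilabial", "nasal", "voiced"),
    }

    def feature_averaged_responses(high_gamma, onsets, phonemes,
                                   feature_idx, sfreq, window_s=0.5):
        """Average high-gamma ECoG activity around phoneme onsets, grouped
        by one segmental feature (0 = place, 1 = manner, 2 = voicing).

        high_gamma : (n_channels, n_samples) array of band-power estimates
        onsets     : phoneme onset times in seconds
        phonemes   : phoneme label for each onset
        """
        win = int(window_s * sfreq)
        groups = {}
        for t, ph in zip(onsets, phonemes):
            feats = SEGMENTAL_FEATURES.get(ph)
            if feats is None:
                continue
            start = int(t * sfreq)
            if start + win > high_gamma.shape[1]:
                continue
            groups.setdefault(feats[feature_idx], []).append(
                high_gamma[:, start:start + win])
        # One (n_channels, win) mean response per feature value.
        return {label: np.mean(trials, axis=0)
                for label, trials in groups.items()}

Contrasting the resulting channel-by-time averages across feature values (e.g., bilabial vs. alveolar) is one simple way to probe the spatial topography and temporal dynamics the abstract refers to.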


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4338752
DOI: http://dx.doi.org/10.3389/fnhum.2015.00097

Publication Analysis

Top Keywords

speech production: 12
speech: 10
representations segmental: 8
segmental features: 8
continuous speech: 8
manner articulation: 8
articulation voicing: 8
voicing status: 8
articulation: 5
electrocorticographic representations: 4

Similar Publications

Newborns are able to neurally discriminate between speech and nonspeech right after birth. To date, it remains unknown whether this early speech discrimination and the underlying neural language network are associated with later language development. Preterm-born children are an interesting cohort in which to investigate this relationship, as previous studies have shown that preterm-born neonates exhibit alterations of speech processing and have a greater risk of later language deficits.


Speech production engages a distributed network of cortical and subcortical brain regions. The supplementary motor area (SMA) has long been thought to be a key hub in coordinating across these regions to initiate voluntary movements, including speech. We analyzed direct intracranial recordings from 115 patients with epilepsy as they articulated a single word in a subset of trials from a picture-naming task.


Introduction: Infants born very preterm (VPT, <32 weeks' gestation) are at increased risk for neurodevelopmental impairments, including motor, cognitive and behavioural delay. Parents of infants born VPT also have poorer mental health outcomes compared with parents of infants born at term. We have developed an intervention programme called TEDI-Prem (Telehealth for Early Developmental Intervention in babies born very preterm) based on previous research.


Introduction: Communication disorders are among the most common disorders that, if not treated in childhood, can cause many social, educational, and psychological problems in adulthood. One technology that can be helpful for these disorders is mobile health (m-Health) technology. This study examines patients' attitudes toward and willingness to use this technology, and compares its advantages and challenges with those of face-to-face treatment from the patients' perspective.


In this paper, we introduce FUSION-ANN, a novel artificial neural network (ANN) designed for acoustic emission (AE) signal classification. FUSION-ANN comprises four distinct ANN branches, each housing an independent multilayer perceptron. To represent AE signals, we extract denoised speech-recognition features such as linear predictive coding, Mel-frequency cepstral coefficients, and gammatone cepstral coefficients.
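As a concrete illustration of the feature extraction this abstract mentions, the Python sketch below computes MFCC and LPC features with librosa; it is not the FUSION-ANN authors' code, the file path is a placeholder, and gammatone cepstral coefficients would need a separate library since librosa does not provide them.

    import librosa
    import numpy as np

    # Load a signal (placeholder path); AE recordings would be loaded
    # the same way from their native sample rate.
    y, sr = librosa.load("signal.wav", sr=None)

    # Mel-frequency cepstral coefficients: one 13-dim vector per frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    # Linear predictive coding coefficients over the whole signal
    # (lpc[0] is always 1, so it is dropped below).
    lpc = librosa.lpc(y, order=12)

    # Summarize frame-wise MFCCs into a fixed-length vector, e.g., as
    # input to one branch of a classifier.
    feature_vector = np.concatenate([mfcc.mean(axis=1), lpc[1:]])
    print(feature_vector.shape)

Pooling frame-wise features into a fixed-length vector, as done here with a simple mean, is one common way to feed variable-length signals into a fixed-input network branch.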

