Individual variability as a window on production-perception interactions in speech motor control.

J Acoust Soc Am

Donders Institute for Brain, Cognition and Behaviour, Center for Cognitive Neuroimaging, Radboud University, P.O. Box 9101, Nijmegen, 6500 HB, The Netherlands.

Published: October 2017

An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: If speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability, as well as average between-phoneme contrasts. Results show not only that individuals vary in their production and perceptual abilities, but also that better discriminators have more distinctive vowel production targets, that is, targets with less within-phoneme variability and greater between-phoneme distances, confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
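The two production measures named in the abstract can be illustrated with a short sketch. Everything below is hypothetical: the synthetic F1/F2 tokens and the function names are illustrative assumptions, not the study's actual data or analysis code.

```python
import numpy as np

# Hypothetical formant measurements (F1, F2 in Hz) for two vowel categories;
# real data would come from acoustic analysis of the pseudo-word recordings.
rng = np.random.default_rng(0)
vowel_a = rng.normal(loc=[700.0, 1200.0], scale=40.0, size=(20, 2))
vowel_i = rng.normal(loc=[300.0, 2300.0], scale=40.0, size=(20, 2))

def within_phoneme_variability(tokens):
    """Mean Euclidean distance of each token from its category centroid."""
    centroid = tokens.mean(axis=0)
    return np.linalg.norm(tokens - centroid, axis=1).mean()

def between_phoneme_distance(tokens_a, tokens_b):
    """Euclidean distance between the two category centroids."""
    return np.linalg.norm(tokens_a.mean(axis=0) - tokens_b.mean(axis=0))

var_a = within_phoneme_variability(vowel_a)
var_i = within_phoneme_variability(vowel_i)
dist = between_phoneme_distance(vowel_a, vowel_i)
print(f"within /a/: {var_a:.1f} Hz, within /i/: {var_i:.1f} Hz, between: {dist:.1f} Hz")
```

On this construal, "more distinctive targets" simply means lower within-category values and a higher between-category value per speaker.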

DOI: http://dx.doi.org/10.1121/1.5006899


Similar Publications

Newborns are able to neurally discriminate between speech and nonspeech right after birth. To date it remains unknown whether this early speech discrimination and the underlying neural language network is associated with later language development. Preterm-born children are an interesting cohort to investigate this relationship, as previous studies have shown that preterm-born neonates exhibit alterations of speech processing and have a greater risk of later language deficits.

Speech production engages a distributed network of cortical and subcortical brain regions. The supplementary motor area (SMA) has long been thought to be a key hub in coordinating across these regions to initiate voluntary movements, including speech. We analyzed direct intracranial recordings from 115 patients with epilepsy as they articulated a single word in a subset of trials from a picture-naming task.

Introduction: Infants born very preterm (VPT, <32 weeks' gestation) are at increased risk for neurodevelopmental impairments, including motor, cognitive, and behavioural delay. Parents of infants born VPT also have poorer mental health outcomes compared with parents of infants born at term. We have developed an intervention programme called TEDI-Prem (Telehealth for Early Developmental Intervention in babies born very preterm) based on previous research.

Introduction: Communication disorders are one of the most common disorders that, if not treated in childhood, can cause many social, educational, and psychological problems in adulthood. One of the technologies that can be helpful in these disorders is mobile health (m-Health) technology. This study aims to examine the attitude and willingness to use this technology and compare the advantages and challenges of this technology and face-to-face treatment from the perspective of patients.

In this paper, we introduce FUSION-ANN, a novel artificial neural network (ANN) designed for acoustic emission (AE) signal classification. FUSION-ANN comprises four distinct ANN branches, each housing an independent multilayer perceptron. We extract denoised features of speech recognition such as linear predictive coding, Mel-frequency cepstral coefficient, and gammatone cepstral coefficient to represent AE signals.
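As a concrete illustration of one of the features mentioned above, linear predictive coding, the sketch below estimates LPC coefficients with the autocorrelation method. The test signal and the order-2 model are illustrative assumptions, not the FUSION-ANN implementation.

```python
import numpy as np

def lpc_coefficients(signal, order):
    """Estimate LPC coefficients via the autocorrelation method:
    solve the Yule-Walker equations R a = r for the predictor a,
    so that s[n] is approximated by sum_k a[k] * s[n-1-k]."""
    n = len(signal)
    # Autocorrelation values for lags 0..order.
    r = np.array([signal[:n - k] @ signal[k:] for k in range(order + 1)])
    # Toeplitz system of Yule-Walker equations.
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

# Illustrative signal: a damped sinusoid, which an order-2 predictor models well.
t = np.arange(512)
sig = np.exp(-0.005 * t) * np.sin(0.1 * np.pi * t)
a = lpc_coefficients(sig, order=2)

# One-step prediction s[n] ~= a[0]*s[n-1] + a[1]*s[n-2], and the relative
# residual energy, which should be small for a well-modelled signal.
pred = a[0] * sig[1:-1] + a[1] * sig[:-2]
rel_err = np.sum((sig[2:] - pred) ** 2) / np.sum(sig[2:] ** 2)
print(a, rel_err)
```

In feature pipelines like the one the snippet describes, such coefficients (or cepstral transforms of them) serve as a compact spectral-envelope representation of each signal frame.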
