The effects of self-generated synchronous and asynchronous visual speech feedback on overt stuttering frequency.

J Commun Disord

The Laboratory for Stuttering Research, Department of Communication Sciences & Disorders, University of Mississippi, University, MS 38677, USA.

Published: June 2009

Purpose: Relatively recent research documents that visual choral speech, which represents an externally generated form of synchronous visual speech feedback, significantly enhanced fluency in those who stutter. As a consequence, it was hypothesized that self-generated synchronous and asynchronous visual speech feedback would likewise enhance fluency. Therefore, the purpose of this study was to investigate the effects of self-generated visual feedback (i.e., synchronous speech feedback with a mirror and asynchronous speech feedback via delayed visual feedback) on overt stuttering frequency in those who stutter.

Method: Eight people who stutter (4 males, 4 females), ranging from 18 to 42 years of age, participated in this study. Because of the nature of visual speech feedback, the speaking task required participants to recite memorized phrases in control and experimental speaking conditions so that visual attention could be focused on the speech feedback rather than on a written passage. During experimental conditions, participants recited memorized phrases while simultaneously focusing on the movement of their lips, mouth, and jaw within their own synchronous (i.e., mirror) and asynchronous (i.e., delayed video signal) visual speech feedback.

Results: Results indicated that the self-generated visual feedback speaking conditions significantly decreased stuttering frequency (Greenhouse-Geisser corrected, p < .001); post hoc orthogonal comparisons revealed no significant difference in stuttering frequency reduction between the synchronous and asynchronous visual feedback speaking conditions (p = .2554).
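The Greenhouse-Geisser correction reported above adjusts repeated-measures ANOVA degrees of freedom when the sphericity assumption is violated. As a purely illustrative sketch (the participant counts below are hypothetical, not data from the study), the correction factor epsilon can be computed from the double-centered covariance matrix of the condition scores:

```python
# Illustrative sketch: Greenhouse-Geisser epsilon for an
# n-participants x k-conditions repeated-measures table.
# The data below are hypothetical, not from the study.

def gg_epsilon(data):
    """Greenhouse-Geisser epsilon via the trace form:
    eps = tr(S*)^2 / ((k - 1) * tr(S*^2)),
    where S* is the double-centered sample covariance matrix."""
    n = len(data)
    k = len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(k)]
    # Sample covariance matrix of the k conditions.
    S = [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in data) / (n - 1)
          for j in range(k)] for i in range(k)]
    grand = sum(sum(r) for r in S) / (k * k)
    row_m = [sum(S[i]) / k for i in range(k)]
    # Double-center: S*_ij = S_ij - rowmean_i - rowmean_j + grandmean.
    Sc = [[S[i][j] - row_m[i] - row_m[j] + grand for j in range(k)]
          for i in range(k)]
    tr = sum(Sc[i][i] for i in range(k))
    tr_sq = sum(Sc[i][j] * Sc[j][i] for i in range(k) for j in range(k))
    return tr ** 2 / ((k - 1) * tr_sq)

# Hypothetical stutter counts: 8 participants x 3 conditions
# (control, mirror, delayed video).
counts = [
    [12, 4, 5], [9, 3, 4], [15, 6, 5], [8, 2, 3],
    [11, 5, 4], [14, 4, 6], [10, 3, 3], [13, 5, 5],
]
eps = gg_epsilon(counts)
print(round(eps, 3))  # epsilon always lies in [1/(k-1), 1]
```

The corrected test multiplies both ANOVA degrees of freedom by epsilon before looking up the p value; epsilon = 1 means sphericity holds, while values near the lower bound 1/(k-1) indicate a strong violation.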

Conclusions: These data suggest that synchronous and asynchronous self-generated visual speech feedback is associated with significant reductions in overt stuttering frequency. Study results were discussed relative to existing theoretical models of fluency enhancement via speech feedback, such as the engagement of mirror neuron networks, the EXPLAN model, and the Dual Premotor System Hypothesis. Further research in the area of self-generated visual speech feedback, as well as theoretical constructs accounting for how exposure to multi-sensory speech feedback enhances fluency, is warranted.

Learning Outcomes: Readers will be able to (1) discuss the multi-sensory nature of fluency-enhancing speech feedback, (2) compare and contrast synchronous and asynchronous self-generated visual speech feedback, and (3) compare and contrast self-generated and externally generated visual speech feedback.

DOI: http://dx.doi.org/10.1016/j.jcomdis.2009.02.002


Similar Publications

Prosodic Modifications to Challenging Communicative Environments in Preschoolers.

Lang Speech

January 2025

Department of Educational Psychology, Leadership, & Counseling, Texas Tech University, USA.

Adapting one's speaking style is particularly crucial as children start interacting with diverse conversational partners in various communication contexts. The study investigated the capacity of preschool children aged 3-5 years (N = 28) to modify their speaking styles in response to background noise, referred to as noise-adapted speech, and when talking to an interlocutor who pretended to have hearing loss, referred to as clear speech. We examined how the two modified speaking styles differed across this age range.

The Dysphagia Handicap Index (DHI) is commonly utilized to evaluate how dysphagia impacts patients' quality of life (QoL) across physical, functional, and emotional dimensions. The primary aim of this research was to linguistically validate and culturally adapt the DHI into Bangla. A cross-sectional study design was chosen, with Beaton's protocol as the guiding framework for validating and adapting the DHI.

Probing Sensorimotor Memory through the Human Speech-Audiomotor System.

J Neurophysiol

December 2024

Yale Child Study Center, Yale School of Medicine, Yale University, New Haven, CT, USA.

Our knowledge of human sensorimotor learning and memory is predominantly based on the visuo-spatial workspace and limb movements. Humans also have a remarkable ability to produce and perceive speech sounds. We asked if the human speech-auditory system could serve as a model to characterize retention of sensorimotor memory in a workspace which is functionally independent of the visuo-spatial one.

SpeechMatch-A novel digital approach to supporting communication for neurodiverse groups.

Healthc Technol Lett

December 2024

Department of Intellectual Disability Neuropsychiatry, Research Team Cornwall Partnership NHS Foundation Trust Truro UK.

Communication can be a challenge for a significant minority of the population. People with intellectual disability or autism, as well as stroke survivors, can encounter significant problems and stigma related to their communication abilities, leading to worse health and social outcomes. SpeechMatch (https://www.

Objectives: To date, there has been no rigorous exploration of voice and communication modification training (VCMT) among transgender and gender-nonconforming (TGNC) individuals using digital technology. We sought to evaluate and describe the iterative process of app development using a community-based approach.

Methods: An interprofessional team of voice health care professionals, application developers, designers, and TGNC community members was assembled to conceive the functionality, content, and design of a mobile app to support VCMT for TGNC people.
