Purpose: The purpose of this study was to assess cortical hemodynamic response patterns in 3- to 7-year-old children listening to two speech modes: normally vocalized and whispered speech. Understanding whispered speech requires processing of the relatively weak, noisy signal, as well as the cognitive ability to understand the speaker's reason for whispering.
Method: Near-infrared spectroscopy (NIRS) was used to measure changes in cortical oxygenated hemoglobin in 16 typically developing children.
Results: A pronounced difference in oxygenated hemoglobin levels between the two speech modes was found over left ventral sensorimotor cortex. In particular, over areas representing speech articulators and their motion, such as the larynx, lips, and jaw, oxygenated hemoglobin was higher for whispered than for normal speech. The weaker stimulus in terms of sound energy thus induced the stronger hemodynamic response. Moreover, this occurred over areas involved in speech articulation, even though the children did not overtly articulate speech during the measurements.
Conclusion: Because whispering is a special form of communication not often used in daily life, we suggest that the hemodynamic response difference over left ventral sensorimotor cortex resulted from inner (covert) practice or imagination of the different articulatory actions needed to produce whispered as opposed to normal speech.
DOI: http://dx.doi.org/10.1044/2016_JSLHR-H-15-0435
Brain Sci
December 2024
School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA 19104, USA.
Background: Cognitive impairment poses a significant global health challenge, underscoring the critical need for early detection and intervention. Traditional diagnostics such as neuroimaging and clinical evaluations are often subjective, costly, and inaccessible, especially in resource-poor settings. Previous research on speech analysis has been conducted primarily with English data, leaving multilingual settings unexplored.
Front Digit Health
December 2024
Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, United States.
Introduction: The 2024 Voice AI Symposium, hosted by the Bridge2AI-Voice Consortium in Tampa, FL, featured two keynote speeches that addressed the intersection of voice AI, healthcare, ethics, and law. Dr. Rupal Patel and Dr.
JAMIA Open
December 2024
Center for Home Care Policy & Research, VNS Health, New York, NY 10017, United States.
Objectives: As artificial intelligence evolves, integrating speech processing into home healthcare (HHC) workflows is increasingly feasible. Audio-recorded communications can enhance risk-identification models, with automatic speech recognition (ASR) systems as a key component. This study evaluates the transcription accuracy and equity of four ASR systems (Amazon Web Services [AWS] General, AWS Medical, Whisper, and Wav2Vec) in transcribing patient-nurse communication in US HHC, focusing on their accuracy in transcribing speech from Black and White English-speaking patients.
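For readers unfamiliar with how transcription accuracy is typically scored in evaluations like this one, the sketch below shows the general approach using word error rate (WER), a standard ASR metric. This is a minimal illustration, not the study's actual pipeline: the audio file name and reference transcript are hypothetical placeholders, and it uses the open-source openai-whisper and jiwer packages.

```python
# Illustrative sketch only (not the study's method): transcribe a recording
# with an open-source Whisper checkpoint, then score the output against a
# human-verified reference transcript using word error rate (WER).
import whisper          # pip install openai-whisper
from jiwer import wer   # pip install jiwer

# Load a pretrained Whisper model; "base" trades accuracy for speed.
model = whisper.load_model("base")

# Transcribe a (hypothetical) patient-nurse audio recording.
result = model.transcribe("patient_nurse_visit.wav")
hypothesis = result["text"]

# Human-verified reference transcript (placeholder text).
reference = "good morning how is your pain today"

# WER = (substitutions + deletions + insertions) / reference word count.
print(f"WER: {wer(reference.lower(), hypothesis.lower()):.3f}")
```

An equity analysis of the kind the study describes would presumably compute such error rates separately for recordings from each patient group and compare the distributions, since a systematic WER gap between groups indicates inequitable transcription performance.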
J Appl Physiol (1985)
January 2025
Department of Otolaryngology, University of Minnesota, Minneapolis, Minnesota, United States.
Strength of vocal fold adduction has been hypothesized to be a critical factor influencing vocal acoustics but has been difficult to measure directly during phonation. Recent work has suggested that upper esophageal sphincter (UES) pressure, which can be easily assessed, increases with stronger vocal fold adduction, raising the possibility that UES pressure might indirectly reflect vocal fold adduction strength. However, concurrent UES pressure and vocal acoustics have not previously been examined across different vocal tasks.
J Acoust Soc Am
November 2024
School of Psychology and Humanities, University of Central Lancashire, Preston, PR1 2HE, United Kingdom.
Two competing accounts propose that the disruption of short-term memory by irrelevant speech arises either from interference-by-process (e.g., the changing-state effect) or from attentional capture, but it is unclear how whispering affects the irrelevant speech effect.