Purpose: The purpose of this study was to assess cortical hemodynamic response patterns in 3- to 7-year-old children listening to two speech modes: normally vocalized and whispered speech. Understanding whispered speech requires processing a relatively weak, noisy signal as well as the cognitive ability to grasp the speaker's reason for whispering.

Method: Near-infrared spectroscopy (NIRS) was used to assess changes in cortical oxygenated hemoglobin in 16 typically developing children.
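
For readers unfamiliar with how NIRS yields hemoglobin measures, the sketch below shows the standard modified Beer-Lambert law conversion from optical-density changes at two wavelengths to changes in oxygenated (HbO) and deoxygenated (HbR) hemoglobin. The wavelengths, extinction coefficients, and pathlength values are illustrative assumptions, not parameters reported in this study.

```python
import numpy as np

# Illustrative extinction coefficients [HbO, HbR] in 1/(mM*cm) at two
# typical NIRS wavelengths (approximate textbook values, not values
# from the study).
E = np.array([[1.4866, 3.8437],   # ~760 nm: [HbO, HbR]
              [2.5264, 1.7986]])  # ~850 nm: [HbO, HbR]

def mbll(delta_od, distance_cm=3.0, dpf=6.0):
    """Convert optical-density changes at two wavelengths into
    concentration changes [dHbO, dHbR] (mM) via the modified
    Beer-Lambert law. distance_cm is the source-detector separation;
    dpf is an assumed differential pathlength factor."""
    delta_od = np.atleast_2d(delta_od).reshape(2, -1)
    # Solve E @ dC * (distance * dpf) = delta_od for dC.
    return np.linalg.solve(E, delta_od) / (distance_cm * dpf)

# Example: a small optical-density increase at both wavelengths.
d_hbo, d_hbr = mbll([0.01, 0.02])
print(d_hbo, d_hbr)
```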

Results: A pronounced difference in oxygenated hemoglobin levels between the speech modes was found over the left ventral sensorimotor cortex. In particular, over areas representing the speech articulators and their motion, such as the larynx, lips, and jaw, oxygenated hemoglobin was higher for whispered than for normal speech. The acoustically weaker stimulus thus induced the stronger hemodynamic response. Moreover, this occurred over areas involved in speech articulation, even though the children did not overtly articulate speech during the measurements.
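
To make the reported contrast concrete, a minimal sketch of a per-channel paired comparison of mean HbO between the two speech modes might look like the following; the data, channel count, and (uncorrected) test are invented for illustration and are not the study's actual analysis.

```python
import numpy as np
from scipy.stats import ttest_rel

# Simulated stand-in data: one mean HbO value per child, channel, and
# speech mode. 16 children as in the study; the channel count and the
# values themselves are invented for illustration.
rng = np.random.default_rng(0)
n_children, n_channels = 16, 44
hbo_whisper = rng.normal(0.2, 0.1, size=(n_children, n_channels))
hbo_normal = rng.normal(0.1, 0.1, size=(n_children, n_channels))

# Paired t-test per channel across children.
t, p = ttest_rel(hbo_whisper, hbo_normal, axis=0)
print("channels with whisper > normal (uncorrected p < .05):",
      np.where((p < 0.05) & (t > 0))[0])
```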

Conclusion: Because whisper is a special form of communication not often used in daily life, we suggest that the hemodynamic response difference over left ventral sensorimotor cortex resulted from inner (covert) practice or imagination of the different articulatory actions necessary to produce whisper as opposed to normal speech.

Source
http://dx.doi.org/10.1044/2016_JSLHR-H-15-0435

Publication Analysis

Top Keywords

whispered speech (12); hemodynamic response (12); oxygenated hemoglobin (12); speech (10); near-infrared spectroscopy (8); cortical hemodynamic (8); 7-year-old children (8); speech modes (8); left ventral (8); ventral sensorimotor (8)

Similar Publications

Multilingual Prediction of Cognitive Impairment with Large Language Models and Speech Analysis. Brain Sci, December 2024. School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA 19104, USA.

Background: Cognitive impairment poses a significant global health challenge, emphasizing the critical need for early detection and intervention. Traditional diagnostics such as neuroimaging and clinical evaluations are often subjective, costly, and inaccessible, especially in resource-poor settings. Previous speech-analysis research has been conducted primarily on English data, leaving multilingual settings unexplored.


Introduction: The 2024 Voice AI Symposium, hosted by the Bridge2AI-Voice Consortium in Tampa, FL, featured two keynote speeches that addressed the intersection of voice AI, healthcare, ethics, and law. Dr. Rupal Patel and Dr.


Objectives: As artificial intelligence evolves, integrating speech processing into home healthcare (HHC) workflows is increasingly feasible. Audio-recorded communications enhance risk-identification models, with automatic speech recognition (ASR) systems as a key component. This study evaluates the transcription accuracy and equity of four ASR systems (Amazon Web Services (AWS) General, AWS Medical, Whisper, and Wave2Vec) in transcribing patient-nurse communication in US HHC, focusing on their accuracy in transcribing speech from Black and White English-speaking patients.
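
Transcription accuracy in comparisons like this is conventionally scored with word error rate (WER). The following is a minimal, self-contained sketch of the metric, not the study's evaluation code.

```python
def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical example: one inserted word out of a 4-word reference.
print(wer("the patient reported pain", "the patient reported no pain"))  # 0.25
```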


Characterization of upper esophageal sphincter pressures relative to vocal acoustics. J Appl Physiol (1985), January 2025. Department of Otolaryngology, University of Minnesota, Minneapolis, Minnesota, United States.

Strength of vocal fold adduction has been hypothesized to be a critical factor influencing vocal acoustics but has been difficult to measure directly during phonation. Recent work has suggested that upper esophageal sphincter (UES) pressure, which can be easily assessed, increases with stronger vocal fold adduction, raising the possibility that UES pressure might indirectly reflect vocal fold adduction strength. However, concurrent UES pressure and vocal acoustics have not previously been examined across different vocal tasks.


Two competing accounts propose that the disruption of short-term memory by irrelevant speech arises either from interference-by-process (e.g., the changing-state effect) or from attentional capture, but it is unclear how whispering affects the irrelevant speech effect.

