
Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2851798
DOI: http://dx.doi.org/10.1073/pnas.1000186107

Publication Analysis

Top Keywords

articulatory commands (4)
commands automatically (4)
automatically involuntarily (4)
involuntarily activated (4)
activated speech (4)
speech perception? (4)
articulatory (1)
automatically (1)
involuntarily (1)
activated (1)

Similar Publications

Does pre-speech auditory modulation reflect processes related to feedback monitoring or speech movement planning?

Neurosci Lett

November 2024

Department of Speech and Hearing Sciences, University of Washington, 1417 NE 42nd Street, Seattle, WA 98105-6246, United States. Electronic address:

Previous studies have revealed that auditory processing is modulated during the planning phase immediately prior to speech onset. To date, the functional relevance of this pre-speech auditory modulation (PSAM) remains unknown. Here, we investigated whether PSAM reflects neuronal processes that are associated with preparing auditory cortex for optimized feedback monitoring as reflected in online speech corrections.

View Article and Find Full Text PDF

Word frequency has similar effects in picture naming and gender decision: A failure to replicate Jescheniak and Levelt (1994).

Acta Psychol (Amst)

November 2023

Psychology of Language Department, Max Planck Institute for Psycholinguistics, The Netherlands; Donders Centre of Cognition and Cognitive Neuroscience, Radboud University, The Netherlands.

Word frequency plays a key role in theories of lexical access, which assume that the word frequency effect (WFE, faster access to high-frequency than low-frequency words) occurs as a result of differences in the representation and processing of the words. In a seminal paper, Jescheniak and Levelt (1994) proposed that the WFE arises during the retrieval of word forms, rather than the retrieval of their syntactic representations (their lemmas) or articulatory commands. An important part of Jescheniak and Levelt's argument was that they found a stable WFE in a picture naming task, which requires complete lexical access, but not in a gender decision task, which only requires access to the words' lemmas and not their word forms.


Neural control of lexical tone production in human laryngeal motor cortex.

Nat Commun

October 2023

Department of Neurological Surgery, University of California, San Francisco, CA, 94143, USA.

Article Synopsis
  • Tonal languages, spoken by about one-third of the global population, use pitch control to distinguish words with different meanings through specific patterns called lexical tones.
  • A study utilizing high-density direct cortical recordings from native Mandarin speakers found that the laryngeal motor cortex encodes the movement information needed for producing these pitch dynamics rather than categorizing tones.
  • Two distinct activity patterns in the laryngeal motor cortex were identified for pitch rising and lowering, with direct brain stimulation confirming their influence on tone production.

Silent speech interfaces have been pursued to restore spoken communication for individuals with voice disorders and to facilitate intuitive communications when acoustic-based speech communication is unreliable, inappropriate, or undesired. However, the current methodology for silent speech faces several challenges, including bulkiness, obtrusiveness, low accuracy, limited portability, and susceptibility to interferences. In this work, we present a wireless, unobtrusive, and robust silent speech interface for tracking and decoding speech-relevant movements of the temporomandibular joint.

