Neuronal activity in the human lateral temporal lobe. II. Responses to the subject's own voice.

Exp Brain Res

Department of Neurobiology, Max-Planck-Institute for Biophysical Chemistry, Göttingen-Nikolausberg, Federal Republic of Germany.

Published: December 1989

We have recorded neuronal responses in the lateral temporal lobe of man to overt speech during open brain surgery for epilepsy. Tests included overt naming of objects and reading of words or short sentences shown on a projector screen, repetition of tape-recorded words or sentences presented over a loudspeaker, and free conversation. Neuronal activity in the dominant and non-dominant temporal lobe was about equally affected by overt speech. As during listening to language (see Creutzfeldt et al. 1989), responses differed between recordings from sites in the superior and the middle or inferior temporal gyrus. In the superior temporal gyrus all neurons responded clearly, and each in a characteristic manner. Activation could be related to phonemic aspects, to segmentation, or to the length of spoken words or sentences. However, most neurons were affected differently by listening to words and language than by overt speaking. In neuronal populations recorded simultaneously with one or two microelectrodes, some neurons responded predominantly to one or the other type of speech. Excitatory responses during overt speaking were always auditory. In the middle temporal gyrus more neurons (about 2/3) responded to overt speaking than to listening alone. Activations elicited during overt speech were seen in about 1/3 of our sample, but they were more sluggish than those recorded in the superior gyrus. A prominent feature was suppression of on-going activity, which we found in about 1/3 of middle and in some superior temporal gyrus neurons. This suppression could precede vocalization by up to a few hundred ms, and could outlast it by up to 1 s. Evoked ECoG potentials to words heard or spoken were different, and those to overt speech were more widespread.


Source
http://dx.doi.org/10.1007/BF00249601

Publication Analysis

Top Keywords

overt speech (16)
temporal gyrus (16)
temporal lobe (12)
gyrus neurons (12)
overt speaking (12)
neuronal activity (8)
lateral temporal (8)
overt (8)
listening language (8)
superior temporal (8)

Similar Publications

Private speech is a tool through which children self-regulate. The regulatory content of children's overt private speech is associated with response to task difficulty and task performance. Parenting is proposed to play a role in the development of private speech as co-regulatory interactions become represented by the child as private speech to regulate thinking and behaviour.

Article Synopsis
  • Drug-induced stuttering is an acquired speech disorder caused by certain medications, resembling developmental stuttering, and has been primarily studied through case reports and adverse drug reactions.
  • A recent study analyzed electronic health records from a major medical center to identify and classify drugs linked to this type of stuttering, reviewing 40 suspected cases.
  • The findings revealed that 18 different drugs were associated with stuttering in 22 individuals, especially in the classes of antiseizure agents, CNS stimulants, and antidepressants, with topiramate being the most commonly implicated drug; the study emphasizes the need for better documentation of medication-related speech issues in EHRs.

Sentence production is the uniquely human ability to transform complex thoughts into strings of words. Despite the importance of this process, language production research has primarily focused on single words. It remains an untested assumption that insights from this literature generalize to more naturalistic utterances like sentences.


Continuous and discrete decoding of overt speech with scalp electroencephalography (EEG).

J Neural Eng

October 2024

Electrical and Computer Engineering, University of Houston, N308 Engineering Building I, Houston, Texas, 77204-4005, United States.

Article Synopsis
  • This research investigates the use of non-invasive EEG to develop speech Brain-Computer Interfaces (BCIs) that decode speech features directly, aiming for a more natural communication method.
  • Deep learning models, such as CNNs and RNNs, were tested for speech decoding tasks, showing significant success in distinguishing both discrete and continuous speech elements, while also indicating the importance of specific EEG frequency bands for performance.
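The synopsis mentions CNN-based decoding of EEG only at a high level. As a rough illustration of the core idea of a convolutional layer sliding temporal kernels over multichannel EEG (this is not the study's actual model, and the array shapes and kernel sizes below are arbitrary choices for the sketch), a minimal NumPy version:

```python
import numpy as np

def conv1d_features(eeg, kernels, stride=4):
    """Slide 1-D kernels over each EEG channel and return ReLU
    feature maps: a minimal stand-in for one CNN layer.

    eeg     : (channels, samples) array of raw signal
    kernels : (n_kernels, width) array of temporal filters
    """
    n_ch, n_s = eeg.shape
    n_k, w = kernels.shape
    n_out = (n_s - w) // stride + 1
    out = np.zeros((n_ch, n_k, n_out))
    for c in range(n_ch):
        for k in range(n_k):
            for t in range(n_out):
                seg = eeg[c, t * stride : t * stride + w]
                # dot product with the kernel, then ReLU
                out[c, k, t] = max(0.0, float(seg @ kernels[k]))
    return out

# Toy example: 8 channels, 256 samples of synthetic "EEG"
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 256))
kernels = rng.standard_normal((4, 16))
features = conv1d_features(eeg, kernels)
print(features.shape)  # (8, 4, 61)
```

A real decoder would stack several such layers (with learned kernels, pooling, and a classifier head) in a framework like PyTorch; the sketch only shows the convolution-over-time step that lets the network pick up band-specific temporal structure.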
