When we vocalize, our brain distinguishes self-generated sounds from external ones. A corollary discharge signal supports this function in animals; however, in humans, its exact origin and temporal dynamics remain unknown. We report electrocorticographic recordings in neurosurgical patients and a connectivity analysis framework based on Granger causality that reveals major directions of neural communication.
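As an illustration of the kind of connectivity analysis this abstract refers to, the sketch below runs pairwise Granger-causality tests on two simulated signals with statsmodels. The channel names, the simulated coupling, and the lag range are assumptions for demonstration only; this is not the authors' framework or data.

```python
# Minimal sketch of pairwise Granger-causality connectivity on two simulated
# "electrode" signals. Channel names, coupling, and lag order are illustrative
# assumptions, not the recordings or analysis pipeline from the study.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 2000
frontal = rng.standard_normal(n)       # hypothetical "frontal" channel
auditory = np.zeros(n)                 # hypothetical "auditory" channel
for t in range(2, n):                  # auditory follows frontal with a 2-sample lag
    auditory[t] = 0.6 * frontal[t - 2] + 0.3 * auditory[t - 1] + rng.standard_normal()

# Null hypothesis: the second column (frontal) does NOT Granger-cause the first (auditory).
data = np.column_stack([auditory, frontal])
results = grangercausalitytests(data, maxlag=5, verbose=False)
for lag, (tests, _) in results.items():
    f_stat, p_value, _, _ = tests["ssr_ftest"]
    print(f"lag={lag}: F={f_stat:.1f}, p={p_value:.2e}")
```

In a real analysis this pairwise test would be repeated across many electrode pairs and time windows, with appropriate corrections for multiple comparisons.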
Sentence production is the uniquely human ability to transform complex thoughts into strings of words. Despite the importance of this process, language production research has primarily focused on single words. It remains an untested assumption that insights from this literature generalize to more naturalistic utterances like sentences.
Convolutional neural networks (CNNs) show great promise for translating decades of research on structural abnormalities in temporal lobe epilepsy into clinical practice. Three-dimensional CNNs typically outperform two-dimensional CNNs in medical imaging. Here we explore for the first time whether a three-dimensional CNN outperforms a two-dimensional CNN for identifying temporal lobe epilepsy-specific features on MRI.
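To make the two-dimensional versus three-dimensional distinction concrete, the sketch below contrasts a slice-wise 2D CNN with a volumetric 3D CNN for a binary MRI classification task in PyTorch. The input size, channel counts, and layer choices are illustrative assumptions and do not reproduce the architectures evaluated in the study.

```python
# Illustrative contrast: a 2D CNN applied slice by slice vs. a 3D CNN applied to
# the whole volume. All shapes and layer sizes are assumptions for demonstration.
import torch
import torch.nn as nn

class Slice2DCNN(nn.Module):
    """Classifies each axial slice independently, then averages slice logits."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, 1)

    def forward(self, volume):                     # volume: (batch, 1, depth, H, W)
        b, c, d, h, w = volume.shape
        slices = volume.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
        logits = self.classifier(self.features(slices).flatten(1))
        return logits.view(b, d).mean(dim=1)       # pool predictions across slices

class Volumetric3DCNN(nn.Module):
    """Convolves across all three spatial dimensions at once."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, 1)

    def forward(self, volume):                     # volume: (batch, 1, depth, H, W)
        return self.classifier(self.features(volume).flatten(1)).squeeze(1)

# Dummy batch of two 64x64x64 volumes (random data), one logit per subject.
volume = torch.randn(2, 1, 64, 64, 64)
print(Slice2DCNN()(volume).shape, Volumetric3DCNN()(volume).shape)
```

The design difference the abstract turns on is visible here: the 3D model's kernels span the through-plane dimension and can capture cross-slice structure, at the cost of more parameters and memory than the slice-wise 2D model.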
Across the animal kingdom, neural responses in the auditory cortex are suppressed during vocalization, and humans are no exception. A common hypothesis is that suppression increases sensitivity to auditory feedback, enabling the detection of vocalization errors. This hypothesis has previously been confirmed in non-human primates; however, a direct link between auditory suppression and sensitivity in human speech monitoring remains elusive.