Publications by authors named "Gregory B Cogan"

Patients suffering from debilitating neurodegenerative diseases often lose the ability to communicate, detrimentally affecting their quality of life. One solution to restore communication is to decode signals directly from the brain to enable neural speech prostheses. However, decoding has been limited by coarse neural recordings that inadequately capture the rich spatio-temporal structure of human brain signals.

Objective: Effective surgical treatment of drug-resistant epilepsy depends on accurate localization of the epileptogenic zone (EZ). High-frequency oscillations (HFOs) are potential biomarkers of the EZ. Previous research has shown that HFOs often occur within submillimeter areas of brain tissue and that the coarse spatial sampling of clinical intracranial electrode arrays may limit the accurate capture of HFO activity.

Segmenting the continuous speech stream into units for further perceptual and linguistic analysis is fundamental to speech recognition. The speech amplitude envelope (SE) has long been considered a fundamental temporal cue for segmenting speech. Does the temporal fine structure (TFS), a significant part of the speech signal often considered to carry primarily spectral information, also contribute to speech segmentation? Using magnetoencephalography, we show that the TFS entrains cortical responses between 3 and 6 Hz and demonstrate, using mutual information analysis, that (i) the temporal information in the TFS can be reconstructed from a measure of frame-to-frame spectral change and correlates with the SE, and (ii) spectral resolution is key to the extraction of such temporal information.
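The analysis described above pairs two quantities: the amplitude envelope and a frame-to-frame spectral-change measure, whose statistical dependence is then estimated with mutual information. A minimal sketch of that idea is below; the synthetic signal, STFT frame parameters, and histogram-based MI estimator are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from scipy.signal import hilbert, stft

def amplitude_envelope(x):
    # Speech amplitude envelope (SE): magnitude of the analytic signal.
    return np.abs(hilbert(x))

def spectral_change(x, fs, nperseg=256):
    # Frame-to-frame spectral change: summed absolute difference between
    # consecutive short-time magnitude spectra.
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)
    return np.sum(np.abs(np.diff(mag, axis=1)), axis=0)

def mutual_information(a, b, bins=16):
    # Plug-in (histogram-based) MI estimate, in bits, between two 1-D series.
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# Synthetic "speech-like" signal: a carrier modulated at a syllabic (~4 Hz) rate.
fs = 8000
t = np.arange(fs * 2) / fs
env = 1.0 + np.sin(2 * np.pi * 4 * t)
x = env * np.sin(2 * np.pi * 500 * t)

se = amplitude_envelope(x)
sc = spectral_change(x, fs)
# Resample the envelope to one value per spectral-change frame before comparing.
se_frames = np.interp(np.linspace(0, len(se) - 1, len(sc)),
                      np.arange(len(se)), se)
mi = mutual_information(se_frames, sc)
```

A nonzero `mi` here would indicate that the spectral-change series carries temporal information shared with the envelope, which is the kind of dependence the mutual information analysis in the study quantifies.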

Verbal working memory (vWM) involves storing and manipulating phonological information from sensory input. An influential theory of vWM proposes that manipulation is carried out by a central executive, while storage is performed by two interacting systems: a phonological input buffer that captures sound-based information and an articulatory rehearsal system that controls speech motor output. Whether, when, and how neural activity in the brain encodes these components remains unknown.

The motor cortex in the brain tracks lip movements to help with speech perception.

With a few exceptions, the literature on face recognition and its neural basis derives from the presentation of single faces. However, in many ecologically typical situations, we see more than one face, in different communicative contexts. One of the principal ways in which we interact using our faces is kissing.

Historically, the study of speech processing has emphasized a strong link between auditory perceptual input and motor production output. A kind of 'parity' is essential, as both perception- and production-based representations must form a unified interface to facilitate access to higher-order language processes such as syntax and semantics, believed to be computed in the dominant, typically left hemisphere. Although various theories have been proposed to unite perception and production, the underlying neural mechanisms are unclear.

Our ability to selectively attend to one auditory signal amid competing input streams, epitomized by the "Cocktail Party" problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared with responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance.

Recent work has implicated low-frequency (<20 Hz) neuronal phase information as important for both auditory (<10 Hz) and speech [theta (∼4-8 Hz)] perception. Activity on the timescale of theta corresponds linguistically to the average length of a syllable, suggesting that information within this range has consequences for the segmentation of meaningful units of speech. Longer timescales that correspond to lower frequencies [delta (1-3 Hz)] also reflect important linguistic features (prosodic/suprasegmental), but it is unknown whether the patterns of activity in this range are similar to theta.
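Band-limited phase of the kind compared here (delta vs. theta) is conventionally obtained by band-pass filtering and then taking the instantaneous phase of the analytic signal. The sketch below illustrates that standard procedure on a synthetic trace; the sampling rate, filter order, and test signal are assumptions for illustration, not the study's recording parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_phase(x, fs, lo, hi, order=4):
    # Zero-phase band-pass filter (second-order sections for numerical
    # stability), then instantaneous phase via the Hilbert transform.
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return np.angle(hilbert(sosfiltfilt(sos, x)))

# Synthetic trace with delta (2 Hz) and theta (5 Hz) components plus noise.
fs = 200
t = np.arange(fs * 5) / fs
rng = np.random.default_rng(0)
x = (np.sin(2 * np.pi * 2 * t)
     + 0.5 * np.sin(2 * np.pi * 5 * t)
     + 0.2 * rng.standard_normal(len(t)))

delta_phase = band_phase(x, fs, 1, 3)   # delta band (1-3 Hz)
theta_phase = band_phase(x, fs, 4, 8)   # theta band (4-8 Hz)
```

Comparing the structure of `delta_phase` and `theta_phase` (e.g., their phase-locking across trials) is the kind of analysis that would reveal whether delta-range activity patterns resemble those in the theta range.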
