Publications by authors named "John Magnotti"

Alteration of responses to salient stimuli occurs in a wide range of brain disorders and may be rooted in pathophysiological brain state dynamics. Specifically, tonic and phasic modes of activity in the reticular activating system (RAS) respectively influence, and are influenced by, salient stimuli. The RAS influences the spectral characteristics of activity in the neocortex, shifting the balance between low- and high-frequency fluctuations.

In the McGurk effect, presentation of incongruent auditory and visual speech evokes a fusion percept different from either component modality. We show that repeatedly experiencing the McGurk effect for 14 days induces a change in auditory-only speech perception: the auditory component of the McGurk stimulus begins to evoke the fusion percept, even when presented on its own without accompanying visual speech. This perceptual change, termed fusion-induced recalibration (FIR), was talker-specific and syllable-specific and persisted for a year or more in some participants without any additional McGurk exposure.

To mitigate limitations of self-reported mood assessments, we introduce a novel affective bias task. The task quantifies instantaneous emotional state by leveraging the phenomenon of affective bias, in which people interpret external emotional stimuli in a manner consistent with their current emotional state. This study establishes task stability in measuring and tracking depressive symptoms in clinical and nonclinical populations.

In the McGurk effect, visual speech from the face of the talker alters the perception of auditory speech. The diversity of human languages has prompted many intercultural studies of the effect in both Western and non-Western cultures, including native Japanese speakers. Studies of large samples of native English speakers have shown that the McGurk effect is characterized by high variability in the susceptibility of different individuals to the illusion and in the strength of different experimental stimuli to induce the illusion.

The prevalence of synthetic talking faces in both commercial and academic environments is increasing as the technology to generate them grows more powerful and available. While it has long been known that seeing the face of the talker improves human perception of speech-in-noise, recent studies have shown that synthetic talking faces generated by deep neural networks (DNNs) are also able to improve human perception of speech-in-noise. However, in previous studies the benefit provided by DNN synthetic faces was only about half that of real human talkers.

Article Synopsis
  • The paper discusses the importance of three key components (archives, standards, and analysis tools) for effective data sharing in neuroinformatics, particularly in neurophysiology.
  • It compares four free data repositories (DABI, DANDI, OpenNeuro, and Brain-CODE) that help researchers store, share, and analyze neurophysiology data from both humans and animals.
  • The use of common standards like BIDS and NWB is emphasized to enhance data accessibility, while the article also highlights the need for advanced analytical tools in these platforms to support large-scale data analysis in neuroscience (see the reading sketch below).
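
As a minimal illustration of what a common standard such as NWB buys in practice, the sketch below opens a hypothetical NWB file with the pynwb library and lists its acquisition series; the file name is a placeholder and the snippet is not taken from the article.

```python
from pynwb import NWBHDF5IO

# Open a (hypothetical) NWB file and list the acquisition time series it contains.
with NWBHDF5IO("example_session.nwb", mode="r") as io:
    nwbfile = io.read()
    print(nwbfile.session_description)
    for name, series in nwbfile.acquisition.items():
        print(name, getattr(series, "rate", None))  # sampling rate, if regularly sampled
```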

Intracranial electroencephalography (iEEG) provides a unique opportunity to record and stimulate neuronal populations in the human brain. A key step in neuroscience inference from iEEG is localizing the electrodes relative to individual subject anatomy and identified regions in brain atlases. We describe a new software tool, Your Advanced Electrode Localizer (YAEL), that provides an integrated solution for every step of the electrode localization process.

Humans have the unique ability to decode the rapid stream of language elements that constitute speech, even when it is contaminated by noise. Two reliable observations about noisy speech perception are that seeing the face of the talker improves intelligibility and that individuals differ in their ability to perceive noisy speech. We introduce a multivariate BOLD fMRI measure that explains both observations.

As data sharing has become more prevalent, three pillars - archives, standards, and analysis tools - have emerged as critical components in facilitating effective data sharing and collaboration. This paper compares four freely available intracranial neuroelectrophysiology data repositories: Data Archive for the BRAIN Initiative (DABI), Distributed Archives for Neurophysiology Data Integration (DANDI), OpenNeuro, and Brain-CODE. The aim of this review is to describe archives that provide researchers with tools to store, share, and reanalyze both human and non-human neurophysiology data based on criteria that are of interest to the neuroscientific community.

Lesion-behavior mapping (LBM) provides a statistical map of the association between voxel-wise brain damage and individual differences in behavior. To understand whether two behaviors are mediated by damage to distinct regions, researchers often compare LBM weight outputs using either the Overlap method or the Correlation method. However, these methods lack statistical criteria for determining whether two LBMs are distinct or the same, and they are disconnected from a major goal of LBM: predicting behavior from brain damage.
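
As a minimal sketch of the two comparison approaches named above, assuming each LBM is simply a 3-D array of voxel-wise statistical weights; the threshold, Dice coefficient, and Pearson correlation below are illustrative choices, not the article's exact definitions.

```python
import numpy as np

def overlap_method(lbm_a, lbm_b, threshold=0.0):
    """Illustrative 'Overlap' comparison: Dice coefficient of suprathreshold voxels."""
    a = lbm_a > threshold
    b = lbm_b > threshold
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else np.nan

def correlation_method(lbm_a, lbm_b):
    """Illustrative 'Correlation' comparison: Pearson r over voxel-wise weights."""
    return np.corrcoef(lbm_a.ravel(), lbm_b.ravel())[0, 1]

# Toy example with two random "weight maps" on a 10x10x10 voxel grid
rng = np.random.default_rng(0)
map1, map2 = rng.normal(size=(10, 10, 10)), rng.normal(size=(10, 10, 10))
print(overlap_method(map1, map2), correlation_method(map1, map2))
```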

Article Synopsis
  • Emotion is processed in the brain's affective salience network (ASN), which includes areas like the dorsal anterior cingulate (dACC), anterior insula, and ventromedial prefrontal cortex (vmPFC) that are sensitive to the intensity of emotions, while the amygdala mainly tracks intensity rather than valence (positive or negative feelings).
  • A new analysis method called 'specparam' helped identify specific brain areas involved in emotional interpretation, showing that the dACC and vmPFC are predictors of how personal mood affects the perception of facial expressions (see the fitting sketch below).
  • Experimental stimulation of the dACC changed how participants rated emotional faces, suggesting this area plays a causal role in processing emotional stimuli and affecting mood.
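
The 'specparam' method is available in Python via the fooof package (since renamed specparam); the sketch below fits its aperiodic-plus-peaks model to a synthetic power spectrum, purely to illustrate the kind of analysis named above. All numbers are assumptions, not values from the article.

```python
import numpy as np
from fooof import FOOOF  # spectral parameterization ('specparam', formerly FOOOF)

# Build a synthetic power spectrum: 1/f aperiodic background plus a 10 Hz peak.
freqs = np.linspace(1, 50, 200)
spectrum = 10.0 / freqs**1.5 + 0.5 * np.exp(-((freqs - 10) ** 2) / (2 * 1.5**2))

fm = FOOOF(peak_width_limits=(1, 8), max_n_peaks=4)
fm.fit(freqs, spectrum, freq_range=(1, 50))

print(fm.aperiodic_params_)  # offset and exponent of the aperiodic component
print(fm.peak_params_)       # center frequency, power, bandwidth of each peak
```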

In neurosurgery, spatial normalization emerged as a tool to minimize inter-subject variability and to study target point locations in standard coordinates. The Montreal Neurological Institute's 152-brain template (MNI152) has become the most widely used in neuroimaging studies, but it has been noted to introduce partial volume effects and distortions and to increase structure size along all axes (x/y/z). These discrepancies call into question the accuracy of the MNI template, as well as its utility for studies that examine and draw conclusions from group-level data.

Tests of visuospatial memory following short (<1 s) and medium (1 to 30 s) delays have revealed characteristically different patterns of behavior in humans. These data have been interpreted as evidence for different memory systems operating during short (iconic memory) and long delays (working memory). Leising et al.

This paper is motivated by the study of differential brain activity evoked by multiple experimental conditions in intracranial electroencephalography (iEEG) experiments. Contrast effects between experimental conditions are often zero in most regions and nonzero only in some local regions, yielding locally sparse functions. Such studies are essentially a function-on-scalar regression problem, with interest focused not only on estimating the nonparametric coefficient functions but also on recovering their supports.
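
As a hedged sketch of the underlying model (the notation here is assumed for illustration, not taken from the article), a function-on-scalar regression with locally sparse coefficient functions can be written as

```latex
y_i(t) = \beta_0(t) + \sum_{j=1}^{p} x_{ij}\,\beta_j(t) + \varepsilon_i(t),
\qquad \beta_j(t) = 0 \quad \text{for } t \notin S_j ,
```

where y_i(t) is the iEEG response on trial i over time, the x_ij are scalar covariates coding the experimental conditions, and each support set S_j (the portion of the time axis on which condition j has a nonzero effect) must be recovered along with the coefficient functions.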

Objective: Magnetoencephalography (MEG) is a useful component of the presurgical evaluation of patients with epilepsy. Due to its high spatiotemporal resolution, MEG often provides additional information to the clinician when forming hypotheses about the epileptogenic zone (EZ). With the increasing utilization of stereo-electroencephalography (sEEG), MEG clusters are increasingly used to guide sEEG electrode targeting.

Abstract concepts require individuals to identify relationships between novel stimuli. Previous studies have reported that the ability to learn abstract concepts is found in a wide range of species. With regard to a same/different concept, Clark's nutcrackers and black-billed magpies, two corvid species, were shown to outperform other avian and primate species (Wright et al.

The McGurk effect is a widely used measure of multisensory integration during speech perception. Two observations have raised questions about the validity of the effect as a tool for understanding speech perception. First, there is high variability in perception of the McGurk effect across different stimuli and observers.

Direct recording of neural activity from the human brain using implanted electrodes (iEEG, intracranial electroencephalography) is a fast-growing technique in human neuroscience. While the ability to record from the human brain with high spatial and temporal resolution has advanced our understanding, it generates staggering amounts of data: a single patient can be implanted with hundreds of electrodes, each sampled thousands of times a second for hours or days. The difficulty of exploring these vast datasets is the rate-limiting step in discovery.
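
A rough back-of-the-envelope calculation makes the scale concrete; the specific counts below are assumptions for the sake of arithmetic, not figures reported in the article.

```python
# Back-of-the-envelope iEEG data volume (illustrative numbers only)
electrodes = 200          # a single patient can have hundreds of contacts
sample_rate = 2_000       # samples per second per electrode
seconds_per_day = 86_400
bytes_per_sample = 4      # 32-bit float

bytes_per_day = electrodes * sample_rate * seconds_per_day * bytes_per_sample
print(f"{bytes_per_day / 1e9:.0f} GB per patient per day")  # prints about 138 GB
```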

Experimentalists studying multisensory integration compare neural responses to multisensory stimuli with responses to the component modalities presented in isolation. This procedure is problematic for multisensory speech perception, since audiovisual speech and auditory-only speech are easily intelligible but visual-only speech is not. To overcome this confound, we developed intracranial electroencephalography (iEEG) deconvolution.
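
A generic regression-based deconvolution illustrates the idea: responses to temporally overlapping events can be separated by building a lagged design matrix and solving a least-squares problem. This is a sketch with assumed data (one synthetic channel and made-up onset times), not the authors' iEEG pipeline.

```python
import numpy as np

fs, n_samples, n_lags = 100, 6000, 80          # 60 s of data at 100 Hz, 0.8 s response window
rng = np.random.default_rng(1)
signal = rng.normal(size=n_samples)            # stand-in for a recorded iEEG trace

onsets = {"visual": [200, 1200, 2600, 4100],   # hypothetical event onsets (in samples)
          "auditory": [230, 1235, 2640, 4138]}  # auditory lags visual, as in natural speech

# Build a lagged design matrix: one column per event type per lag.
X = np.zeros((n_samples, 2 * n_lags))
for col_block, events in enumerate(onsets.values()):
    for onset in events:
        for lag in range(n_lags):
            if onset + lag < n_samples:
                X[onset + lag, col_block * n_lags + lag] = 1.0

# Least squares recovers one impulse response per event type even though
# the visual and auditory responses overlap in time.
betas, *_ = np.linalg.lstsq(X, signal, rcond=None)
visual_irf, auditory_irf = betas[:n_lags], betas[n_lags:]
```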

Background And Purpose: The current tools available for localization of expressive language, including functional magnetic resonance imaging (fMRI) and cortical stimulation mapping (CSM), require that the patient remain stationary and follow language commands with precise timing. Many pediatric epilepsy patients, however, have intact language skills but are unable to participate in these tasks due to cognitive impairments or young age. In adult subjects, there is evidence that language laterality can be determined from resting-state (RS) fMRI activity; however, there are few studies on the use of RS fMRI to accurately predict language laterality in children.

A visual cortical prosthesis (VCP) has long been proposed as a strategy for restoring useful vision to the blind, under the assumption that visual percepts of small spots of light produced with electrical stimulation of visual cortex (phosphenes) will combine into coherent percepts of visual forms, like pixels on a video screen. We tested an alternative strategy in which shapes were traced on the surface of visual cortex by stimulating electrodes in dynamic sequence. In both sighted and blind participants, dynamic stimulation enabled accurate recognition of letter shapes predicted by the brain's spatial map of the visual world.

Although we routinely experience complex tactile patterns over our entire body, how we selectively experience multisite touch over our bodies remains poorly understood. Here, we characterized tactile search behavior over the full body using a tactile analog of the classic visual search task. On each trial, participants judged whether a target stimulus (e.

Human faces contain dozens of visual features, but viewers preferentially fixate just two of them: the eyes and the mouth. Face-viewing behavior is usually studied by manually drawing regions of interest (ROIs) on the eyes, mouth, and other facial features. ROI analyses are problematic as they require arbitrary experimenter decisions about the location and number of ROIs, and they discard data because all fixations within each ROI are treated identically and fixations outside of any ROI are ignored.

Multisensory integration of information from the talker's voice and the talker's mouth facilitates human speech perception. A popular assay of audiovisual integration is the McGurk effect, an illusion in which incongruent visual speech information categorically changes the percept of auditory speech. There is substantial interindividual variability in susceptibility to the McGurk effect.
