Publications by authors named "Jeremy I Skipper"

Language is acquired and processed in complex and dynamic naturalistic contexts, involving the simultaneous processing of connected speech, faces, bodies, objects, etc. How words and their associated concepts are encoded in the brain during real-world processing is still unknown. Here, the representational structure of concrete and abstract concepts was investigated during movie watching to address the extent to which brain responses dynamically change depending on visual context.

Article Synopsis
- The text discusses the Magic, Memory, and Curiosity (MMC) Dataset, created to study how magic tricks can reveal insights about the human mind by disrupting viewer expectations and attention.
- The dataset includes data from 50 participants who watched 36 specially edited magic tricks during fMRI experiments, with assessments of curiosity and memory conducted both immediately and a week later.
- The high-quality behavioral and fMRI data allow researchers to investigate complex cognitive and motivational processes, providing a robust resource for deeper analysis in the fields of psychology and neuroscience.

Visual hallucinations can be phenomenologically divided into those of a simple or complex nature. Both simple and complex hallucinations can occur in pathological and non-pathological states, and can also be induced experimentally by visual stimulation or deprivation, for example using a high-frequency, eyes-open flicker (Ganzflicker) and perceptual deprivation (Ganzfeld), respectively. Here we leverage the differences in visual stimulation that these two techniques involve to investigate the role of bottom-up and top-down processes in shifting the complexity of visual hallucinations, and to assess whether the two techniques share an underlying hallucinatory mechanism despite their differences.


Most research on the neurobiology of language ignores consciousness and vice versa. Here, language, with an emphasis on inner speech, is hypothesised to generate and sustain self-awareness, i.e.


Rodent and human studies have implicated an amygdala-prefrontal circuit during threat processing. One possibility is that while amygdala activity underlies core features of anxiety (e.g.


We consider the challenges in extracting stimulus-related neural dynamics from other intrinsic processes and noise in naturalistic functional magnetic resonance imaging (fMRI). Most studies rely on inter-subject correlations (ISC) of low-level regional activity and neglect varying responses in individuals. We propose a novel, data-driven approach based on low-rank plus sparse (L+S) decomposition to isolate stimulus-driven dynamic changes in brain functional connectivity (FC) from the background noise, by exploiting shared network structure among subjects receiving the same naturalistic stimuli.
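A minimal sketch of that decomposition, written as generic principal component pursuit (robust PCA) with ADMM-style updates. This is a textbook version under stated assumptions, not the paper's exact formulation: the regularisation weights, the construction of the input matrix X (e.g. windowed FC values, time by connections, stacked across subjects), and the mapping of L onto the shared stimulus-driven component versus S onto idiosyncratic fluctuations are all assumptions here.

```python
import numpy as np

def lps_decompose(X, lam=None, mu=None, n_iter=500, tol=1e-7):
    """Split X into low-rank L plus sparse S (principal component pursuit).

    Generic robust-PCA sketch: L captures structure shared across
    rows/columns (hypothetically, across subjects viewing the same
    stimulus) and S captures sparse deviations from it.
    """
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))  # standard RPCA weight
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(X).sum() + 1e-12)
    L, S, Y = (np.zeros_like(X) for _ in range(3))
    for _ in range(n_iter):
        # Low-rank update: singular-value soft-thresholding.
        U, sig, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: elementwise soft-thresholding.
        R = X - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual ascent on the residual.
        Y += mu * (X - L - S)
        if np.linalg.norm(X - L - S) <= tol * np.linalg.norm(X):
            break
    return L, S
```

Under this generic reading, the low-rank part holds dynamics common to all subjects (the stimulus-locked component) while the sparse part absorbs individual fluctuations; how the paper itself assigns L and S may differ from this sketch.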


It is assumed that there is a static set of "language regions" in the brain. Yet language comprehension engages regions well beyond these, and patients regularly produce familiar "formulaic" expressions when language regions are severely damaged. These observations suggest that the neurobiology of language is not fixed but varies with experience, such as the extent of word sequence learning.


The role of the cerebellum in speech perception remains a mystery. Given its uniform architecture, we tested the hypothesis that it implements a domain-general predictive mechanism whose role in speech is determined by connectivity. We collated all neuroimaging studies reporting cerebellar activity in the Neurosynth database (n = 8206).
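A hedged illustration of the collation step. The paper presumably tested coordinates against a proper anatomical cerebellar mask; this sketch replaces that with a crude MNI bounding box, and the file name and column names are assumptions about the Neurosynth coordinate release.

```python
import pandas as pd

# Neurosynth distributes activation coordinates as a flat table; the
# file name and columns ("id", "x", "y", "z" in MNI space) are assumed.
coords = pd.read_csv("neurosynth_database.txt", sep="\t")

# Crude stand-in for a real cerebellar mask: an MNI bounding box. An
# actual analysis would test each coordinate against an anatomical
# atlas mask (e.g. SUIT) instead.
in_cerebellum = (
    coords["z"].between(-75, -15)
    & coords["y"].between(-90, -30)
    & coords["x"].abs().le(50)
)

# Studies reporting at least one cerebellar focus.
cerebellar_ids = coords.loc[in_cerebellum, "id"].unique()
print(f"{len(cerebellar_ids)} studies report cerebellar activity")
```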


The ecology of human language is face-to-face interaction, comprising cues such as prosody, co-speech gestures, and mouth movements. Yet this multimodal context is usually stripped away in experiments, as dominant paradigms focus on linguistic processing alone. In two studies, we presented participants with video clips of an actress producing naturalistic passages while recording their electroencephalogram.


It is well established that speech perception is improved when we are able to see the speaker talking along with hearing their voice, especially when the speech is noisy. While we have a good understanding of where speech integration occurs in the brain, it is unclear how visual and auditory cues are combined to improve speech perception. One suggestion is that integration can occur as both visual and auditory cues arise from a common generator: the vocal tract.


Neuroimaging has advanced our understanding of human psychology using reductionist stimuli that often do not resemble information the brain naturally encounters. It has improved our understanding of the network organization of the brain mostly through analyses of 'resting-state' data for which the functions of networks cannot be verifiably labelled. We make a 'Naturalistic Neuroimaging Database' (NNDb v1.


Stories play a fundamental role in human culture. They provide a mechanism for sharing cultural identity, imparting knowledge, revealing beliefs, reinforcing social bonds and providing entertainment that is central to all human societies. Here we investigated the extent to which the delivery medium of a story (audio or visual) affected self-reported and physiologically measured engagement with the narrative.


Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region and network based neuroimaging meta-analyses and a novel text mining method to describe relative contributions of nodes in distributed brain networks.


What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded. Here, neuroimaging meta-analyses show the opposite: AC is least active and sometimes deactivated when participants listened to meaningful speech compared to less meaningful sounds. Results are explained by an active hypothesis-and-test mechanism where speech production (SP) regions are neurally re-used to predict auditory objects associated with available context.


In this review, we consider the literature on sensitive periods for language acquisition from the perspective of the stroke recovery literature treated in this Special Issue. Conceptually, the two areas of study are linked in a number of ways. For example, the fact that learning itself can set the stage for future failures to learn (in second language learning) or to remediate (as described in constraint therapy) is an important insight in both areas, as is the increasing awareness that limits on learning can be overcome by creating the appropriate environmental context.


During a conversation, we hear the sound of the talker as well as the intended message. Traditional models of speech perception posit that acoustic details of a talker's voice are not encoded with the message whereas more recent models propose that talker identity is automatically encoded. When shadowing speech, listeners often fail to detect a change in talker identity.


Functional magnetic resonance imaging (fMRI) studies of speech sound categorization often compare conditions in which a stimulus is presented repeatedly to conditions in which multiple stimuli are presented. This approach has established that a set of superior temporal and inferior parietal regions respond more strongly to conditions containing stimulus change. Here, we examine whether this contrast is driven by habituation to a repeating condition or by selective responding to change.


Everyday communication is accompanied by visual information from several sources, including co-speech gestures, which provide semantic information listeners use to help disambiguate the speaker's message. Using fMRI, we examined how gestures influence neural activity in brain regions associated with processing semantic information. The BOLD response was recorded while participants listened to stories under three audiovisual conditions and one auditory-only (speech alone) condition.


Although the linguistic structure of speech provides valuable communicative information, nonverbal behaviors can offer additional, often disambiguating cues. In particular, being able to see the face and hand movements of a speaker facilitates language comprehension [1]. But how does the brain derive meaningful information from these movements? Mouth movements provide information about phonological aspects of speech [2-3].


Is there a neural representation of speech that transcends its sensory properties? Using fMRI, we investigated whether there are brain areas where neural activity during observation of sublexical audiovisual input corresponds to a listener's speech percept (what is "heard") independent of the sensory properties of the input. A target audiovisual stimulus was preceded by stimuli that (1) shared the target's auditory features (auditory overlap), (2) shared the target's visual features (visual overlap), or (3) shared neither the target's auditory nor visual features but were perceived as the target (perceptual overlap). In two left-hemisphere regions (pars opercularis, planum polare), the target evoked less activity when it was preceded by the perceptually overlapping stimulus than when preceded by stimuli that shared one of its sensory components.

Article Synopsis
- The growth of neuroimaging research has led to a need for robust computational infrastructures that can handle large data sets efficiently while ensuring secure, collaborative analysis with minimal management effort.
- The proposed solution uses open source database management systems, which support complex data queries and flexible data sharing alongside parallel processing through cluster and Grid computing.
- The text outlines the advantages of this approach over traditional methods that rely on simple file storage, and details a specific implementation used to analyze fMRI time series data (a toy version is sketched below).
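A toy sketch of the database-backed design described above, using Python's built-in sqlite3 module as a stand-in for whichever open source DBMS the paper actually used (the synopsis does not say); the schema and names are hypothetical. The point is that once each BOLD sample is a row, arbitrary subsets become single SQL queries rather than custom file-parsing code.

```python
import sqlite3
import numpy as np

# Hypothetical schema: one row per (subject, voxel, timepoint) BOLD sample.
conn = sqlite3.connect("fmri.db")
conn.execute("""CREATE TABLE IF NOT EXISTS bold (
    subject INTEGER, voxel INTEGER, t INTEGER, value REAL)""")
conn.execute("CREATE INDEX IF NOT EXISTS idx_sv ON bold (subject, voxel)")

# Load a toy time series for one subject/voxel.
ts = np.random.randn(200)
conn.executemany(
    "INSERT INTO bold VALUES (?, ?, ?, ?)",
    [(1, 42, t, float(v)) for t, v in enumerate(ts)],
)
conn.commit()

# Arbitrary subsets are now one query away, e.g. a single voxel's series:
rows = conn.execute(
    "SELECT value FROM bold WHERE subject = 1 AND voxel = 42 ORDER BY t"
).fetchall()
series = np.array([r[0] for r in rows])
```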

Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca's area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a "mirror" or "observation-execution matching" system). We asked whether the role that Broca's area plays in processing speech-associated gestures is consistent with the semantic retrieval/selection account (predicting relatively weak interactions between Broca's area and other cortical areas, because the meaningful information that speech-associated gestures convey reduces semantic ambiguity and thus the need for semantic retrieval/selection) or the action recognition account (predicting strong interactions between Broca's area and other cortical areas, because speech-associated gestures are goal-directed actions that are "mirrored").


Observing a speaker's mouth profoundly influences speech perception. For example, listeners perceive an illusory "ta" when the video of a face producing /ka/ is dubbed onto an audio /pa/. Here, we show how cortical areas supporting speech production mediate this illusory percept and audiovisual (AV) speech perception more generally.


Neurophysiological research suggests that understanding the actions of others harnesses neural circuits that would be used to produce those actions directly. We used fMRI to examine brain areas active during language comprehension in which the speaker was seen and heard while talking (audiovisual) or heard but not seen (audio-alone) or when the speaker was seen talking with the audio track removed (video-alone). We found that audiovisual speech perception activated a network of brain regions that included cortical motor areas involved in planning and executing speech production and areas subserving proprioception related to speech production.
