Publications by authors named "Edmund Lalor"

Article Synopsis
  • Individuals with schizophrenia spectrum disorders (SSD) struggle with processing social information and have issues with "theory of mind" (ToM), which is essential for understanding others' mental states.
  • A study using fMRI while participants watched The Office revealed that SSD individuals show less neural response in the medial prefrontal cortex during socially awkward moments, indicating a disruption in the ToM network.
  • The findings suggest that this reduced activation and connectivity in the ToM network correlate with psychotic experiences and social dysfunction, implying that SSD individuals may have a diminished capacity for social understanding during real-life interactions.
Article Synopsis
  • The human brain transforms continuous speech into words by interpreting various factors like intonation and accents, and this process can be modeled using EEG recordings.
  • Contemporary models tend to overlook how sounds are categorized in the brain, limiting our understanding of speech processing.
  • The study finds that contextual representations from deep-learning speech systems such as Whisper improve EEG models of speech comprehension, and that representing linguistic structure is crucial for accurately modeling brain function, especially in complex listening environments (a minimal sketch of this kind of encoding analysis appears below).
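A minimal sketch of this kind of encoding analysis, assuming the Hugging Face transformers implementation of Whisper and scikit-learn for the linear mapping; the model size, regularization value, and placeholder audio/EEG arrays are illustrative assumptions, not details of the study's pipeline.

    # Regress EEG onto contextual speech representations from a Whisper encoder.
    import numpy as np
    import torch
    from transformers import WhisperFeatureExtractor, WhisperModel
    from sklearn.linear_model import Ridge

    audio = np.random.randn(16000 * 30)      # placeholder 30 s waveform at 16 kHz
    eeg = np.random.randn(1500, 64)          # placeholder EEG (time x channels),
                                             # resampled to the encoder frame rate

    extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-base")
    model = WhisperModel.from_pretrained("openai/whisper-base")

    inputs = extractor(audio, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        # One contextual embedding per encoder frame across the 30 s window.
        states = model.encoder(inputs.input_features).last_hidden_state[0].numpy()

    n = min(len(states), len(eeg))           # align lengths after resampling
    mapping = Ridge(alpha=1.0).fit(states[:n], eeg[:n])
    predicted_eeg = mapping.predict(states[:n])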

Seeing the speaker's face greatly improves our speech comprehension in noisy environments. This is due to the brain's ability to combine the auditory and visual information around us, a process known as multisensory integration. Selective attention also strongly influences what we comprehend in scenarios with multiple speakers, an effect known as the cocktail-party phenomenon.


There is considerable debate over how visual speech is processed in the absence of sound and whether neural activity supporting lipreading occurs in visual brain areas. Much of the ambiguity stems from a lack of behavioral grounding and from neurophysiological analyses that cannot disentangle high-level linguistic contributions from the phonetic/energetic contributions of visual speech. To address this, we recorded EEG from human observers as they watched silent videos, half of which were novel and half of which were previously rehearsed with the accompanying audio.


Neurophysiology research has demonstrated that it is possible and valuable to investigate sensory processing in scenarios involving continuous sensory streams, such as speech and music. Over the past 10 years or so, novel analytic frameworks combined with growing participation in data sharing have led to a surge of publicly available datasets involving continuous sensory experiments. However, open science efforts in this domain of research remain scattered, lacking a cohesive set of guidelines.



In noisy environments, our ability to understand speech benefits greatly from seeing the speaker's face. This is attributed to the brain's ability to integrate auditory and visual information, a process known as multisensory integration. In addition, selective attention plays an enormous role in what we understand when multiple talkers are present, the so-called cocktail-party phenomenon.


The goal of describing how the human brain responds to complex acoustic stimuli has driven auditory neuroscience research for decades. Often, a systems-based approach has been taken, in which neurophysiological responses are modeled based on features of the presented stimulus. This includes a wealth of work modeling electroencephalogram (EEG) responses to complex acoustic stimuli such as speech.
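As an illustration of this systems-based approach, the sketch below fits a simple forward encoding model (a temporal response function) that maps time-lagged values of a stimulus feature, such as the speech amplitude envelope, onto one EEG channel with ridge regression. The sampling rate, lag range, regularization value, and placeholder arrays are assumptions for the example, not details of any particular study.

    import numpy as np

    def lagged_design_matrix(stimulus, min_lag, max_lag):
        """Stack time-shifted copies of a 1-D stimulus feature (one column per lag)."""
        cols = [np.roll(stimulus, lag) for lag in range(min_lag, max_lag + 1)]
        X = np.stack(cols, axis=1)
        X[:max(max_lag, 0)] = 0               # zero out samples wrapped by np.roll
        return X

    fs = 128                                  # assumed EEG/envelope sampling rate (Hz)
    envelope = np.random.rand(fs * 60)        # placeholder stimulus feature (60 s)
    eeg_channel = np.random.randn(fs * 60)    # placeholder neural response

    X = lagged_design_matrix(envelope, 0, int(0.4 * fs))   # lags from 0 to 400 ms
    lam = 1e2                                 # ridge parameter (would be cross-validated)
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg_channel)
    predicted = X @ w                         # model-predicted EEG for this channel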

Article Synopsis
  • The human brain's response to complex sounds, particularly speech, has been a significant focus in auditory neuroscience, often using a systems-based approach to model neurophysiological responses.
  • Traditional models primarily rely on raw acoustic features like amplitude and spectrogram, but they don't account for how these sounds are processed and transformed in lower-order auditory areas before reaching the cortex.
  • Research findings suggest that using responses from the inferior colliculus (IC) — which more closely resemble the inputs to the cortex — leads to more accurate predictions of EEG activity compared to traditional acoustic-feature models, and integrating both can enhance predictive accuracy even further.

In recent years research on natural speech processing has benefited from recognizing that low-frequency cortical activity tracks the amplitude envelope of natural speech. However, it remains unclear to what extent this tracking reflects speech-specific processing beyond the analysis of the stimulus acoustics. In the present study, we aimed to disentangle contributions to cortical envelope tracking that reflect general acoustic processing from those that are functionally related to processing speech.
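For concreteness, the amplitude envelope referred to here is often computed roughly as sketched below (magnitude of the Hilbert transform, low-pass filtered and downsampled); the filter order, cutoff, and output rate are illustrative assumptions rather than the study's exact settings.

    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt, resample

    def speech_envelope(waveform, fs_audio, fs_out=128, cutoff_hz=8.0):
        """Broadband amplitude envelope: |Hilbert|, low-pass filter, downsample."""
        env = np.abs(hilbert(waveform))
        b, a = butter(3, cutoff_hz / (fs_audio / 2), btype="low")
        env = filtfilt(b, a, env)
        return resample(env, int(len(env) * fs_out / fs_audio))

    # Example with a placeholder 10 s waveform sampled at 44.1 kHz.
    env = speech_envelope(np.random.randn(44100 * 10), fs_audio=44100)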


Speech comprehension relies on the ability to understand words within a coherent context. Recent studies have attempted to obtain electrophysiological indices of this process by modelling how brain activity is affected by a word's semantic dissimilarity to preceding words. Although the resulting indices appear robust and are strongly modulated by attention, it remains possible that, rather than capturing the contextual understanding of words, they may actually reflect word-to-word changes in semantic content without the need for a narrative-level understanding on the part of the listener.
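The semantic-dissimilarity index described here is typically derived from word embeddings; a minimal version, assuming a pretrained embedding lookup (here a placeholder dict of random vectors) and using all preceding words as the context, is sketched below. The exact context window differs across studies.

    import numpy as np

    def semantic_dissimilarity(words, word_vectors):
        """1 minus the cosine similarity between each word's vector and the
        average vector of the preceding words."""
        values = [0.0]                        # no preceding context for the first word
        for i in range(1, len(words)):
            v = word_vectors[words[i]]
            context = np.mean([word_vectors[w] for w in words[:i]], axis=0)
            cos = np.dot(v, context) / (np.linalg.norm(v) * np.linalg.norm(context))
            values.append(1.0 - cos)
        return np.array(values)

    words = "the listener followed the unfolding story".split()
    word_vectors = {w: np.random.randn(50) for w in words}   # placeholder embeddings
    dissimilarity = semantic_dissimilarity(words, word_vectors)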


Humans have the remarkable ability to selectively focus on a single talker in the midst of other competing talkers. The neural mechanisms that underlie this phenomenon remain incompletely understood. In particular, there has been longstanding debate over whether attention operates at an early or late stage in the speech processing hierarchy.


Cognitive neuroscience, in particular research on speech and language, has seen an increase in the use of linear modeling techniques for studying the processing of natural, environmental stimuli. The availability of such computational tools has prompted similar investigations in many clinical domains, facilitating the study of cognitive and sensory deficits under more naturalistic conditions. However, studying clinical (and often highly heterogeneous) cohorts introduces an added layer of complexity to such modeling procedures, potentially leading to instability of such techniques and, as a result, inconsistent findings.
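One common safeguard against this kind of instability is to select the regularization strength of the linear model by cross-validation within each participant and to report out-of-sample fit; a minimal scikit-learn sketch with placeholder arrays follows (the alpha grid and fold count are arbitrary choices for illustration).

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import KFold, cross_val_score

    X = np.random.randn(5000, 64)              # placeholder lagged stimulus features
    y = np.random.randn(5000)                  # placeholder EEG channel

    # Choose the ridge parameter over a wide grid, then estimate out-of-sample fit.
    model = RidgeCV(alphas=np.logspace(-2, 6, 9))
    scores = cross_val_score(model, X, y, cv=KFold(n_splits=5), scoring="r2")
    print(scores.mean(), scores.std())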


The human brain tracks amplitude fluctuations of both speech and music, which reflects acoustic processing in addition to the encoding of higher-order features and one's cognitive state. Comparing neural tracking of speech and music envelopes can elucidate stimulus-general mechanisms, but direct comparisons are confounded by differences in their envelope spectra. Here, we use a novel method of frequency-constrained reconstruction of stimulus envelopes using EEG recorded during passive listening.
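In this decoding ("backward") direction, the stimulus envelope is reconstructed from many EEG channels at a range of lags; the rough sketch below approximates a frequency constraint by band-pass filtering both signals to a common band before fitting. Band edges, lag range, regularization, and placeholder arrays are all assumptions, not the method of the study.

    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.linear_model import Ridge

    fs = 128
    eeg = np.random.randn(fs * 120, 64)        # placeholder EEG (time x channels)
    envelope = np.random.rand(fs * 120)        # placeholder stimulus envelope

    def bandpass(x, lo, hi, fs, order=3):
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
        return filtfilt(b, a, x, axis=0)

    eeg_f = bandpass(eeg, 1.0, 8.0, fs)
    env_f = bandpass(envelope, 1.0, 8.0, fs)

    # Design matrix: all channels at lags 0-250 ms (EEG lags the stimulus).
    # np.roll wraps at the edges; a real analysis would trim those samples.
    lags = range(0, int(0.25 * fs))
    X = np.concatenate([np.roll(eeg_f, -lag, axis=0) for lag in lags], axis=1)
    decoder = Ridge(alpha=1e3).fit(X, env_f)
    reconstruction = decoder.predict(X)
    r = np.corrcoef(reconstruction, env_f)[0, 1]   # reconstruction accuracy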


Seeing a speaker's face benefits speech comprehension, especially in challenging listening conditions. This perceptual benefit is thought to stem from the neural integration of visual and auditory speech at multiple stages of processing, whereby movement of a speaker's face provides temporal cues to auditory cortex, and articulatory information from the speaker's mouth can aid recognizing specific linguistic units (e.g.

Article Synopsis
  • Understanding sentence-level meaning in the brain is a complex challenge, and recent research uses vector models to investigate brain activation patterns elicited by sentences.
  • This study focuses on how a deep learning model called InferSent, which creates unified sentence representations, outperforms traditional "bag-of-words" models that ignore sentence structure (a toy version of this contrast is sketched after this synopsis).
  • The findings suggest that semantic processing happens across multiple brain regions, indicating that there's not a single location for understanding sentence meanings, but rather a distributed network that integrates various components.
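To make the contrast with "bag-of-words" models concrete: such a model represents a sentence as the order-insensitive average of its word vectors, whereas a model like InferSent encodes the whole word sequence. The toy example below uses a placeholder dict of random word vectors purely for illustration.

    import numpy as np

    def bag_of_words_vector(sentence, word_vectors):
        """Order-insensitive sentence representation: mean of the word vectors."""
        return np.mean([word_vectors[w] for w in sentence.split()], axis=0)

    word_vectors = {w: np.random.randn(50) for w in ["the", "dog", "cat", "chased"]}

    # The two sentences contain the same words, so a bag-of-words model assigns
    # them identical vectors, while a sequence model such as InferSent need not.
    a = bag_of_words_vector("the dog chased the cat", word_vectors)
    b = bag_of_words_vector("the cat chased the dog", word_vectors)
    assert np.allclose(a, b)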

Healthy ageing leads to changes in the brain that impact upon sensory and cognitive processing. It is not fully clear how these changes affect the processing of everyday spoken language. Prediction is thought to play an important role in language comprehension, where information about upcoming words is pre-activated across multiple representational levels.


The human auditory system is highly skilled at extracting and processing information from speech in both single-speaker and multi-speaker situations. A commonly studied speech feature is the amplitude envelope, which can also be used to determine which speaker a listener is attending to in those multi-speaker situations. Non-invasive brain imaging (electro-/magnetoencephalography [EEG/MEG]) has shown that the phase of neural activity below 16 Hz tracks the dynamics of speech, whereas invasive brain imaging (electrocorticography [ECoG]) has shown that such processing is strongly reflected in the power of high-frequency neural activity (around 70-150 Hz; known as high gamma).
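The attention-decoding use of the envelope mentioned here typically works by reconstructing an envelope from the listener's neural data and comparing it with the envelopes of the competing talkers; a minimal version of that comparison step, with the reconstructed envelope and talker envelopes given as placeholder arrays, is sketched below.

    import numpy as np

    def decode_attended_talker(reconstructed_env, talker_envelopes):
        """Label the attended talker as the one whose envelope correlates most
        strongly with the envelope reconstructed from the neural data."""
        rs = [np.corrcoef(reconstructed_env, env)[0, 1] for env in talker_envelopes]
        return int(np.argmax(rs)), rs

    reconstructed = np.random.randn(1000)                       # placeholder
    talkers = [np.random.randn(1000), np.random.randn(1000)]    # two competing talkers
    attended_index, correlations = decode_attended_talker(reconstructed, talkers)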


Humans can easily distinguish many sounds in the environment, but speech and music are uniquely important. Previous studies, mostly using fMRI, have identified separate regions of the brain that respond selectively for speech and music. Yet there is little evidence that brain responses are larger and more temporally precise for human-specific sounds like speech and music compared to other types of sounds, as has been found for responses to species-specific sounds in other animals.


Recently, we showed that in a simple acoustic scene with one sound source, auditory cortex tracks the time-varying location of a continuously moving sound. Specifically, we found that both the delta phase and alpha power of the electroencephalogram (EEG) can be used to reconstruct the sound source azimuth. However, in natural settings, we are often presented with a mixture of multiple competing sounds and so we must focus our attention on the relevant source in order to segregate it from the competing sources e.
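The delta-phase and alpha-power measures referred to here can be obtained with standard band-pass filtering and the Hilbert transform; a sketch with assumed band edges and placeholder data follows, leaving the regression of these features onto source azimuth to a separate model.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def band_analytic(eeg, lo, hi, fs, order=3):
        """Band-pass each channel and return the complex analytic signal."""
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
        return hilbert(filtfilt(b, a, eeg, axis=0), axis=0)

    fs = 128
    eeg = np.random.randn(fs * 60, 64)                  # placeholder EEG (time x channels)

    delta_phase = np.angle(band_analytic(eeg, 1.0, 4.0, fs))      # delta-band phase
    alpha_power = np.abs(band_analytic(eeg, 8.0, 12.0, fs)) ** 2  # alpha-band power
    # These feature matrices would then be regressed onto the sound-source azimuth.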


The brain is thought to combine linguistic knowledge of words and nonlinguistic knowledge of their referents to encode sentence meaning. However, functional neuroimaging studies aiming at decoding language meaning from neural activity have mostly relied on distributional models of word semantics, which are based on patterns of word co-occurrence in text corpora. Here, we present initial evidence that modeling nonlinguistic "experiential" knowledge contributes to decoding neural representations of sentence meaning.


Speech perception involves the integration of sensory input with expectations based on the context of that speech. Much debate surrounds whether prior knowledge feeds back to affect early auditory encoding at lower levels of the speech processing hierarchy, or whether perception is best explained as a purely feedforward process. Although there has been compelling evidence on both sides of this debate, experiments involving naturalistic speech stimuli to address these questions have been lacking.


Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease primarily affecting motor function, with additional evidence of extensive nonmotor involvement. Despite increasing recognition of the disease as a multisystem network disorder characterised by impaired connectivity, the precise neuroelectric characteristics of impaired cortical communication remain to be fully elucidated. Here, we characterise changes in functional connectivity using beamformer source analysis on resting-state electroencephalography recordings from 74 ALS patients and 47 age-matched healthy controls.
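Functional connectivity between source-level signals is often quantified with spectral measures; as a simplified illustration (not the beamformer pipeline used in the study), the sketch below computes magnitude-squared coherence between two placeholder source time courses and averages it within a band of interest.

    import numpy as np
    from scipy.signal import coherence

    fs = 256
    source_a = np.random.randn(fs * 120)       # placeholder source time course
    source_b = np.random.randn(fs * 120)

    freqs, coh = coherence(source_a, source_b, fs=fs, nperseg=fs * 2)
    beta = (freqs >= 13) & (freqs <= 30)       # example band of interest
    beta_connectivity = coh[beta].mean()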


Speech is central to communication among humans. Meaning is largely conveyed by the selection of linguistic units such as words, phrases and sentences. However, prosody, that is, the variation of acoustic cues that ties linguistic segments together, adds another layer of meaning.


Characterizing how the brain responds to stimuli has been a goal of sensory neuroscience for decades. One key approach has been to fit linear models to describe the relationship between sensory inputs and neural responses. This has included models aimed at predicting spike trains, local field potentials, BOLD responses, and EEG/MEG.
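Whatever the measured response, these linear encoding models share the same underlying estimator. Writing the (typically time-lagged) stimulus features as a matrix X and the measured response as a vector y, the regularized least-squares weights are

    w = (X^T X + \lambda I)^{-1} X^T y

where \lambda is a ridge parameter usually chosen by cross-validation; this is only the generic form of such models, not a description of any specific analysis reported here.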
