Publications by authors named "Patrick J C May"

We demonstrate how the structure of auditory cortex can be investigated by combining computational modelling with advanced optimisation methods. We optimise a well-established auditory cortex model by means of an evolutionary algorithm. The model describes auditory cortex in terms of multiple core, belt, and parabelt fields.
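As a rough illustration of the approach, an evolutionary optimisation loop can be sketched as follows; the toy fitness function, population size, and mutation scheme are illustrative stand-ins, not the published model or algorithm:

```python
import random

def evolve(fitness, n_params, pop_size=20, generations=60,
           sigma=0.1, seed=0):
    """Minimal (mu + lambda)-style evolutionary loop: keep the best
    half of the population, refill it with Gaussian-mutated copies
    of randomly chosen survivors, and return the fittest individual."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # best first
        survivors = pop[: pop_size // 2]
        children = [[g + rng.gauss(0, sigma)
                     for g in rng.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy stand-in for a model-fit score, peaking at parameters (0.5, -0.3).
target = [0.5, -0.3]
best = evolve(lambda p: -sum((a - b) ** 2 for a, b in zip(p, target)), 2)
```

In the actual study, the fitness function would score how well the simulated cortical responses match measured data, which is far more expensive to evaluate than this toy objective.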

Lifetime experiences and lifestyle, such as education and engaging in leisure activities, contribute to cognitive reserve (CR), which delays the onset of age-related cognitive decline. Word-finding difficulties have been identified as the most prominent cognitive problem in older age. Whether CR mitigates age-related word-finding difficulties is currently unknown.

The World Health Organization (WHO) aims to improve our understanding of the factors that promote healthy cognitive aging and combat dementia. Aging theories that consider individual aging trajectories are of paramount importance to meet the WHO's aim. Both the revised Scaffolding Theory of Aging and Cognition (STAC-r) and Cognitive Reserve theory (CR) offer theoretical frameworks for the mechanisms of cognitive aging and the positive influence of an engaged lifestyle.

Adaptation, the reduction of neuronal responses by repetitive stimulation, is a ubiquitous feature of auditory cortex (AC). It is not clear what causes adaptation, but short-term synaptic depression (STSD) is a potential candidate for the underlying mechanism. In such a case, adaptation can be directly linked with the way AC produces context-sensitive responses such as mismatch negativity and stimulus-specific adaptation observed on the single-unit level.

An unpredictable stimulus elicits a stronger event-related response than a high-probability stimulus. This differential in response magnitude is termed the mismatch negativity (MMN). Over the past decade, it has become increasingly popular to explain the MMN in terms of predictive coding, a proposed general principle for the way the brain realizes Bayesian inference when it interprets sensory information.

Auditory event-related fields (ERFs) measured with magnetoencephalography (MEG) are useful for studying the neuronal underpinnings of auditory cognition in human cortex. They have a highly subject-specific morphology, albeit certain characteristic deflections (e.g.

Event-related fields of the magnetoencephalogram are triggered by sensory stimuli and appear as a series of waves extending hundreds of milliseconds after stimulus onset. They reflect the processing of the stimulus in cortex and have a highly subject-specific morphology. However, we still have an incomplete picture of how event-related fields are generated, what the various waves signify, and why they are so subject-specific.

Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multifilter linear-nonlinear (LN) models and context models. Models are, however, never correct, and they rely on assumptions to keep them simple enough to be interpretable.

Introduction: We examined which brain areas are involved in the comprehension of acoustically distorted speech using an experimental paradigm where the same distorted sentence can be perceived at different levels of intelligibility. This change in intelligibility occurs via a single intervening presentation of the intact version of the sentence, and the effect lasts at least on the order of minutes. Since the acoustic structure of the distorted stimulus is kept fixed and only intelligibility is varied, this allows one to study brain activity related to speech comprehension specifically.

Spectro-temporal receptive fields (STRFs) are thought to provide descriptive images of the computations performed by neurons along the auditory pathway. However, their validity can be questioned because they rely on a set of assumptions that are probably not fulfilled by real neurons exhibiting contextual effects, that is, nonlinear interactions in the time or frequency dimension that cannot be described with a linear filter. We used a novel approach to investigate how a variety of contextual effects, due to facilitating nonlinear interactions and synaptic depression, affect different STRF models, and if these effects can be captured with a context field (CF).
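The linear-filter assumption behind an STRF can be sketched as a minimal linear-nonlinear (LN) response model; the random spectrogram, filter, and rectifying nonlinearity below are illustrative assumptions, not any fitted model from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spectrogram: time bins x frequency channels.
n_t, n_f, lag = 200, 16, 5
stim = rng.standard_normal((n_t, n_f))

# Hypothetical STRF: a linear filter over the last `lag` time bins.
strf = rng.standard_normal((lag, n_f)) * 0.1

def ln_response(stim, strf):
    """LN model: correlate the STRF with the trailing spectrogram
    window at each time step, then apply a static rectifying
    nonlinearity to obtain a non-negative firing rate."""
    lag, _ = strf.shape
    drive = np.array([np.sum(stim[t - lag:t] * strf)
                      for t in range(lag, len(stim))])
    return np.maximum(drive, 0.0)   # rectification

rate = ln_response(stim, strf)
```

Contextual effects such as synaptic depression violate exactly the linearity assumed in `drive`, which is the gap the context-field approach is meant to capture.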

Efficient speech perception requires the mapping of highly variable acoustic signals to distinct phonetic categories. How the brain overcomes this many-to-one mapping problem has remained unresolved. To infer the cortical location, latency, and dependency on attention of categorical speech sound representations in the human brain, we measured stimulus-specific adaptation of neuromagnetic responses to sounds from a phonetic continuum.

Robust representations of sounds with a complex spectrotemporal structure are thought to emerge in hierarchically organized auditory cortex, but the computational advantage of this hierarchy remains unknown. Here, we used computational models to study how such hierarchical structures affect temporal binding in neural networks. We equipped individual units in different types of feedforward networks with local memory mechanisms storing recent inputs and observed how this affected the ability of the networks to process stimuli context dependently.
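The idea of units with local memory can be sketched with a toy feedforward chain; the leaky-integrator memory, layer count, and decay value are illustrative choices, not the networks used in the study:

```python
def leaky_chain(inputs, n_layers=3, decay=0.5):
    """Feedforward chain in which every unit keeps a leaky memory
    of its recent inputs, so the output at each step depends on
    stimulus history, not just the current input."""
    x = list(inputs)
    for _ in range(n_layers):
        mem, out = 0.0, []
        for v in x:
            mem = decay * mem + v      # local memory of recent inputs
            out.append(mem)
        x = out
    return x
```

With identical final inputs but different histories, the outputs differ, e.g. `leaky_chain([0, 1])[-1]` versus `leaky_chain([1, 1])[-1]`: the second sequence leaves a larger trace at every layer, which is the context dependence the abstract refers to.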

Recent studies have shown that acoustically distorted sentences can be perceived as either unintelligible or intelligible depending on whether one has previously been exposed to the undistorted, intelligible versions of the sentences. This allows studying processes specifically related to speech intelligibility since any change between the responses to the distorted stimuli before and after the presentation of their undistorted counterparts cannot be attributed to acoustic variability but, rather, to the successful mapping of sensory information onto memory representations. To estimate how the complexity of the message is reflected in speech comprehension, we applied this rapid change in perception to behavioral and magnetoencephalography (MEG) experiments using vowels, words and sentences.

Incoming sounds are represented in the context of preceding events, and this requires a memory mechanism that integrates information over time. Here, it was demonstrated that response adaptation, the suppression of neural responses due to stimulus repetition, might reflect a computational solution that auditory cortex uses for temporal integration. Adaptation is observed in single-unit measurements as two-tone forward masking effects and as stimulus-specific adaptation (SSA).
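The proposed mechanism, short-term synaptic depression, can be sketched with a simplified resource-depletion model (loosely in the spirit of Tsodyks-Markram dynamics; the parameter values and discrete-time form are illustrative assumptions):

```python
def depressing_synapse(spikes, u=0.5, tau=5.0):
    """Simplified short-term synaptic depression: each presynaptic
    spike releases a fraction u of the available resource r, which
    recovers toward 1 with time constant tau (in time steps)."""
    r, out = 1.0, []
    for s in spikes:
        amp = u * r * s                      # postsynaptic amplitude
        r = r + (1.0 - r) / tau - u * r * s  # recovery minus depletion
        out.append(amp)
    return out

# Repetitive stimulation yields progressively smaller responses,
# mirroring the adaptation described above.
amps = depressing_synapse([1, 1, 1, 1, 1])
```

Because the response to each spike depends on how much resource earlier spikes have consumed, the synapse effectively integrates stimulus history, which is the link to temporal integration drawn in the abstract.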

The ability to represent and recognize naturally occurring sounds such as speech depends not only on spectral analysis carried out by the subcortical auditory system but also on the ability of the cortex to bind spectral information over time. In primates, these temporal binding processes are mirrored as selective responsiveness of neurons to species-specific vocalizations. Here, we used computational modeling of auditory cortex to investigate how selectivity to spectrally and temporally complex stimuli is achieved.

Background: The robustness of speech perception in the face of acoustic variation is founded on the ability of the auditory system to integrate the acoustic features of speech and to segregate them from background noise. This auditory scene analysis process is facilitated by top-down mechanisms, such as recognition memory for speech content. However, the cortical processes underlying these facilitatory mechanisms remain unclear.

The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space.
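The hemifield code can be sketched with a toy two-population decoder; the sigmoidal tuning curves and the slope parameter are illustrative assumptions, not the measured tuning:

```python
import math

def hemifield_rates(azimuth_deg, slope=0.05):
    """Two opponent populations with broad sigmoidal tuning:
    one prefers the left hemifield, the other the right."""
    right = 1.0 / (1.0 + math.exp(-slope * azimuth_deg))
    left = 1.0 / (1.0 + math.exp(slope * azimuth_deg))
    return left, right

def decode(left, right, slope=0.05):
    """Recover azimuth from the two population rates: for these
    tuning curves, right/left = exp(slope * azimuth)."""
    return math.log(right / left) / slope

l, r = hemifield_rates(30.0)   # source 30 degrees to the right
```

At midline (0 degrees) both populations fire at half their maximum, and location is carried entirely by the *ratio* of the two rates rather than by which narrowly tuned unit is active, in contrast to a topographic map.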

Human speech perception is highly resilient to acoustic distortions. In addition to distortions from external sound sources, degradation of the acoustic structure of the sound itself can substantially reduce the intelligibility of speech. The degradation of the internal structure of speech happens, for example, when the digital representation of the signal is impoverished by reducing its amplitude resolution.

Most speech sounds are periodic due to the vibration of the vocal folds. Non-invasive studies of the human brain have revealed a periodicity-sensitive population in the auditory cortex which might contribute to the encoding of speech periodicity. Since the periodicity of natural speech varies from (almost) periodic to aperiodic, one may argue that speech aperiodicity could similarly be represented by a dedicated neuron population.

The cortical mechanisms underlying human speech perception in acoustically adverse conditions remain largely unknown. Besides distortions from external sources, degradation of the acoustic structure of the sound itself poses further demands on perceptual mechanisms. We conducted a magnetoencephalography (MEG) study to reveal whether the perceptual differences between these distortions are reflected in cortically generated auditory evoked fields (AEFs).

Cortical sensitivity to the periodicity of speech sounds has been evidenced by larger, more anterior responses to periodic than to aperiodic vowels in several non-invasive studies of the human brain. The current study investigated the temporal integration underlying the cortical sensitivity to speech periodicity by studying the increase in periodicity-specific cortical activation with growing stimulus duration. Periodicity-specific activation was estimated from magnetoencephalography as the differences between the N1m responses elicited by periodic and aperiodic vowel stimuli.

Objective: To investigate the effects of cortical ischemic stroke and aphasic symptoms on auditory processing abilities in humans as indicated by the transient brain response, a recently documented cortical deflection which has been shown to accurately predict behavioral sound detection.

Methods: Using speech and sinusoidal stimuli in the active (attend) and the passive (ignore) recording condition, cortical activity of ten aphasic stroke patients and ten control subjects was recorded with whole-head MEG and behavioral measurements.

Results: Stroke patients exhibited significantly diminished neuromagnetic transient responses for both sinusoidal and speech stimulation when compared to the control subjects.

Objective: The aim of the study was to investigate the effects of aging on human cortical auditory processing of rising-intensity sinusoids and speech sounds. We also aimed to evaluate the suitability of a recently discovered transient brain response for applied research.

Methods: In young and aged adults, magnetic fields produced by cortical activity elicited by a 570-Hz pure-tone and a speech sound (Finnish vowel /a/) were measured using MEG.

Background: Recent studies have shown that the human right-hemispheric auditory cortex is particularly sensitive to reduction in sound quality, with an increase in distortion resulting in an amplification of the auditory N1m response measured with magnetoencephalography (MEG). Here, we examined whether this sensitivity is specific to the processing of acoustic properties of speech or whether it can be observed also in the processing of sounds with a simple spectral structure. We degraded speech stimuli (vowel /a/), complex non-speech stimuli (a composite of five sinusoidals), and sinusoidal tones by decreasing the amplitude resolution of the signal waveform.

A magnetoencephalography study was conducted to reveal the neural code of interaural time difference (ITD) in the human cortex. Widely used crosscorrelator models predict that the code consists of narrow receptive fields distributed to all ITDs. The present findings are, however, more in line with a neural code formed by two opponent neural populations: one tuned to the left and the other to the right hemifield.
