Publications by authors named "Michalareas G"

EMOKINE is a software package and dataset-creation suite for emotional full-body movement research in experimental psychology, affective neuroscience, and computer vision. It provides a computational framework, comprehensive instructions, a pilot dataset, observer ratings, and kinematic feature extraction code to facilitate future dataset creation at scale. In addition, the EMOKINE framework outlines how complex sequences of movements may advance emotion research.
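
As a rough illustration of what such kinematic feature extraction typically involves (the function name and feature set below are assumptions for illustration, not EMOKINE's actual API), speed, acceleration, and jerk can be derived from 3D joint trajectories:

```python
import numpy as np

def basic_kinematics(positions, fps):
    """Illustrative kinematic features from a (frames, joints, 3) array of 3D
    joint positions sampled at `fps` frames per second."""
    dt = 1.0 / fps
    velocity = np.gradient(positions, dt, axis=0)       # per-joint velocity vectors
    acceleration = np.gradient(velocity, dt, axis=0)
    jerk = np.gradient(acceleration, dt, axis=0)
    speed = np.linalg.norm(velocity, axis=-1)            # scalar speed per joint
    return {
        "mean_speed": float(speed.mean()),
        "mean_acceleration": float(np.linalg.norm(acceleration, axis=-1).mean()),
        "mean_jerk": float(np.linalg.norm(jerk, axis=-1).mean()),
    }
```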

The neural mechanisms that unfold when humans form a large group defined by an overarching context, such as audiences in theater or sports, are largely unknown and unexplored. This is mainly due to the lack of a scalable system that can simultaneously record brain activity from a significantly large portion of such an audience. Although the required technology has long been available, the high cost and the large overhead in human resources and logistical planning have prohibited the development of such a system.

Speech comprehension requires the ability to temporally segment the acoustic input for higher-level linguistic analysis. Oscillation-based approaches suggest that low-frequency auditory cortex oscillations track syllable-sized acoustic information and therefore emphasize the relevance of syllabic-level acoustic processing for speech segmentation. How syllabic processing interacts with higher levels of speech processing beyond segmentation, and what the anatomical and neurophysiological characteristics of the networks involved are, remains debated.
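
A minimal sketch of the kind of analysis behind "tracking syllable-sized acoustic information" (parameters and function names are illustrative assumptions, not the study's pipeline): extract the speech amplitude envelope and relate its syllable-rate component to band-limited neural activity recorded at the same sampling rate.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def envelope_tracking(speech, neural, fs):
    """Correlation between the syllable-rate (~4-8 Hz) speech envelope and
    band-limited neural activity; a simple stand-in for coherence measures."""
    envelope = np.abs(hilbert(speech))           # broadband amplitude envelope
    env_syll = bandpass(envelope, 4.0, 8.0, fs)
    neural_syll = bandpass(neural, 4.0, 8.0, fs)
    return np.corrcoef(env_syll, neural_syll)[0, 1]
```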

Grapheme-colour synaesthetes experience an anomalous form of perception in which graphemes systematically induce specific colour concurrents in their mind's eye ("associator" type). Although grapheme-colour synaesthesia has been well characterised behaviourally, its neural mechanisms remain largely unresolved. There are currently several competing models, which can primarily be distinguished according to the anatomical and temporal predictions of synaesthesia-inducing neural activity.

Predictive models in the brain rely on the continuous extraction of regularities from the environment. These models are thought to be updated by novel information, as reflected in prediction error responses such as the mismatch negativity (MMN). However, although in real life individuals often face situations in which uncertainty prevails, it remains unclear whether and how predictive models emerge in high-uncertainty contexts.

Ample evidence shows that the human brain carefully tracks acoustic temporal regularities in the input, perhaps by entraining cortical neural oscillations to the rate of the stimulation. To what extent the entrained oscillatory activity influences processing of upcoming auditory events remains debated. Here, we revisit a critical finding from Hickok et al.

The environment is shaped by two sources of temporal uncertainty: the discrete probability of whether an event will occur and, if it does, the continuous probability of when it will happen. These two types of uncertainty are fundamental to every form of anticipatory behavior, including learning, decision-making, and motor planning. It remains unknown how the brain models the two uncertainty parameters and how they interact in anticipation.
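
One standard way to see how the two parameters interact (a sketch under assumed values, not the paper's model): if an event occurs at all with probability p and, given occurrence, its timing follows density f(t) with cumulative distribution F(t), then the hazard of the event at time t, given that it has not yet occurred, is h(t) = p * f(t) / (1 - p * F(t)).

```python
import numpy as np
from scipy.stats import norm

# Illustrative numbers only: 25% catch trials (no event) and a Gaussian timing
# distribution for trials on which the event does occur.
p = 0.75                                # "whether": probability the event occurs at all
t = np.linspace(0.1, 3.0, 300)          # seconds after the cue
f = norm.pdf(t, loc=1.5, scale=0.4)     # "when": timing density given occurrence
F = norm.cdf(t, loc=1.5, scale=0.4)
hazard = p * f / (1 - p * F)            # event probability now, given no event yet
```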

When we feel connected or engaged during social behavior, are our brains in fact "in sync" in a formal, quantifiable sense? Most studies addressing this question use highly controlled tasks with homogeneous subject pools. In an effort to take a more naturalistic approach, we collaborated with art institutions to crowdsource neuroscience data: over the course of 5 years, we collected electroencephalogram (EEG) data from thousands of museum and festival visitors who volunteered to engage in a 10-min face-to-face interaction. Pairs of participants with various levels of familiarity sat inside the Mutual Wave Machine, an artistic neurofeedback installation that translates real-time correlations of each pair's EEG activity into light patterns.
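
A minimal sketch of the kind of real-time coupling measure such an installation could use (window lengths and the mapping to light are assumptions for illustration): a sliding-window Pearson correlation between the two participants' EEG traces.

```python
import numpy as np

def sliding_correlation(eeg_a, eeg_b, fs, window_s=1.0, step_s=0.25):
    """Windowed Pearson correlation between two single-channel EEG traces."""
    win, step = int(window_s * fs), int(step_s * fs)
    n = min(len(eeg_a), len(eeg_b))
    coupling = [
        np.corrcoef(eeg_a[i:i + win], eeg_b[i:i + win])[0, 1]
        for i in range(0, n - win + 1, step)
    ]
    return np.asarray(coupling)

# e.g. map correlation to a 0-1 brightness value driving the light patterns:
# brightness = np.clip(sliding_correlation(sig_a, sig_b, fs=250), 0.0, 1.0)
```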

Humans anticipate events signaled by sensory cues. It is commonly assumed that two uncertainty parameters modulate the brain's capacity to predict: the hazard rate (HR) of event probability and the uncertainty in time estimation, which increases with elapsed time. We investigate both assumptions by presenting event probability density functions (PDFs) in each of three sensory modalities.
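
A common way to model the second assumption (time-estimation uncertainty that grows with elapsed time) is to blur the event PDF with a Gaussian whose width scales with elapsed time before computing the hazard rate; the Weber fraction and the blurring scheme below are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy.stats import norm

def subjective_hazard(t, pdf, phi=0.26):
    """Hazard rate of an event PDF after time-dependent Gaussian blurring.
    `phi` is an assumed Weber fraction: temporal blur grows with elapsed time."""
    dt = t[1] - t[0]
    blurred = np.array([
        np.sum(norm.pdf(t, loc=ti, scale=max(phi * ti, 1e-6)) * pdf) * dt
        for ti in t
    ])
    blurred /= blurred.sum() * dt                      # renormalise to a density
    cdf = np.cumsum(blurred) * dt
    return blurred / np.clip(1.0 - cdf, 1e-6, None)    # h(t) = f(t) / (1 - F(t))
```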

The way the human brain represents speech in memory is still unknown. An obvious characteristic of speech is that it unfolds over time. During speech processing, neural oscillations are modulated by the temporal properties of the acoustic speech signal, but acquired knowledge about the temporal structure of language also influences speech perception-related brain activity.

The human brain has evolved for group living [1]. Yet we know so little about how it supports dynamic group interactions that the study of real-world social exchanges has been dubbed the "dark matter of social neuroscience" [2]. Recently, various studies have begun to approach this question by comparing brain responses of multiple individuals during a variety of (semi-naturalistic) tasks [3-15].

Primate visual cortex is hierarchically organized. Bottom-up and top-down influences are exerted through distinct frequency channels, as was recently revealed in macaques by correlating inter-areal influences with laminar anatomical projection patterns. Because such anatomical data cannot be obtained in human subjects, we selected seven homologous macaque and human visual areas and correlated the macaque laminar projection patterns with human inter-areal directed influences as measured with magnetoencephalography.
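
A sketch of the correlational logic (variable names and the asymmetry index are assumptions for illustration, not the paper's exact pipeline): for each frequency, an MEG-derived directed-influence asymmetry for every pair of homologous areas is correlated with the laminar pattern (fraction of supragranular labelled neurons) of the corresponding macaque projections.

```python
import numpy as np
from scipy.stats import spearmanr

def directed_asymmetry(gc_ff, gc_fb):
    """Asymmetry between feedforward and feedback influence, per pair and frequency."""
    return (gc_ff - gc_fb) / (gc_ff + gc_fb)

def hierarchy_correlation(asymmetry, sln):
    """Spearman correlation, per frequency, between directed-influence asymmetries
    (shape: pairs x frequencies) and macaque laminar projection values (per pair)."""
    return np.array([spearmanr(asymmetry[:, f], sln).correlation
                     for f in range(asymmetry.shape[1])])
```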

The Human Connectome Project (HCP) seeks to map the structural and functional connections between network elements in the human brain. Magnetoencephalography (MEG) provides a temporally rich source of information on brain network dynamics and represents one source of functional connectivity data to be provided by the HCP. High-quality MEG data will be collected from 50 twin pairs, both in the resting state and during performance of motor, working-memory, and language tasks.

The Human Connectome Project (HCP) is an ambitious 5-year effort to characterize brain connectivity and function and their variability in healthy adults. This review summarizes the data acquisition plans being implemented by a consortium of HCP investigators who will study a population of 1200 subjects (twins and their non-twin siblings) using multiple imaging modalities along with extensive behavioral and genetic data. The imaging modalities will include diffusion imaging (dMRI), resting-state fMRI (R-fMRI), task-evoked fMRI (T-fMRI), T1- and T2-weighted MRI for structural and myelin mapping, plus combined magnetoencephalography and electroencephalography (MEG/EEG).

In this work, we investigate the feasibility of estimating causal interactions between brain regions based on multivariate autoregressive (MAR) models fitted to magnetoencephalographic (MEG) sensor measurements. We first demonstrate the theoretical feasibility of estimating source-level causal interactions after projection of the sensor-level model coefficients onto the locations of the neural sources. Next, we show with simulated MEG data that causality, as measured by partial directed coherence (PDC), can be correctly reconstructed if the locations of the interacting brain areas are known.
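
A self-contained sketch of the two building blocks named here, a MAR fit and partial directed coherence, using plain least squares (the projection of sensor-level coefficients into source space, which the paper describes, is omitted):

```python
import numpy as np

def fit_mar(X, order):
    """Least-squares fit of X[t] = sum_k A_k X[t-k] + noise.
    X: (n_samples, n_channels). Returns A with shape (order, n_ch, n_ch)."""
    n, d = X.shape
    Y = X[order:]
    Z = np.hstack([X[order - k:n - k] for k in range(1, order + 1)])  # lagged data
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)                         # (order*d, d)
    return B.T.reshape(d, order, d).transpose(1, 0, 2)

def pdc(A, freqs, fs):
    """Partial directed coherence from MAR coefficients A (order, d, d).
    Returns an array indexed as [frequency, target i, source j]."""
    order, d, _ = A.shape
    out = np.empty((len(freqs), d, d))
    for fi, f in enumerate(freqs):
        Af = np.eye(d, dtype=complex)
        for k in range(order):
            Af -= A[k] * np.exp(-2j * np.pi * f * (k + 1) / fs)
        out[fi] = np.abs(Af) / np.sqrt((np.abs(Af) ** 2).sum(axis=0))  # column-normalised
    return out
```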
