Describing statistical dependencies is foundational to empirical scientific research. For uncovering intricate and possibly nonlinear dependencies between a single target variable and several source variables within a system, a principled and versatile framework can be found in the theory of partial information decomposition (PID). Nevertheless, the majority of existing PID measures are restricted to categorical variables, while many systems of interest in science are continuous.
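For reference, in the standard two-source setting PID decomposes the joint mutual information into unique, redundant (shared), and synergistic parts; this decomposition identity is common across PID variants, although the definitions of the individual terms differ between measures:

$$I(T; S_1, S_2) \;=\; U(T : S_1 \setminus S_2) \;+\; U(T : S_2 \setminus S_1) \;+\; R(T : S_1 ; S_2) \;+\; C(T : S_1 ; S_2),$$

where $U$, $R$, and $C$ denote the unique, redundant, and complementary (synergistic) contributions, respectively.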
Studies investigating neural information processing often implicitly ask two questions: which processing strategy out of several alternatives is used, and how that strategy is implemented in neural dynamics. Prime examples are studies on predictive coding. These often ask whether confirmed predictions about inputs or prediction errors between internal predictions and inputs are passed on in a hierarchical neural system, while at the same time looking for the neural correlates of coding for errors and predictions.
Nature relies on highly distributed computation for the processing of information in nervous systems across the entire animal kingdom. Such distributed computation can be more easily understood if decomposed into the three elementary components of information processing, i.e. information storage, information transfer, and information modification.
Aging is accompanied by unisensory decline. To compensate for this, older adults may increasingly rely on two complementary strategies: first, integrating more information from different sensory organs; second, according to the predictive coding (PC) model, forming "templates" (internal models or "priors") of the environment through experience.
Scan pattern analysis has been discussed as a promising tool in the context of real-time gaze-based applications. In particular, information-theoretic measures of scan path predictability, such as the gaze transition entropy (GTE), have been proposed for detecting relevant changes in user state or task demand. These measures model scan patterns as first-order Markov chains, assuming that only the location of the previous fixation is predictive of the next fixation in time.
Entropy-based measures are an important tool for studying human gaze behavior under various conditions. In particular, gaze transition entropy (GTE) is a popular method to quantify the predictability of a visual scanpath as the entropy of transitions between fixations and has been shown to correlate with changes in task demand or changes in observer state. Measuring scanpath predictability is thus a promising approach to identifying viewers' cognitive states in behavioral experiments or gaze-based applications.
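As a concrete illustration of the measure described in the two preceding abstracts, below is a minimal sketch of a plug-in GTE estimator. It treats a scanpath as a first-order Markov chain over areas of interest (AOIs) and computes the entropy of fixation transitions in bits. All names are illustrative; real applications typically weight by the stationary distribution of the chain and apply significance testing, as described in the literature.

```python
import numpy as np
from collections import Counter

def gaze_transition_entropy(fixations):
    """Plug-in gaze transition entropy (bits) of a scanpath.

    `fixations` is a sequence of AOI labels, one per fixation. The
    scanpath is modelled as a first-order Markov chain: only the
    previous fixation's AOI is assumed predictive of the next one.
    """
    transitions = Counter(zip(fixations[:-1], fixations[1:]))
    n_trans = sum(transitions.values())
    # Empirical probability of each AOI appearing as a transition source.
    from_counts = Counter(src for src, _ in transitions.elements())
    gte = 0.0
    for (src, dst), count in transitions.items():
        p_joint = count / n_trans           # p(src, dst)
        p_cond = count / from_counts[src]   # p(dst | src)
        gte -= p_joint * np.log2(p_cond)
    return gte

# Toy scanpaths over three AOIs: a perfectly regular pattern yields GTE = 0.
print(gaze_transition_entropy(list("ABCABCABC")))  # 0 bits, fully predictable
print(gaze_transition_entropy(list("ABCCABACB")))  # higher, less predictable
```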
Information transfer, measured by transfer entropy, is a key component of distributed computation. It is therefore important to understand the pattern of information transfer in order to unravel the distributed computational algorithms of a system. Since distributed computation in many natural systems is thought to rely on rhythmic processes, a frequency-resolved measure of information transfer is highly desirable.
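For reference, transfer entropy in its usual time-domain form quantifies the information the source's past provides about the target's next value beyond what the target's own past already provides:

$$\mathrm{TE}_{X \to Y} \;=\; I\bigl(Y_t \,;\, \mathbf{X}_{t-1}^{(l)} \,\big|\, \mathbf{Y}_{t-1}^{(k)}\bigr) \;=\; \sum p\bigl(y_t, \mathbf{y}_{t-1}^{(k)}, \mathbf{x}_{t-1}^{(l)}\bigr) \log \frac{p\bigl(y_t \mid \mathbf{y}_{t-1}^{(k)}, \mathbf{x}_{t-1}^{(l)}\bigr)}{p\bigl(y_t \mid \mathbf{y}_{t-1}^{(k)}\bigr)},$$

where $\mathbf{Y}_{t-1}^{(k)}$ and $\mathbf{X}_{t-1}^{(l)}$ are the length-$k$ and length-$l$ past states of target and source; a frequency-resolved variant then attributes this quantity to spectral components.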
Network inference algorithms are valuable tools for the study of large-scale neuroimaging datasets. Multivariate transfer entropy is well suited for this task, being a model-free measure that captures nonlinear and lagged dependencies between time series to infer a minimal directed network model. Greedy algorithms have been proposed to efficiently deal with high-dimensional datasets while avoiding redundant inferences and capturing synergistic effects.
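The shared structure of such greedy strategies can be sketched as follows. This is a toy version under simplifying assumptions: a plug-in estimator on discrete data and a fixed bit threshold stand in for the nearest-neighbour estimators and permutation-based statistical tests used in practice, and `cmi_discrete`, `greedy_parent_set`, and `min_gain` are illustrative names, not any library's API.

```python
import numpy as np
from collections import Counter

def cmi_discrete(x, y, z):
    """Plug-in conditional mutual information I(X; Y | Z) in bits.
    x, y are sequences of symbols; z is a (possibly empty) list of
    conditioning sequences of the same length."""
    n = len(x)
    zt = list(zip(*z)) if z else [()] * n
    p_xyz = Counter(zip(x, y, zt))
    p_xz, p_yz, p_z = Counter(zip(x, zt)), Counter(zip(y, zt)), Counter(zt)
    return sum((c / n) * np.log2(c * p_z[zi] / (p_xz[xi, zi] * p_yz[yi, zi]))
               for (xi, yi, zi), c in p_xyz.items())

def greedy_parent_set(sources, target, min_gain=0.05):
    """Greedily select sources for one target: repeatedly add the lagged
    source with the largest information gain about the target, conditional
    on the target's own past and the sources selected so far. The fixed
    `min_gain` (bits) stands in for the usual significance test."""
    t_now, t_past = target[1:], target[:-1]
    lagged = {name: s[:-1] for name, s in sources.items()}
    selected = []
    while len(selected) < len(lagged):
        cond = [t_past] + [lagged[name] for name in selected]
        gains = {name: cmi_discrete(lagged[name], t_now, cond)
                 for name in lagged if name not in selected}
        best = max(gains, key=gains.get)
        if gains[best] < min_gain:
            break
        selected.append(best)
    return selected

# Toy example: y is driven by x at lag 1; the pure-noise source is rejected.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 500)
noise = rng.integers(0, 2, 500)
y = np.roll(x, 1)
print(greedy_parent_set({"x": list(x), "noise": list(noise)}, list(y)))  # ['x']
```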
Predictive coding suggests that the brain infers the causes of its sensations by combining sensory evidence with internal predictions based on available prior knowledge. However, the neurophysiological correlates of (pre)activated prior knowledge serving these predictions are still unknown. Based on the idea that such preactivated prior knowledge must be maintained until needed, we measured the amount of maintained information in neural signals via the active information storage (AIS) measure.
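For reference, active information storage is defined as the mutual information between a process's embedded past state and its next value, i.e. the amount of information in the past of a process that is in use when computing its next step:

$$A_X \;=\; I\bigl(X_t \,;\, \mathbf{X}_{t-1}^{(k)}\bigr),$$

where $\mathbf{X}_{t-1}^{(k)}$ denotes the length-$k$ past state of $X$, in the same notation as the transfer entropy definition above.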
The disruption of coupling between brain areas has been suggested as the mechanism underlying loss of consciousness in anesthesia. This hypothesis has been tested previously by measuring the information transfer between brain areas, and by taking reduced information transfer as a proxy for decoupling. Yet, information transfer is a function of the amount of information available in the information source, such that transfer decreases even for unchanged coupling when less source information is available.
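One way to make this argument precise: written as a conditional mutual information, transfer entropy is upper-bounded by the entropy of the available source information, so it can drop even when the coupling itself is unchanged:

$$\mathrm{TE}_{X \to Y} \;=\; I\bigl(Y_t \,;\, \mathbf{X}_{t-1}^{(l)} \,\big|\, \mathbf{Y}_{t-1}^{(k)}\bigr) \;\le\; H\bigl(\mathbf{X}_{t-1}^{(l)}\bigr).$$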
View Article and Find Full Text PDFAnnu Int Conf IEEE Eng Med Biol Soc
August 2015
In anesthesia research, it is an open question how general anesthetics lead to loss of consciousness (LOC). It has been proposed that LOC may be caused by the disruption of cortical information processing, preventing information integration. Recent studies investigating information processing under anesthesia have therefore focused on changes in information transfer, measured by transfer entropy (TE).
Network graphs have become a popular tool to represent complex systems composed of many interacting subunits; especially in neuroscience, network graphs are increasingly used to represent and analyze functional interactions between multiple neural sources. Interactions are often reconstructed using pairwise bivariate analyses, which overlook the multivariate nature of interactions: investigating the effect of one source on a target requires taking all other sources into account as potential nuisance variables, and combinations of sources may act jointly on a given target. Bivariate analyses produce networks that may contain spurious interactions, which reduce the interpretability of the network and its graph metrics.
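Formally, this amounts to replacing the bivariate transfer entropy above with its conditional (multivariate) counterpart, which conditions on the past of all other candidate sources $\mathbf{Z}$:

$$\mathrm{TE}_{X \to Y \mid Z} \;=\; I\bigl(Y_t \,;\, \mathbf{X}_{t-1}^{(l)} \,\big|\, \mathbf{Y}_{t-1}^{(k)}, \mathbf{Z}_{t-1}\bigr).$$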
Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. The measure of information transfer, transfer entropy, in particular has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate the associated probability density functions.
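A minimal illustration of such an estimation is sketched below, under the assumption of discrete data where the required distributions can be estimated by simple counting; real analyses of continuous neural data would instead use, e.g., nearest-neighbour estimators, and all names here are illustrative.

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target, k=1, l=1):
    """Plug-in transfer entropy estimate (bits) for discrete series:
    TE = I(target_t ; source past (length l) | target past (length k)).
    Probabilities are estimated by counting observed states, which is
    why many realizations (long or repeated recordings) are needed."""
    m = max(k, l)
    # Build (present, target-past, source-past) state triples.
    states = [(target[t],
               tuple(target[t - k:t]),
               tuple(source[t - l:t])) for t in range(m, len(target))]
    n = len(states)
    counts_full = Counter(states)                              # (y_t, y_past, x_past)
    counts_pasts = Counter((yp, sp) for _, yp, sp in states)   # (y_past, x_past)
    counts_y_ypast = Counter((y, yp) for y, yp, _ in states)   # (y_t, y_past)
    counts_ypast = Counter(yp for _, yp, _ in states)          # y_past
    te = 0.0
    for (y, yp, sp), c in counts_full.items():
        te += (c / n) * np.log2(c * counts_ypast[yp]
                                / (counts_pasts[yp, sp] * counts_y_ypast[y, yp]))
    return te

# Toy check: the target copies the source with lag 1, so TE ~ 1 bit.
rng = np.random.default_rng(1)
src = rng.integers(0, 2, 2000)
tgt = np.roll(src, 1)
print(transfer_entropy(list(src), list(tgt)))  # close to 1.0
```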
Autism spectrum disorder (ASD) is a common developmental disorder characterized by communication difficulties and impaired social interaction. Recent results suggest altered brain dynamics as a potential cause of symptoms in ASD. Here, we aim to describe potential information-processing consequences of these alterations by measuring active information storage (AIS)-a key quantity in the theory of distributed computation in biological networks.
This paper (1) highlights the relevance of functional communication as an outcome parameter in Alzheimer disease (AD) clinical trials; (2) identifies studies that have reported functional communication outcome measures in AD clinical trials; (3) critically reviews the scales of functional communication used in recent AD clinical trials by summarizing the sources of information, characteristics, and available psychometric data for these scales; and (4) evaluates whether these measures actually or partially assess functional communication. To provide direction for future research and to assist in the development of a valid and reliable functional communication scale for the needs of AD clinical trials, we have included not only functional communication scales but also related concepts that offer useful starting points for developing such a scale. The 6 identified papers use 6 different scales as outcome measures for AD clinical trials, covering functional communication and related concepts.
To understand the function of networks, we have to identify not only the structure of their interactions but also their timing, as compromised interaction timing may disrupt network function. We demonstrate how both questions can be addressed using a modified estimator of transfer entropy. Transfer entropy is an implementation of Wiener's principle of observational causality based on information theory, and detects arbitrary linear and non-linear interactions.
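The excerpt does not spell out the modification; one common form of such timing-sensitive estimation (stated here as an assumption about the general approach, not as this paper's exact method) scans transfer entropy over a source-target delay $u$ and reads off the interaction delay as the maximizing value:

$$\hat{u} \;=\; \arg\max_u \; I\bigl(Y_t \,;\, \mathbf{X}_{t-u}^{(l)} \,\big|\, \mathbf{Y}_{t-1}^{(k)}\bigr).$$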