Individual-level resting-state networks (RSNs) based on resting-state fMRI (rs-fMRI) are of great interest due to evidence that network dysfunction may underlie some diseases. Most current rs-fMRI analyses use linear correlation. Since correlation is a bivariate measure of association, it discards most of the information contained in the spatial variation of the thousands of hemodynamic signals within the voxels in a given brain region.
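To make the contrast concrete, the sketch below (an illustration only, not the paper's method) computes the standard bivariate correlation connectivity matrix from region-averaged BOLD signals; the array names and shapes are assumptions.

```python
# Minimal sketch of the standard bivariate approach the abstract refers to:
# Pearson correlation between region-averaged BOLD signals.
# All array names and shapes here are illustrative assumptions.
import numpy as np

def correlation_connectivity(voxel_ts, region_labels):
    """voxel_ts: (T, V) BOLD time series; region_labels: (V,) region index per voxel."""
    regions = np.unique(region_labels)
    # Averaging over voxels discards the within-region spatial variation
    # that the abstract argues carries additional information.
    region_ts = np.column_stack(
        [voxel_ts[:, region_labels == r].mean(axis=1) for r in regions]
    )
    return np.corrcoef(region_ts, rowvar=False)  # (R, R) connectivity matrix
```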
A cycle in a brain network is a subset of a connected component with redundant additional connections. If a connected component contains many cycles, it is more densely connected. Whereas the number of connected components represents the integration of the brain network, the number of cycles represents how strong that integration is.
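As an illustration of these two quantities (a sketch under assumed inputs, not taken from the paper), the number of connected components and the number of independent cycles of an undirected brain graph can be counted as follows; the cycle count is the cycle rank E - V + C.

```python
# Illustrative sketch, assuming an undirected brain graph G built elsewhere.
# Counts connected components (integration) and independent cycles
# (cycle rank, beta_1 = E - V + C), the two quantities contrasted above.
import networkx as nx

def component_and_cycle_counts(G: nx.Graph):
    n_components = nx.number_connected_components(G)
    n_cycles = G.number_of_edges() - G.number_of_nodes() + n_components
    return n_components, n_cycles
```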
In recent years, driven by scientific and clinical concerns, there has been an increased interest in the analysis of functional brain networks. The goal of these analyses is to better understand how brain regions interact, how this depends upon experimental conditions and behavioral measures, and how anomalies (disease) can be recognized. In this paper, we provide, first, a brief review of some of the main existing methods of functional brain network analysis.
Many existing brain network distances are based on matrix norms. Such element-wise differences may fail to capture underlying topological differences. Further, matrix norms are sensitive to outliers.
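For concreteness, the following is a minimal sketch of the kind of element-wise matrix-norm distance being critiqued, applied to two connectivity matrices of the same size (an assumption for illustration, not code from the paper).

```python
# Sketch of element-wise matrix-norm distances between connectivity matrices;
# A and B are square connectivity matrices of identical shape (assumed).
import numpy as np

def frobenius_distance(A, B):
    return np.linalg.norm(A - B, ord="fro")

def linf_distance(A, B):
    # Maximum absolute element-wise difference -- highly sensitive to a single outlier edge.
    return np.max(np.abs(A - B))
```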
There is intense interest in fMRI research on whole-brain functional connectivity; however, two fundamental issues remain unresolved: the impact of spatiotemporal data resolution (spatial parcellation and temporal sampling) and the impact of the network construction method on the reliability of functional brain networks. In particular, the impact of spatiotemporal data resolution on the resulting connectivity findings has not been sufficiently investigated. In fact, a number of studies have already observed that functional networks often give different conclusions across different parcellation scales.
The recent interest in the dynamics of networks and the advent, across a range of applications, of measuring modalities that operate on different temporal scales have put the spotlight on some significant gaps in the theory of multivariate time series. Fundamental to the description of network dynamics is the direction of interaction between nodes, accompanied by a measure of the strength of such interactions. Granger causality and its associated frequency-domain strength measures (GEMs, due to Geweke) provide a framework for the formulation and analysis of these issues.
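As a rough illustration only: a time-domain Granger-causality test on simulated data can be run with statsmodels as below. This does not compute Geweke's frequency-domain measures discussed above; the simulated series and lag choice are assumptions.

```python
# Minimal time-domain Granger-causality check on simulated data (illustrative only).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.roll(x, 2) + 0.5 * rng.standard_normal(500)   # y is driven by lagged x

# Tests whether the second column (x) Granger-causes the first column (y), for lags 1..4.
results = grangercausalitytests(np.column_stack([y, x]), maxlag=4)
```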
In this paper, we describe a new method for solving the magnetoencephalography inverse problem: temporal vector ℓ0-penalized least squares (TV-L0LS). The method calculates maximally sparse current dipole magnitudes and directions via spatial ℓ0 regularization on a cortically-distributed source grid, while constraining the solution to be smooth with respect to time. We demonstrate the utility of this method on real and simulated data by comparison to existing methods.
We develop a new approach to functional brain connectivity analysis, which deals with four fundamental aspects of connectivity not previously jointly treated. These are: temporal correlation, spurious spatial correlation, sparsity, and network construction using trajectory (as opposed to marginal) Mutual Information. We call the new method Sparse Conditional Trajectory Mutual Information (SCoTMI).
Annu Int Conf IEEE Eng Med Biol Soc
July 2013
In a number of application areas, such as neural coding, there is interest in computing, from real data, the information flows between stochastic processes, one of which is a point process. Of particular interest is the calculation of the trajectory (as opposed to marginal) mutual information between an observed point process, which is influenced by an underlying but unobserved analog stochastic process, i.e.
There has been a fast-growing demand for analysis tools for multivariate point-process data driven by work in neural coding and, more recently, high-frequency finance. Here we develop a true or exact (as opposed to one based on time binning) principal components analysis for preliminary processing of multivariate point processes. We provide a maximum likelihood estimator, an algorithm for maximization involving steepest ascent on two Stiefel manifolds, and novel constrained asymptotic analysis.
The standard modeling framework in functional magnetic resonance imaging (fMRI) is predicated on assumptions of linearity, time invariance and stationarity. These assumptions are rarely checked because doing so requires specialized software, although failure to do so can lead to bias and mistaken inference. Identifying model violations is an essential but largely neglected step in standard fMRI data analysis.
A persistent problem in developing plausible neurophysiological models of perception, cognition, and action is the difficulty of characterizing the interactions between different neural systems. Previous studies have approached this problem by estimating causal influences across brain areas activated during cognitive processing using structural equation modeling (SEM) and, more recently, with Granger-Geweke causality. While SEM is complicated by the need for a priori directional connectivity information, the temporal resolution of dynamic Granger-Geweke estimates is limited because the underlying autoregressive (AR) models assume stationarity over the period of analysis.
Gradient-based optical flow estimation methods typically do not take into account errors in the spatial derivative estimates. The presence of these errors causes an errors-in-variables (EIV) problem. Moreover, the use of finite difference methods to calculate these derivatives ensures that the errors are strongly correlated between pixels.
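A minimal sketch of the kind of plain gradient-based estimate in question (Lucas-Kanade style, single window, illustrative only): it treats the finite-difference derivatives as error-free, which is exactly the errors-in-variables issue raised above. It is not the estimator proposed in the paper.

```python
# Plain least-squares optical flow for one small window, ignoring derivative errors.
import numpy as np

def window_flow(I1, I2):
    """I1, I2: two consecutive grayscale frames (same small window), float arrays."""
    Ix = np.gradient(I1, axis=1).ravel()   # spatial derivatives via finite differences
    Iy = np.gradient(I1, axis=0).ravel()
    It = (I2 - I1).ravel()                 # temporal derivative
    A = np.column_stack([Ix, Iy])
    # Ordinary least squares: A @ [u, v] ~= -It, treating Ix, Iy as exact
    uv, *_ = np.linalg.lstsq(A, -It, rcond=None)
    return uv  # estimated (u, v) displacement for the window
```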
Conf Proc IEEE Eng Med Biol Soc
September 2007
Developing optimal strategies for constructing and testing decoding algorithms is an important question in computational neuroscience. In this field, decoding algorithms are mathematical methods that model ensemble neural spiking activity as it dynamically represents a biological signal. We present a recursive decoding algorithm based on a Bayesian point process model of individual neuron spiking activity and a linear stochastic state-space model of the biological signal. We assess the accuracy of the algorithm by computing, along with the decoding error, the true coverage probability of the approximate 0.
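For orientation, the sketch below shows a generic Gaussian-approximation point-process decoding filter for a one-dimensional random-walk state observed through neurons with Gaussian tuning curves. The setup, tuning model, and parameter names are assumptions for illustration, not the authors' exact algorithm.

```python
# Generic recursive point-process decoder (Gaussian approximation), 1-D random-walk state.
import numpy as np

def decode(spikes, mu, sigma, rate_max, dt, q, x0=0.0, p0=1.0):
    """spikes: (K, C) 0/1 spike indicators per time bin; mu: (C,) tuning-curve centers."""
    K, C = spikes.shape
    x_post, p_post = x0, p0
    xs = np.empty(K)
    for k in range(K):
        # Prediction step under random-walk dynamics
        x_pred, p_pred = x_post, p_post + q
        lam = rate_max * np.exp(-0.5 * ((x_pred - mu) / sigma) ** 2)  # conditional intensities
        dlog = -(x_pred - mu) / sigma**2          # d log(lambda) / dx
        d2log = -1.0 / sigma**2                   # d^2 log(lambda) / dx^2
        innov = spikes[k] - lam * dt              # spike-count innovations
        # Update step: Gaussian approximation to the posterior
        p_post = 1.0 / (1.0 / p_pred + np.sum(dlog**2 * lam * dt - innov * d2log))
        x_post = x_pred + p_post * np.sum(dlog * innov)
        xs[k] = x_post
    return xs
```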
An important issue in functional MRI analysis is accurate characterisation of the noise processes present in the data. Whilst conventional fMRI noise representations often assume stationarity (or time-invariance) in the noise generating sources, such approaches may serve to suppress important dynamic information about brain function. As an alternative to these fixed temporal assumptions, we present in this paper two time-varying procedures for examining nonstationary noise structure in fMRI data.
Characterizing the spatiotemporal behavior of the BOLD signal in functional Magnetic Resonance Imaging (fMRI) is a central issue in understanding brain function. While the nature of functional activation clusters is fundamentally heterogeneous, many current analysis approaches use spatially invariant models that can degrade anatomic boundaries and distort the underlying spatiotemporal signal. Furthermore, few analysis approaches use true spatiotemporal continuity in their statistical formulations.
Neural receptive fields are dynamic in that with experience, neurons change their spiking responses to relevant stimuli. To understand how neural systems adapt their representations of biological information, analyses of receptive field plasticity from experimental measurements are crucial. Adaptive signal processing, the well-established engineering discipline for characterizing the temporal evolution of system parameters, suggests a framework for studying the plasticity of receptive fields.
Neural spike train decoding algorithms and techniques to compute Shannon mutual information are important methods for analyzing how neural systems represent biological signals. Decoding algorithms are also one of several strategies being used to design controls for brain-machine interfaces. Developing optimal strategies to design decoding algorithms and to compute mutual information is therefore an important problem in computational neuroscience.
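As a small illustration (not the estimator used in the paper), a plug-in Shannon mutual information between a discrete stimulus and binned spike counts can be computed from their joint histogram:

```python
# Plug-in mutual information from a joint histogram (illustrative sketch only).
import numpy as np

def plugin_mutual_information(stimulus, spike_counts):
    """stimulus, spike_counts: integer-coded 1-D arrays of equal length; returns MI in bits."""
    joint = np.zeros((stimulus.max() + 1, spike_counts.max() + 1))
    np.add.at(joint, (stimulus, spike_counts), 1)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])))
```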
Artifacts generated by motion (e.g., ballistocardiac) of the head inside a high magnetic field corrupt recordings of EEG and EPs.
Neural receptive fields are frequently plastic: a neural response to a stimulus can change over time as a result of experience. We developed an adaptive point process filtering algorithm that allowed us to estimate the dynamics of both the spatial receptive field (spatial intensity function) and the interspike interval structure (temporal intensity function) of neural spike trains on a millisecond time scale without binning over time or space. We applied this algorithm to both simulated data and recordings of putative excitatory neurons from the CA1 region of the hippocampus and the deep layers of the entorhinal cortex (EC) of awake, behaving rats.
Proc Natl Acad Sci U S A
October 2001
Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale.
In this work we treat fMRI data analysis as a spatiotemporal system identification problem and address issues of model formulation, estimation, and model comparison. We present a new model that includes a physiologically based hemodynamic response and an empirically derived low-frequency noise model. We introduce an estimation method employing spatial regularization that improves the precision of spatially varying noise estimates.
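A minimal GLM-style sketch along these lines, using the common double-gamma canonical HRF as a stand-in (an assumption; the physiologically based response model described above may differ):

```python
# Convolve a stimulus train with a canonical double-gamma HRF and fit by least squares.
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr, duration=32.0):
    t = np.arange(0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0   # peak near 6 s, undershoot near 16 s
    return h / h.sum()

def fit_voxel(stimulus, bold, tr):
    """stimulus: 0/1 event indicator per scan; bold: voxel time series of the same length."""
    regressor = np.convolve(stimulus, canonical_hrf(tr))[: len(bold)]
    X = np.column_stack([regressor, np.ones_like(bold)])   # HRF regressor + intercept
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return beta  # [activation amplitude, baseline]
```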
In the last half decade, fast methods of magnetic resonance imaging have led to the possibility, for the first time, of non-invasive dynamic brain imaging. This has led to an explosion of work in the neurosciences. From a signal processing viewpoint, the problems are those of nonlinear spatio-temporal system identification.