Publications by Jean-Pascal Pfister

Bayesian Active Learning (BAL) is an efficient framework for learning the parameters of a model, in which input stimuli are selected to maximize the mutual information between the observations and the unknown parameters. However, the applicability of BAL to experiments is limited, as it requires performing high-dimensional integrations and optimizations in real time. Current methods are either too time-consuming or only applicable to specific models.
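
To make the stimulus-selection step concrete, here is a minimal sketch of the BAL criterion for a toy Bernoulli model on a discretized parameter grid; the model, the grid, and the function names are illustrative assumptions, not the method proposed in the paper.

```python
# Toy Bayesian Active Learning: pick the stimulus x that maximizes the
# mutual information I(y; theta | x) between the next binary observation
# y ~ Bernoulli(sigmoid(theta * x)) and the unknown parameter theta.
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def mutual_information(prior, thetas, x):
    """I(y; theta | x) = H(y | x) - E_theta[H(y | theta, x)]."""
    p_y1_given_theta = 1.0 / (1.0 + np.exp(-thetas * x))
    p_y1 = np.sum(prior * p_y1_given_theta)            # marginal P(y=1 | x)
    h_marginal = entropy(np.array([p_y1, 1.0 - p_y1]))
    h_conditional = np.sum(prior * np.array(
        [entropy(np.array([p, 1.0 - p])) for p in p_y1_given_theta]))
    return h_marginal - h_conditional

thetas = np.linspace(-3, 3, 121)        # discretized parameter grid
prior = np.ones_like(thetas) / len(thetas)
candidates = np.linspace(-2, 2, 41)     # candidate input stimuli
best_x = max(candidates, key=lambda x: mutual_information(prior, thetas, x))
print("most informative stimulus:", best_x)
```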

Most normative models in computational neuroscience describe the task of learning as the optimisation of a cost function with respect to a set of parameters. However, learning as optimisation fails to account for a time-varying environment during the learning process, and the resulting point estimate in parameter space does not account for uncertainty. Here, we frame learning as filtering.
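
As an illustration of the learning-as-filtering view, the sketch below tracks a drifting scalar weight with a one-dimensional Kalman filter, so the learner carries a full posterior (mean and variance) rather than a point estimate. The linear-Gaussian model is an assumption made for illustration, not the paper's model.

```python
# Learning as filtering (toy version): a Kalman filter tracks a weight
# that keeps drifting, instead of optimising toward a fixed point estimate.
import numpy as np

rng = np.random.default_rng(0)
T, q, r = 200, 0.01, 0.5                  # steps, drift variance, obs. noise
w_true, w_hat, p_hat = 1.0, 0.0, 1.0      # true weight, posterior mean/variance
for t in range(T):
    w_true += rng.normal(0.0, np.sqrt(q))       # the environment drifts
    y = w_true + rng.normal(0.0, np.sqrt(r))    # noisy observation
    p_pred = p_hat + q                          # predict step
    k = p_pred / (p_pred + r)                   # Kalman gain
    w_hat += k * (y - w_hat)                    # update posterior mean
    p_hat = (1.0 - k) * p_pred                  # update posterior variance
print(f"estimate {w_hat:.2f} +/- {np.sqrt(p_hat):.2f}, true {w_true:.2f}")
```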

Learning, especially rapid learning, is critical for survival. However, learning is hard; a large number of synaptic weights must be set based on noisy, often ambiguous, sensory information. In such a high-noise regime, keeping track of probability distributions over weights is the optimal strategy.

Synapses are highly stochastic transmission units. A classical model describing this stochastic transmission is called the binomial model, and its underlying parameters can be estimated from postsynaptic responses to evoked stimuli. The accuracy of parameter estimates obtained via such a model-based approach depends on the identifiability of the model.
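
As a hedged illustration of such model-based estimation, the sketch below simulates binomial release with Gaussian recording noise and recovers (N, p) by grid-search maximum likelihood, with the quantal size q assumed known; all of these modelling choices are illustrative, not the paper's estimator.

```python
# Toy binomial synapse: responses r = q * k + noise, with k ~ Binomial(N, p)
# released vesicles; estimate (N, p) by maximising the marginal likelihood.
import numpy as np
from scipy.stats import binom, norm

rng = np.random.default_rng(1)
N_true, p_true, q_true, sigma = 5, 0.4, 1.0, 0.2
k = rng.binomial(N_true, p_true, size=500)
responses = q_true * k + rng.normal(0.0, sigma, size=500)

def log_likelihood(N, p, q):
    # marginalise over the unobserved number of released vesicles k
    ks = np.arange(N + 1)
    mix = binom.pmf(ks, N, p)                           # P(k | N, p)
    dens = norm.pdf(responses[:, None], q * ks, sigma)  # per-k likelihoods
    return np.sum(np.log(dens @ mix))

grid = [(N, p) for N in range(1, 11) for p in np.linspace(0.05, 0.95, 19)]
N_hat, p_hat = max(grid, key=lambda np_: log_likelihood(*np_, q_true))
print("estimated N, p:", N_hat, round(p_hat, 2))
```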

The distribution of intervals between human actions such as email posts or keyboard strokes exhibits distinct properties at short versus long timescales. For instance, at long timescales, which are presumably controlled by complex processes such as planning and decision making, it has been shown that those interevent intervals follow a scale-invariant (or power-law) distribution. In contrast, at shorter timescales, which are governed by different processes such as sensorimotor skill, they do not follow the same distribution, and we know little about how they relate to the scale-invariant pattern.
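
For the long-timescale regime, a scale-invariant interval distribution can be checked with the standard maximum-likelihood exponent estimator; the sketch below samples power-law intervals above a known lower cutoff and recovers the exponent. This is a generic illustration, not the paper's analysis.

```python
# Sample power-law interevent intervals by inverse-CDF sampling and recover
# the exponent with the standard MLE, assuming a known lower cutoff x_min.
import numpy as np

rng = np.random.default_rng(2)
alpha, x_min, n = 2.5, 1.0, 10_000
u = rng.uniform(size=n)
intervals = x_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))  # inverse-CDF sampling

alpha_hat = 1.0 + n / np.sum(np.log(intervals / x_min))  # MLE exponent
print(f"true exponent {alpha}, estimated {alpha_hat:.2f}")
```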

Accumulating evidence indicates that the human brain copes with sensory uncertainty in accordance with Bayes' rule. However, it is unknown how humans make predictions when the generative model of the task at hand is described by uncertain parameters. Here, we tested whether and how humans take parameter uncertainty into account in a regression task.
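
A minimal sketch of what taking parameter uncertainty into account means in a regression task: in Bayesian linear regression with a Gaussian prior and known noise variance (illustrative assumptions, not the paper's task), the predictive variance contains a contribution from the posterior uncertainty about the slope, on top of the observation noise.

```python
# Bayesian linear regression through the origin: the predictive variance
# at a new input includes the posterior variance of the slope.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, size=20)
y = 1.5 * x + rng.normal(0, 0.3, size=20)      # data from true slope 1.5
sigma2, tau2 = 0.3**2, 1.0**2                  # noise and prior variances

# Gaussian posterior over the slope w (conjugate update, zero-mean prior)
post_var = 1.0 / (np.sum(x**2) / sigma2 + 1.0 / tau2)
post_mean = post_var * np.sum(x * y) / sigma2

x_new = 0.8
pred_mean = post_mean * x_new
pred_var = sigma2 + post_var * x_new**2        # parameter uncertainty adds in
print(f"prediction {pred_mean:.2f} +/- {np.sqrt(pred_var):.2f}")
```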

This is a PLOS Computational Biology Education paper. The idea that the brain functions so as to minimize certain costs pervades theoretical neuroscience. Because a cost function by itself does not predict how the brain finds its minima, additional assumptions about the optimization method need to be made to predict the dynamics of physiological quantities.
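
A minimal sketch of that point: a cost function alone does not determine any dynamics, and only after assuming an optimization method (here, plain gradient descent) does a trajectory of the parameter follow. The quadratic cost, step size, and dynamics are illustrative choices.

```python
# A cost C(w) = 0.5 * (w - w_star)**2 says where the minimum is, but the
# assumed optimization method supplies the dynamics: dw/dt = -eta * grad C(w).
def grad_C(w, w_star=2.0):
    return w - w_star            # gradient of the quadratic cost

w, eta = 0.0, 0.1
trajectory = [w]
for _ in range(50):
    w -= eta * grad_C(w)         # the assumed optimization dynamics
    trajectory.append(w)
print("final w:", round(trajectory[-1], 3))   # converges toward w_star = 2.0
```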

Synaptic computation is believed to underlie many forms of animal behavior. A correct identification of synaptic transmission properties is thus crucial for a better understanding of how the brain processes information, stores memories and learns. Recently, a number of new statistical methods for inferring synaptic transmission parameters have been introduced.

A correction to this article has been published and is linked from the HTML version of this paper; the error has been fixed.

The robust estimation of dynamical hidden features, such as the position of prey, based on sensory inputs is one of the hallmarks of perception. This dynamical estimation can be rigorously formulated by nonlinear Bayesian filtering theory. Recent experimental and behavioral studies have shown that animals' performance in many tasks is consistent with such a Bayesian statistical interpretation.
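
A standard way to implement nonlinear Bayesian filtering is a bootstrap particle filter; the sketch below tracks a hidden one-dimensional position under random-walk dynamics and a nonlinear observation model. Both modelling choices are illustrative assumptions, not the paper's task.

```python
# Bootstrap particle filter: propagate particles through the dynamics,
# weight them by the observation likelihood, then resample.
import numpy as np

rng = np.random.default_rng(4)
n_particles, T, q, r = 1000, 100, 0.05, 0.1
x_true = 0.0
particles = rng.normal(0.0, 1.0, n_particles)
for t in range(T):
    x_true += rng.normal(0.0, np.sqrt(q))                 # hidden dynamics
    y = np.sin(x_true) + rng.normal(0.0, np.sqrt(r))      # nonlinear observation
    particles += rng.normal(0.0, np.sqrt(q), n_particles)   # propagate
    w = np.exp(-0.5 * (y - np.sin(particles))**2 / r)       # likelihood weights
    w /= np.sum(w)
    idx = rng.choice(n_particles, n_particles, p=w)         # resample
    particles = particles[idx]
print(f"true {x_true:.2f}, estimate {particles.mean():.2f}")
```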

Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model, as well as simplified neuron models such as the class of integrate-and-fire models, relate the input current to the membrane potential of the neuron. Such models have been extensively fitted to in vitro data, where the input current is controlled.
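
For concreteness, here is a minimal leaky integrate-and-fire simulation of the kind of input-current-to-membrane-potential mapping described above; all parameter values are illustrative.

```python
# Leaky integrate-and-fire: the membrane potential integrates a controlled
# injected current and is reset to v_reset after each threshold crossing.
import numpy as np

dt, tau_m, R = 0.1, 10.0, 1.0                       # ms, ms, resistance
v_rest, v_thresh, v_reset = -70.0, -50.0, -70.0     # mV
t = np.arange(0, 200, dt)                           # ms
I = np.where((t > 50) & (t < 150), 25.0, 0.0)       # controlled step current

v, spikes = v_rest, []
for i, current in enumerate(I):
    dv = (-(v - v_rest) + R * current) / tau_m      # membrane equation
    v += dt * dv
    if v >= v_thresh:                               # threshold crossing: spike
        spikes.append(t[i])
        v = v_reset
print(f"{len(spikes)} spikes during the current step")
```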

Neuropathic pain caused by peripheral nerve injury is a debilitating neurological condition of high clinical relevance. On the cellular level, the elevated pain sensitivity is induced by plasticity of neuronal function along the pain pathway. Changes in cortical areas involved in pain processing contribute to the development of neuropathic pain.

Storing and recalling spiking sequences is a general problem the brain needs to solve. It is, however, unclear what type of biologically plausible learning rule is suited to learn a wide class of spatiotemporal activity patterns in a robust way. Here we consider a recurrent network of stochastic spiking neurons composed of both visible and hidden neurons.

Synaptic strength depresses for low and potentiates for high activation of the postsynaptic neuron. This feature is a key property of the Bienenstock-Cooper-Munro (BCM) synaptic learning rule, which has been shown to maximize the selectivity of the postsynaptic neuron, and thereby offers a possible explanation for experience-dependent cortical plasticity such as orientation selectivity. However, the BCM framework is rate-based and a significant amount of recent work has shown that synaptic plasticity also depends on the precise timing of presynaptic and postsynaptic spikes.
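
A minimal sketch of the rate-based BCM rule referred to above, with a sliding threshold that tracks a running average of the squared postsynaptic rate; the linear neuron and all constants are illustrative assumptions.

```python
# BCM rule: dw/dt = x * y * (y - theta), depressing below the threshold
# theta and potentiating above it, with theta sliding to track E[y^2].
import numpy as np

rng = np.random.default_rng(5)
w, theta = 0.5, 1.0
eta, tau_theta = 0.01, 100.0
for step in range(5000):
    x = rng.uniform(0.0, 2.0)            # presynaptic rate
    y = w * x                            # postsynaptic rate (linear neuron)
    w += eta * x * y * (y - theta)       # depress below, potentiate above theta
    theta += (y**2 - theta) / tau_theta  # sliding threshold
    w = max(w, 0.0)                      # keep the weight non-negative
print(f"w = {w:.2f}, theta = {theta:.2f}")
```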

Spike-frequency adaptation is known to enhance the transmission of information in sensory spiking neurons by rescaling the dynamic range for input processing, matching it to the temporal statistics of the sensory stimulus. Achieving maximal information transmission has also been recently postulated as a role for spike-timing-dependent plasticity (STDP). However, the link between optimal plasticity and STDP in cortex remains loose, as does the relationship between STDP and adaptation processes.

The trajectory of the somatic membrane potential of a cortical neuron exactly reflects the computations performed on its afferent inputs. However, the spikes of such a neuron are a very low-dimensional and discrete projection of this continually evolving signal. We explored the possibility that the neuron's efferent synapses perform the critical computational step of estimating the membrane potential trajectory from the spikes.

Highly synchronized neural networks can be the source of various pathologies such as Parkinson's disease or essential tremor. Therefore, it is crucial to better understand the dynamics of such networks and the conditions under which a high level of synchronization can be observed. One of the key factors that influences the level of synchronization is the type of learning rule that governs synaptic plasticity.

We studied the hypothesis that synaptic dynamics is controlled by three basic principles: (1) synapses adapt their weights so that neurons can effectively transmit information, (2) homeostatic processes stabilize the mean firing rate of the postsynaptic neuron, and (3) weak synapses adapt more slowly than strong ones, while maintenance of strong synapses is costly. Our results show that a synaptic update rule derived from these principles shares features with spike-timing-dependent plasticity, is sensitive to correlations in the input, and is useful for synaptic memory. Moreover, input selectivity (sharply tuned receptive fields) of postsynaptic neurons develops only if stimuli with strong features are presented.

Classical experiments on spike-timing-dependent plasticity (STDP) use a protocol based on pairs of presynaptic and postsynaptic spikes repeated at a given frequency to induce synaptic potentiation or depression. Standard STDP models have therefore expressed the weight change as a function of pairs of presynaptic and postsynaptic spikes. Unfortunately, such pair-based STDP models cannot account for the dependence on the repetition frequency of the spike pairs.
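
For reference, here is a minimal pair-based STDP model of the kind criticised here: the weight change depends only on the pre/post spike-time difference, which is why it cannot capture any dependence on pairing frequency. The time constants are typical fitted values from the literature, used purely for illustration.

```python
# Pair-based STDP window: potentiation for pre-before-post pairings,
# depression for post-before-pre, each decaying exponentially with |delta_t|.
import numpy as np

A_plus, A_minus, tau_plus, tau_minus = 0.01, 0.012, 16.8, 33.7  # ms constants

def pair_stdp(delta_t):
    """Weight change for delta_t = t_post - t_pre (ms)."""
    if delta_t >= 0:
        return A_plus * np.exp(-delta_t / tau_plus)    # pre before post: LTP
    return -A_minus * np.exp(delta_t / tau_minus)      # post before pre: LTD

for dt in (-20.0, -5.0, 5.0, 20.0):
    print(f"delta_t = {dt:+.0f} ms -> dw = {pair_stdp(dt):+.5f}")
```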

In timing-based neural codes, neurons have to emit action potentials at precise moments in time. We use a supervised learning paradigm to derive a synaptic update rule that optimizes by gradient ascent the likelihood of postsynaptic firing at one or several desired firing times. We find that the optimal strategy of up- and downregulating synaptic efficacies depends on the relative timing between presynaptic spike arrival and desired postsynaptic firing.
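
A hedged sketch of this kind of derivation, assuming an escape-rate neuron with firing intensity rho(t) = g(u(t)) and membrane potential u(t) = sum_i w_i sum_f eps(t - t_i^f); these modelling choices are assumptions, not necessarily the paper's exact setup. The point-process log-likelihood of firing at the desired times and its gradient with respect to a weight w_i are:

```latex
% Log-likelihood of a spike train under an inhomogeneous point process with
% intensity g(u(t)), and the gradient-ascent weight update it induces.
\begin{align}
  \log L &= \sum_{t^{\mathrm{des}}} \log g\big(u(t^{\mathrm{des}})\big)
            - \int_0^T g\big(u(s)\big)\, ds, \\
  \Delta w_i &\propto \frac{\partial \log L}{\partial w_i}
    = \sum_{t^{\mathrm{des}}}
      \frac{g'\big(u(t^{\mathrm{des}})\big)}{g\big(u(t^{\mathrm{des}})\big)}
      \sum_f \epsilon\big(t^{\mathrm{des}} - t_i^f\big)
      - \int_0^T g'\big(u(s)\big) \sum_f \epsilon\big(s - t_i^f\big)\, ds.
\end{align}
```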

Maximization of information transmission by a spiking-neuron model predicts changes of synaptic connections that depend on timing of pre- and postsynaptic spikes and on the postsynaptic membrane potential. Under the assumption of Poisson firing statistics, the synaptic update rule exhibits all of the features of the Bienenstock-Cooper-Munro rule, in particular, regimes of synaptic potentiation and depression separated by a sliding threshold. Moreover, the learning rule is also applicable to the more realistic case of neuron models with refractoriness, and is sensitive to correlations between input spikes, even in the absence of presynaptic rate modulation.
