Given the rapid advancement of artificial intelligence, understanding the foundations of intelligent behaviour is increasingly important. Active inference, regarded as a general theory of behaviour, offers a principled approach to probing the basis of sophistication in planning and decision-making. This paper examines two decision-making schemes in active inference based on "planning" and "learning from experience".
This paper concerns the distributed intelligence, or federated inference, that emerges under belief-sharing among agents who share a common world and world model. Imagine, for example, several animals keeping a lookout for predators. Their collective surveillance rests upon being able to communicate their beliefs about what they see among themselves.
Empirical applications of the free-energy principle are not straightforward because they entail a commitment to a particular process theory, especially at the cellular and synaptic levels. Using a recently established reverse-engineering technique, we confirm the quantitative predictions of the free-energy principle using in vitro networks of rat cortical neurons that perform causal inference. Upon receiving electrical stimuli generated by mixing two hidden sources, the neurons self-organised to selectively encode the two sources.
This work considers a class of canonical neural networks comprising rate-coding models, wherein neural activity and plasticity minimise a common cost function, with plasticity modulated after a certain delay. We show that such neural networks implicitly perform active inference and learning to minimise the risk associated with future outcomes. Mathematical analyses demonstrate that this biological optimisation can be cast as maximisation of model evidence, or equivalently minimisation of variational free energy, under the well-known form of a partially observed Markov decision process model.
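For reference, the variational free energy minimised in such schemes takes the standard form (a textbook statement, not the paper's specific parameterisation):

    F[q] = \mathbb{E}_{q(s)}[\ln q(s) - \ln p(o, s)]
         = D_{\mathrm{KL}}[q(s) \,\|\, p(s \mid o)] - \ln p(o),

so minimising F with respect to the recognition density q simultaneously tightens the bound on the log model evidence \ln p(o) and drives q toward the posterior over hidden states.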
The neuronal substrates that implement the free-energy principle and ensuing active inference at the neuron and synapse level have not been fully elucidated. This Review considers possible neuronal substrates underlying the principle. First, the foundations of the free-energy principle are introduced, and then its ability to empirically explain various brain functions and psychological and biological phenomena in terms of Bayesian inference is described.
Animals make decisions according to the principles of reward-value maximization and surprise minimization. It remains unclear how these principles are represented in the brain and reflected in behavior. We addressed this question using a closed-loop virtual reality system to train adult zebrafish for active avoidance.
For many years, a combination of principal component analysis (PCA) and independent component analysis (ICA) has been used for blind source separation (BSS). However, it remains unclear why these linear methods work well with real-world data that involve nonlinear source mixtures. This work theoretically validates that a cascade of linear PCA and ICA can solve a nonlinear BSS problem accurately, provided the sensory inputs are generated from hidden sources via nonlinear mappings with sufficient dimensionality.
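A minimal sketch of the cascade on synthetic data may help fix ideas; the generative setup below (random tanh features as the nonlinear mixture, scikit-learn's PCA and FastICA) is an illustrative assumption, not the paper's exact construction:

    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    rng = np.random.default_rng(0)
    T = 10_000
    s = rng.uniform(-1, 1, size=(T, 2))              # two hidden sources

    # Nonlinear, high-dimensional mixture of the sources (assumed form).
    A = rng.normal(size=(2, 100))
    x = np.tanh(s @ A) + 0.01 * rng.normal(size=(T, 100))

    z = PCA(n_components=2).fit_transform(x)         # linear compression
    u = FastICA(n_components=2, random_state=0).fit_transform(z)  # linear unmixing

    # Correlation of each recovered component with each true source.
    print(np.round(np.corrcoef(np.hstack([u, s]).T)[:2, 2:], 2))

With sufficient input dimensionality, each recovered component should correlate strongly with exactly one source, illustrating how a purely linear cascade can undo a nonlinear mixing.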
Recent remarkable advances in experimental techniques have provided a background for inferring neuronal couplings from point-process data that include large numbers of neurons. Here, we propose a systematic procedure for pre- and postprocessing generic point-process data in an objective manner, so that the data can be handled within the framework of a simple binary statistical model: the Ising or generalized McCulloch-Pitts model. The procedure has two steps: (1) determining the time-bin size for transforming the point-process data into discrete-time binary data, and (2) screening relevant couplings from the estimated couplings.
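Step (1) amounts to discretising spike times into a binary neuron-by-bin matrix. A minimal sketch follows; the bin size here is hand-picked for illustration, whereas the paper proposes determining it systematically:

    import numpy as np

    def binarize(spike_times, t_end, bin_size):
        # spike_times: list of arrays, one array of spike times per neuron
        n_bins = int(np.ceil(t_end / bin_size))
        X = np.zeros((len(spike_times), n_bins), dtype=np.int8)
        for i, times in enumerate(spike_times):
            idx = np.minimum((np.asarray(times) / bin_size).astype(int), n_bins - 1)
            X[i, idx] = 1    # 1 if the neuron fired at least once in the bin
        return X

    # Example: two neurons recorded for 1 s, binned at 10 ms.
    X = binarize([np.array([0.011, 0.530]), np.array([0.250])], t_end=1.0, bin_size=0.01)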
View Article and Find Full Text PDFNeural Comput
November 2020
This letter considers a class of biologically plausible cost functions for neural networks, where the same cost function is minimized by both neural activity and plasticity. We show that such cost functions can be cast as a variational bound on model evidence under an implicit generative model. Using generative models based on partially observed Markov decision processes (POMDPs), we show that neural activity and plasticity perform Bayesian inference and learning, respectively, by maximizing model evidence.
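The generative models in question factorise in the standard POMDP style (generic notation, not the letter's exact formulation):

    p(o_{1:T}, s_{1:T}) = p(s_1) \prod_{t=2}^{T} p(s_t \mid s_{t-1}) \prod_{t=1}^{T} p(o_t \mid s_t),

with neural activity inferring the hidden states s_t and synaptic plasticity learning the transition and likelihood mappings.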
To exhibit social intelligence, animals have to recognize whom they are communicating with. One way to make this inference is to select among internal generative models of each conspecific who may be encountered. However, these models also have to be learned via some form of Bayesian belief updating.
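Selecting among such models amounts to standard Bayesian model comparison (written here generically):

    p(m \mid o) = \frac{p(o \mid m)\, p(m)}{\sum_{m'} p(o \mid m')\, p(m')},

where each m indexes a generative model of a particular conspecific and p(o \mid m) is that model's evidence for the observed exchange.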
Animals need to adjust their inferences according to the context they are in. This is required for the multi-context blind source separation (BSS) task, where an agent needs to infer hidden sources from their context-dependent mixtures. The agent is expected to invert this mixing process for all contexts.
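Generically, the multi-context BSS problem can be written as

    x_t = A^{(c_t)} s_t, \qquad c_t \in \{1, \dots, K\},

where the mixing matrix depends on a latent context c_t, so the agent must infer both the sources s_t and, implicitly, the current context in order to invert the mixture.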
This paper considers the emergence of generalised synchrony in ensembles of coupled self-organising systems, such as neurons. We start from the premise that any self-organising system complies with the free energy principle, by virtue of placing an upper bound on its entropy. Crucially, the free energy principle allows one to interpret biological systems as inferring the state of their environment or external milieu.
In this work, we address the neuronal encoding problem from a Bayesian perspective. Specifically, we ask whether neuronal responses in an in vitro neuronal network are consistent with ideal Bayesian observer responses under the free energy principle. In brief, we stimulated an in vitro cortical cell culture with stimulus trains that had a known statistical structure.
View Article and Find Full Text PDFFront Comput Neurosci
October 2018
Humans have flexible control over cognitive functions depending on the context. Several studies suggest that the prefrontal cortex (PFC) controls this cognitive flexibility, but the detailed underlying mechanisms remain unclear. Recent developments in machine learning allow simple PFC models, written as recurrent neural networks, to perform various behavioral tasks in the way humans and animals do.
The mutual information between the state of a neural network and the state of the external world represents the amount of information stored in the neural network that is associated with the external world. In contrast, the surprise of the sensory input indicates the unpredictability of the current input. In other words, surprise serves as a measure of inference ability, and an upper bound on the surprise is known as the variational free energy.
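In generic notation (not the paper's exact formulation), with X the network state and Y the external-world state, the stored information and the bound read

    I(X; Y) = H(X) - H(X \mid Y), \qquad -\ln p(o) \;\le\; F = \mathbb{E}_{q}[\ln q(s) - \ln p(o, s)],

where the inequality holds for any recognition density q, which is what makes the variational free energy F a tractable upper bound on surprise.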
We developed a biologically plausible unsupervised learning algorithm, the error-gated Hebbian rule (EGHR)-β, that performs principal component analysis (PCA) and independent component analysis (ICA) in a single-layer feedforward neural network. When the parameter β = 1, the rule extracts the subspace spanned by the major principal components, similarly to Oja's subspace rule for PCA. When β = 0, it separates independent sources, similarly to the Bell-Sejnowski ICA rule, but without requiring equal numbers of input and output neurons.
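A schematic single-layer update conveys the flavour of an error-gated Hebbian rule; the functional forms of g and E below are illustrative assumptions, not the paper's exact EGHR-β definition:

    import numpy as np

    def error_gated_hebbian_step(W, x, eta=1e-3, E0=1.0):
        u = W @ x
        g = np.tanh(u)                   # output nonlinearity (assumed form)
        E = np.sum(np.log(np.cosh(u)))   # global scalar "error" of the outputs (assumed form)
        W += eta * (E0 - E) * np.outer(g, x)   # a global error gates a Hebbian term
        return W

In the paper, the parameter β interpolates between the PCA-like regime (β = 1) and the ICA-like regime (β = 0); its precise placement in the rule is not reproduced here.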
Background: Synchrony is thought to be a fundamental feature of neuronal networks. In order to quantify synchrony between spike trains, various synchrony measures have been developed. Most of them are time-scale dependent and thus require the setting of an appropriate time scale.
View Article and Find Full Text PDFCurr Opin Neurobiol
October 2017
Synaptic plasticity is a central theme in neuroscience. The framework of three-factor learning rules provides a powerful abstraction, helping to navigate the abundance of models of synaptic plasticity. It is well known that dopaminergic modulation of learning is related to reward, but theoretical models predict other functional roles for the modulatory third factor: it may encode errors for supervised learning, summary statistics of the population activity for unsupervised learning, or attentional feedback.
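The abstraction can be summarised in a single generic expression:

    \Delta w_{ij} = \eta \, M(t) \, f(x_j, y_i),

where f(x_j, y_i) is a Hebbian-style function of pre- and postsynaptic activity and M(t) is the modulatory third factor, carrying reward, error, population summary statistics, or attentional signals depending on the learning regime.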
Synapse elimination and neurite pruning are essential processes for the formation of neuronal circuits. These regressive events depend on neural activity and occur in the early postnatal days known as the critical period, but what establishes this temporal specificity is not well understood. One possibility is that neural activity during the developmentally regulated shift in the action of GABAergic inhibitory transmission gives rise to the critical period.
Objective: Adult neurogenesis in the hippocampus facilitates cognitive functions such as pattern separation in mammals. However, it remains unclear how newborn neurons mediate changes in neural networks to enhance pattern separation ability. Here, we developed an in vitro model of adult neurogenesis using rat hippocampal cultures in order to investigate whether newborn neurons can be directly incorporated into neural networks related to pattern separation to produce functional improvements.
The free-energy principle is a candidate unified theory for learning and memory in the brain, predicting that neurons, synapses, and neuromodulators work in a manner that minimizes free energy. However, electrophysiological data elucidating the neural and synaptic bases of this theory are lacking. Here, we propose a novel theory bridging this information-theoretical principle with the biological phenomenon of spike-timing-dependent plasticity (STDP) regulated by neuromodulators, which we term mSTDP.
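For reference, the classical STDP window that a modulated variant would scale is commonly written as

    \Delta w(\Delta t) =
    \begin{cases}
    A_{+}\, e^{-\Delta t / \tau_{+}}, & \Delta t > 0 \\
    -A_{-}\, e^{\Delta t / \tau_{-}}, & \Delta t < 0,
    \end{cases}

with \Delta t the post-minus-pre spike-timing difference; letting a neuromodulatory signal scale the amplitudes A_{\pm} is one natural reading of the modulated scheme, stated here as an assumption rather than the paper's exact formulation.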
Humans can separately recognize independent sources when they sense their superposition. This decomposition is mathematically formulated as independent component analysis (ICA). While a few biologically plausible learning rules, so-called local learning rules, have been proposed to achieve ICA, their performance varies depending on the parameters characterizing the mixed signals.
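The standard ICA formulation referenced here is

    x = A s, \qquad u = W x,

where the observations x are an unknown linear mixture A of statistically independent sources s, and learning seeks an unmixing matrix W that renders the components of u maximally independent.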
Blind source separation is the computation underlying the cocktail party effect: a partygoer can distinguish a particular talker's voice from the ambient noise. Early studies indicated that the brain might use blind source separation as a signal-processing strategy for sensory perception, and numerous mathematical models have been proposed; however, it remains unclear how neural networks extract particular sources from a complex mixture of inputs. We discovered that neurons in cultures of dissociated rat cortical cells could learn to represent particular sources while filtering out other signals.
Objective: Simplified neuronal circuits are required for investigating information representation in nervous systems and for validating theoretical neural network models. Here, we developed patterned neuronal circuits using microfabricated devices comprising a micro-well array bonded to a microelectrode-array substrate.
Approach: The micro-well array consisted of micrometre-scale wells connected by tunnels, all contained within a silicone slab called a micro-chamber.
Connection-strength estimation is widely used for detecting the topology of neuronal networks and assessing their synaptic plasticity. A recently proposed model-based method using the leaky integrate-and-fire neuron model estimates membrane potential from spike trains by calculating the maximum a posteriori (MAP) path. We further enhance the MAP-path method using variational Bayes and dynamic causal modeling.
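For reference, the leaky integrate-and-fire dynamics underlying such MAP-path estimates take the standard form

    \tau_m \frac{dV}{dt} = -(V - V_{\mathrm{rest}}) + R\, I(t),
    \qquad V \to V_{\mathrm{reset}} \ \text{when} \ V \ge V_{\mathrm{th}},

so estimating the membrane potential V from observed spike times amounts to finding the most probable (MAP) trajectory of V consistent with the threshold crossings.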