A common pitfall of current reinforcement learning agents implemented as computational models is their inability to adapt after optimization. Najarro and Risi [Najarro E, Risi S. Adv Neural Inf Process Syst 33: 20719-20731, 2020] demonstrate how such adaptability may be salvaged in artificial feed-forward networks by optimizing the coefficients of classic Hebbian rules to control the networks' weights dynamically, instead of optimizing the weights directly. Although such models fail to capture many important neurophysiological details, allying neuroscience and artificial intelligence in this way bears fruit for both fields, especially when computational models engage with topics with a rich history in neuroscience, such as Hebbian plasticity.
DOI: 10.1152/jn.00712.2020
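To make the idea concrete, here is a minimal sketch of a feed-forward layer whose weights are driven by a generalized Hebbian rule rather than optimized directly. The layer sizes, the A/B/C/D parameterization, and the learning rate below are illustrative assumptions, not Najarro and Risi's actual implementation; in their setup the per-synapse rule coefficients (here left random) are the quantities that evolution optimizes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4                              # toy layer sizes (assumed)
eta = 0.05                                      # plasticity rate (assumed)

# Weights start random and are never optimized directly.
W = rng.normal(0.0, 0.1, (n_out, n_in))

# Per-synapse coefficients of the generalized Hebbian rule
#   dw = eta * (A*pre*post + B*pre + C*post + D).
# In the paper these coefficients are what evolution optimizes;
# here they are random placeholders for illustration.
A, B, C, D = (rng.normal(0.0, 0.1, (n_out, n_in)) for _ in range(4))

def step(x, W):
    """Propagate activity, then let the Hebbian rule move the weights."""
    y = np.tanh(W @ x)
    pre, post = x[None, :], y[:, None]          # broadcast pre/post activity
    W = W + eta * (A * pre * post + B * pre + C * post + D)
    return y, W

# The weights keep changing during "deployment", which is what restores
# post-optimization adaptability.
for _ in range(10):
    y, W = step(rng.normal(size=n_in), W)
```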
Neural Netw
October 2024
Faculty of Mathematics and Information Science, Warsaw University of Technology, Warsaw, Poland.
This article investigates the application of spiking neural networks (SNNs) to the problem of topic modeling (TM): the identification of significant groups of words that represent human-understandable topics in large sets of documents. Our research is based on the hypothesis that an SNN that implements the Hebbian learning paradigm is capable of becoming specialized in the detection of statistically significant word patterns in the presence of adequately tailored sequential input. To support this hypothesis, we propose a novel spiking topic model (STM) that transforms text into a sequence of spikes and uses that sequence to train single-layer SNNs.
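As a rough illustration of the general idea (not the authors' STM: the winner-take-all competition, the binary word encoding, and all parameters below are assumptions), a single layer of competing neurons trained with a Hebbian rule can specialize on co-occurring word patterns:

```python
import numpy as np

rng = np.random.default_rng(1)

vocab = ["neuron", "spike", "plasticity", "market", "price", "trade"]
n_words, n_topics = len(vocab), 2
W = rng.uniform(0.0, 0.5, (n_topics, n_words))  # word -> topic-neuron synapses
eta = 0.1

def encode(doc):
    """Encode a document as a binary spike vector over the vocabulary."""
    x = np.zeros(n_words)
    for w in doc:
        x[vocab.index(w)] = 1.0
    return x

docs = [["neuron", "spike"], ["price", "trade"],
        ["spike", "plasticity"], ["market", "price"]] * 50

for doc in docs:
    x = encode(doc)
    v = W @ x                      # response of each topic neuron
    winner = np.argmax(v)          # lateral inhibition: one neuron spikes
    # Hebbian update with decay: synapses from co-active words grow,
    # synapses from silent words shrink, so the winner specializes.
    W[winner] += eta * (x - W[winner])

# Each neuron's strongest weights show the word group it latched onto.
for t in range(n_topics):
    top = np.argsort(W[t])[::-1][:3]
    print(f"topic neuron {t}:", [vocab[i] for i in top])
```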
J Neurosci
April 2024
Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, CEA, Université Paris-Saclay, NeuroSpin Center, 91190 Gif/Yvette, France.
Networks are a useful mathematical tool for capturing the complexity of the world. In a previous behavioral study, we showed that human adults were sensitive to the high-level network structure underlying auditory sequences, even when presented with incomplete information. Their performance was best explained by a mathematical model compatible with associative learning principles, based on integrating the transition probabilities between adjacent and nonadjacent elements, subject to memory decay.
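A minimal sketch of such an associative mechanism (the lags, lag weights, and decay constant below are illustrative assumptions, not the authors' fitted model) accumulates transition evidence at adjacent and nonadjacent lags while letting old evidence fade:

```python
import numpy as np

def learn_transitions(seq, n_states, decay=0.95, lags=(1, 2),
                      lag_weights=(1.0, 0.5)):
    """Online associative estimate of transition strengths with memory decay.

    Counts transitions at each lag (adjacent and nonadjacent), fading old
    evidence by `decay` at every step, then row-normalizes the result.
    """
    T = np.zeros((n_states, n_states))
    for t in range(len(seq)):
        T *= decay                              # memory decay
        for lag, w in zip(lags, lag_weights):
            if t - lag >= 0:
                T[seq[t - lag], seq[t]] += w    # associate past -> present
    row_sums = T.sum(axis=1, keepdims=True)
    return np.divide(T, row_sums, out=np.zeros_like(T), where=row_sums > 0)

# A toy random walk on a 4-node ring network:
rng = np.random.default_rng(2)
seq = [0]
for _ in range(500):
    seq.append((seq[-1] + rng.choice([-1, 1])) % 4)

print(np.round(learn_transitions(seq, 4), 2))
```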
PLoS Comput Biol
February 2024
École Polytechnique Fédérale de Lausanne, EPFL, Lausanne, Switzerland.
Cortical populations of neurons develop sparse representations adapted to the statistics of the environment. To learn efficient population codes, synaptic plasticity mechanisms must differentiate relevant latent features from spurious input correlations, which are omnipresent in cortical networks. Here, we develop a theory for sparse coding and synaptic plasticity that is invariant to second-order correlations in the input.
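To illustrate the flavor of such a rule (a hedged sketch only: the cubic nonlinearity, the explicit whitening step, and the toy mixing matrix are assumptions, not the paper's theory), a Hebbian update driven by higher-order statistics can recover a sparse latent direction once second-order correlations carry no information:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two latent sources: one sparse (Laplacian, the "relevant" feature) and
# one Gaussian; mixing introduces spurious second-order correlations.
n = 20000
S = np.stack([rng.laplace(size=n), rng.normal(size=n)])
A = np.array([[2.0, 0.6], [0.6, 1.0]])
X = A @ S

# Whitening removes all second-order structure, so whatever a plasticity
# rule learns afterwards must come from higher-order statistics.
C = np.cov(X)
evals, evecs = np.linalg.eigh(C)
X_white = evecs @ np.diag(evals ** -0.5) @ evecs.T @ X

# Nonlinear Hebbian rule (a cubic postsynaptic nonlinearity, one choice of
# many) with weight normalization; it tends toward the direction of the
# sparse, high-kurtosis source.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 1e-4
for x in X_white.T:
    y = w @ x
    w += eta * (y ** 3) * x   # higher-order Hebbian term
    w /= np.linalg.norm(w)    # keep the weight vector on the unit sphere

print("learned direction:", np.round(w, 2))
```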
In classical conditioning, the systematic co-occurrence of a neutral stimulus with an unconditioned stimulus can, over time, cause the neutral stimulus to evoke the same response as the unconditioned stimulus. On a neural level, Hebbian learning suggests that this type of learning occurs through changes in synaptic plasticity when two neurons are simultaneously active, resulting in increased connectivity between them. Inspired by associative learning theories, we here investigated whether the mere co-activation of visual stimuli and stimulation of the primary motor cortex using TMS would result in stimulus-response associations that can impact future behavior.
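The Hebbian account underlying this hypothesis can be stated in a few lines (a toy sketch with illustrative parameters, not a model of the study's TMS protocol): repeated co-activation strengthens the synapse from the formerly neutral input until it can drive the response on its own.

```python
# Minimal Hebbian account of conditioning (illustrative parameters):
# a CS unit and a US unit feed a response unit; the US alone drives the
# response, and co-activation strengthens the CS -> response synapse.
eta = 0.2
w_cs, w_us = 0.0, 1.0            # US connection is innate, CS starts silent

for _ in range(20):
    cs, us = 1.0, 1.0            # paired presentation
    r = w_cs * cs + w_us * us    # response activity
    w_cs += eta * cs * r         # Hebb: co-active pre and post strengthen
    w_cs = min(w_cs, 1.0)        # crude saturation bound

print("CS alone now evokes a response of", round(w_cs * 1.0, 2))
```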