Long-term potentiation (LTP) is a synaptic mechanism involved in learning and memory. Experiments have shown that dendritic sodium spikes (Na-dSpikes) are required for LTP in the distal apical dendrites of CA1 pyramidal cells. In perisomatic dendrites, by contrast, LTP can be induced by synaptic input patterns that are either subthreshold or suprathreshold for Na-dSpikes.
Modeling long-term neuronal dynamics may require running long-lasting simulations. Such simulations are computationally expensive, so it is advantageous to use simplified models that sufficiently reproduce the properties of the real neuron. Reducing the complexity of the neuronal dendritic tree is one option.
Long-term potentiation (LTP) and long-term depression (LTD) are widely accepted to be synaptic mechanisms involved in learning and memory. It remains uncertain, however, which particular activity rules are utilized by hippocampal neurons to induce LTP and LTD in behaving animals. Recent experiments in the dentate gyrus of freely moving rats revealed an unexpected pattern of LTP and LTD from high-frequency perforant path stimulation.
The long-lasting enhancement of synaptic effectiveness known as long-term potentiation (LTP) is considered to be the cellular basis of long-term memory. LTP elicits changes at the cellular and molecular level, including temporally specific alterations in gene networks. LTP can be seen as a biological process in which a transient signal sets a new homeostatic state that is "remembered" by cellular regulatory systems.
Computational models of metaplasticity have usually focused on modeling single synapses (Shouval et al., 2002). In this paper we study the effect of metaplasticity on network behavior.
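The sliding-threshold idea behind such models can be made concrete in a few lines. The following is a minimal, rate-based sketch of a BCM-style rule in the spirit of Shouval et al. (2002); the parameter values and the exact threshold dynamics are illustrative assumptions, not the model studied in the paper.

```python
# Minimal BCM-style metaplasticity sketch; all constants are assumed values.
def bcm_step(w, pre, post, theta, eta=0.01, tau_theta=100.0, dt=1.0):
    """One update of synaptic weight w and sliding modification threshold theta."""
    dw = eta * pre * post * (post - theta)    # LTP if post > theta, LTD otherwise
    dtheta = (post ** 2 - theta) / tau_theta  # theta tracks recent postsynaptic activity
    return w + dw * dt, theta + dtheta * dt

w, theta = 0.5, 1.0
for _ in range(1000):
    w, theta = bcm_step(w, pre=1.0, post=1.5, theta=theta)
# As theta climbs toward post**2, further potentiation is progressively damped:
# the plasticity rule itself changes with activity history, i.e. metaplasticity.
```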
The posterior-anterior shift in aging (PASA) is a commonly observed phenomenon in functional neuroimaging studies of aging, characterized by age-related reductions in occipital activity alongside increases in frontal activity. In this work we investigated whether the PASA is also manifested in functional brain network measures such as degree, clustering coefficient, path length, and local efficiency. We performed statistical analysis on functional networks derived from an fMRI dataset containing data from healthy young, healthy aged, and aged individuals with very mild to mild Alzheimer's disease (AD).
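As a rough illustration of how such measures can be obtained, the sketch below builds a functional network by thresholding a correlation matrix of synthetic time series and computes the four measures with networkx; the region count, threshold, and binarization scheme are assumptions, not the paper's actual pipeline.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
ts = rng.standard_normal((90, 200))                    # 90 regions x 200 time points (synthetic)
corr = np.corrcoef(ts)                                 # region-by-region correlation matrix
adj = (np.abs(corr) > 0.3) & ~np.eye(90, dtype=bool)   # binarize, drop self-connections
G = nx.from_numpy_array(adj.astype(int))

degree = dict(G.degree())
clustering = nx.clustering(G)
local_eff = nx.local_efficiency(G)
# Average shortest path length is defined on the largest connected component.
giant = G.subgraph(max(nx.connected_components(G), key=len))
path_len = nx.average_shortest_path_length(giant)
```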
A significant feature of spiking neural networks with varying connection delays, such as those in the brain, is the existence of strongly connected groups of neurons known as polychronous neural groups (PNGs). Polychronous groups are found in large numbers in these networks and are proposed by Izhikevich (2006a) to provide a neural basis for representation and memory. When exposed to a familiar stimulus, spiking neural networks produce consistencies in the spiking output data that are the hallmarks of PNG activation.
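The notion of polychrony, time-locked but not synchronous firing, can be shown with a toy example in which firing times exactly compensate for conduction delays so that spikes arrive at a downstream neuron together; the delays and times here are made up.

```python
delays = {"A": 5.0, "B": 2.0, "C": 8.0}                 # ms from each neuron to a common target
fire_times = {n: 10.0 - d for n, d in delays.items()}   # asynchronous but time-locked firing
arrivals = {n: fire_times[n] + delays[n] for n in delays}
assert all(abs(t - 10.0) < 1e-9 for t in arrivals.values())  # spikes coincide at the target
```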
In this article we present a neural network model of sentence generation. The network has both technical and conceptual innovations. Its main technical novelty lies in its semantic representations: the messages that form the input to the network are structured as sequences, so that message elements are delivered to the network one at a time.
This paper presents a new modular and integrative sensory information system inspired by the way the brain performs information processing, in particular, pattern recognition. Spiking neural networks are used to model human-like visual and auditory pathways. This bimodal system is trained to perform the specific task of person authentication.
The paper introduces a novel computational approach to brain dynamics modeling that integrates dynamic gene-protein regulatory networks with a neural network model. The interaction of genes and proteins in neurons affects the dynamics of the whole neural network. By tuning the gene-protein interaction network and the initial gene/protein expression values, different states of the neural network dynamics can be achieved.
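As an illustration only (not the authors' formulation), a gene/protein interaction network can be coupled to a neuronal parameter along the following lines, with the interaction matrix and the readout mapping chosen arbitrarily.

```python
import numpy as np

def gene_step(g, W, dt=0.1):
    """Discrete-time gene/protein expression update; W encodes gene-gene interactions."""
    return np.clip(g + dt * (np.tanh(W @ g) - g), 0.0, 1.0)

rng = np.random.default_rng(1)
n_genes = 5
W = rng.standard_normal((n_genes, n_genes)) * 0.5   # assumed interaction weights
g = rng.random(n_genes)                             # initial expression levels

for _ in range(100):
    g = gene_step(g, W)

# Example readout: a neuronal parameter that is no longer constant but a
# function of the current gene expression state (mapping chosen arbitrarily).
firing_threshold = 10.0 + 5.0 * g[0]
```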
Heterosynaptic long-term depression (LTD) is conventionally defined as occurring at synapses that are inactive during a time when neighboring synapses are activated by high-frequency stimulation. A new model that combines computational properties of both the Bienenstock, Cooper and Munro model and spike timing-dependent plasticity, however, suggests that such LTD actually may require presynaptic activity in the depressed pathway. We tested experimentally whether presynaptic activity is in fact necessary for previously described heterosynaptic LTD in lateral perforant path synapses in the dentate gyrus of urethane-anesthetized rats.
We have combined the nearest neighbour additive spike-timing-dependent plasticity (STDP) rule with the Bienenstock, Cooper and Munro (BCM) sliding modification threshold in a computational model of heterosynaptic plasticity in the hippocampal dentate gyrus. As a result we can reproduce (1) homosynaptic long-term potentiation of the tetanized input, and (2) heterosynaptic long-term depression of the untetanized input, as observed in real experiments.
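A minimal sketch of such a combination is given below, assuming exponential STDP windows and a threshold that slides toward the recent postsynaptic rate; how the threshold scales the LTP and LTD amplitudes is an assumption made for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012      # base STDP amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # STDP time constants in ms (assumed)

def stdp_dw(dt_spike, theta):
    """Weight change for one nearest-neighbour pre/post spike pairing.

    dt_spike = t_post - t_pre (ms). The sliding threshold theta shifts the
    LTP/LTD balance: high recent activity favours depression, low activity
    favours potentiation.
    """
    if dt_spike > 0:   # pre before post -> potentiation
        return (A_PLUS / theta) * np.exp(-dt_spike / TAU_PLUS)
    else:              # post before (or with) pre -> depression
        return -(A_MINUS * theta) * np.exp(dt_spike / TAU_MINUS)

def slide_theta(theta, post_rate, tau_theta=1000.0, dt=1.0):
    """Move the modification threshold toward the recent postsynaptic rate."""
    return theta + dt * (post_rate - theta) / tau_theta
```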
The paper presents a methodology for using computational neurogenetic modelling (CNGM) to bring new insights into how genes influence the dynamics of brain neural networks. CNGM is a novel computational approach to brain neural network modelling that integrates dynamic gene networks with an artificial neural network (ANN) model. The interaction of genes in neurons affects the dynamics of the whole ANN model through neuronal parameters, which are no longer constant but change as a function of gene expression.
Recurrent neural networks are often employed in the cognitive science community to process symbol sequences that represent various natural language structures. The aim is to study possible neural mechanisms of language processing and aid in development of artificial language processing systems. We used data sets containing recursive linguistic structures and trained the Elman simple recurrent network (SRN) for the next-symbol prediction task.
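For readers unfamiliar with the architecture, a schematic forward pass of an Elman SRN for next-symbol prediction is sketched below in numpy; the vocabulary and hidden-layer sizes, the initialization, and the training loop are omitted or assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sym, n_hid = 5, 10                             # assumed vocabulary and hidden sizes
W_in = rng.uniform(-0.1, 0.1, (n_hid, n_sym))
W_rec = rng.uniform(-0.1, 0.1, (n_hid, n_hid))   # recurrent (context) weights
W_out = rng.uniform(-0.1, 0.1, (n_sym, n_hid))

def srn_step(x_onehot, h_prev):
    """One step: new hidden state from current input plus the previous hidden state."""
    h = 1.0 / (1.0 + np.exp(-(W_in @ x_onehot + W_rec @ h_prev)))
    logits = W_out @ h
    p_next = np.exp(logits) / np.exp(logits).sum()   # distribution over the next symbol
    return h, p_next

h = np.zeros(n_hid)
for sym in [0, 2, 1, 3]:                         # toy symbol sequence
    h, p_next = srn_step(np.eye(n_sym)[sym], h)
```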
In this paper, we elaborate upon the claim that clustering in the recurrent layer of recurrent neural networks (RNNs) reflects meaningful information-processing states even prior to training [1], [2]. By concentrating on activation clusters in RNNs, without discarding the continuous state-space dynamics of the network, we extract predictive models that we call neural prediction machines (NPMs). When RNNs with sigmoid activation functions are initialized with small weights (a common technique in the RNN community), the clusters of recurrent activations emerging prior to training are indeed meaningful and correspond to Markov prediction contexts.
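Roughly, an NPM can be sketched as follows, assuming k-means clustering over the recurrent activations collected while driving the (possibly untrained) network over a sequence, and a counts-based estimate of next-symbol probabilities per cluster; both choices are illustrative rather than those of [1], [2].

```python
import numpy as np
from sklearn.cluster import KMeans

def build_npm(hidden_states, next_symbols, n_clusters=8, n_sym=5):
    """Cluster recurrent activations; estimate P(next symbol | cluster) from counts."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(hidden_states)
    counts = np.zeros((n_clusters, n_sym))
    for c, s in zip(km.labels_, next_symbols):
        counts[c, s] += 1
    probs = counts / counts.sum(axis=1, keepdims=True)
    return km, probs

rng = np.random.default_rng(0)
H = rng.random((200, 10))          # stand-in for collected hidden-state vectors
s = rng.integers(0, 5, size=200)   # stand-in for the observed next symbols
km, probs = build_npm(H, s)
```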