In spiking neural P (SN P) systems, neurons are interconnected by synapses and communicate with one another by means of spikes. In biology, however, the complex structure of the dendritic tree is also an important part of the communication scheme between neurons, since these structures are linked to advanced neural processes such as learning and memory formation. In this work, we present a new variant of SN P systems inspired by dendrite and axon phenomena such as dendritic feedback, the dendritic trunk, dendritic delays and axonal delays. This new variant is referred to as a spiking neural P system with dendritic and axonal computation (DACSN P system). Specifically, we include experimentally proven biological features in SN P systems to reduce the computational complexity of the soma by providing it with stable firing patterns through dendritic delays, dendritic feedback and axonal delays. As a consequence, the proposed DACSN P systems use a minimal number of synapses and neurons with simple, homogeneous standard spiking rules. Here, we study the computational capabilities of DACSN P systems. In particular, we prove that DACSN P systems with dendritic and axonal behavior are universal both as number-accepting and as number-generating devices. In addition, we construct a small universal SN P system that uses 39 neurons with standard spiking rules to compute any Turing-computable function.
DOI: http://dx.doi.org/10.1016/j.neunet.2021.02.010
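As a rough, hedged illustration of the kind of mechanism described in the abstract above, the Python sketch below simulates a single neuron with a standard spiking rule whose incoming spikes are buffered by a dendritic delay and whose outgoing spikes are buffered by an axonal delay. The class name, the parameter values and the particular rule (accumulate two spikes, then fire one) are illustrative assumptions, not the construction used in the paper.

```python
from collections import deque


class DelayedNeuron:
    """Toy neuron with a standard spiking rule plus dendritic and axonal delays.

    The rule used here (fire one spike and reset when at least `threshold`
    spikes have accumulated) is an illustrative stand-in, not one of the
    rules from the DACSN P construction.
    """

    def __init__(self, threshold=2, dendritic_delay=1, axonal_delay=2):
        self.threshold = threshold
        # Buffers model spikes "in transit" along the dendrite and the axon.
        self.dendrite = deque([0] * dendritic_delay)
        self.axon = deque([0] * axonal_delay)
        self.charge = 0  # spikes currently stored in the soma

    def step(self, incoming_spikes):
        # Incoming spikes enter the dendritic pipeline first.
        self.dendrite.append(incoming_spikes)
        self.charge += self.dendrite.popleft()

        # Standard spiking rule: consume `threshold` spikes, emit one spike.
        fired = 0
        if self.charge >= self.threshold:
            self.charge -= self.threshold
            fired = 1

        # The emitted spike travels down the axon before reaching synapses.
        self.axon.append(fired)
        return self.axon.popleft()


if __name__ == "__main__":
    neuron = DelayedNeuron(threshold=2, dendritic_delay=1, axonal_delay=2)
    inputs = [1, 1, 0, 1, 1, 0, 0, 0]
    outputs = [neuron.step(s) for s in inputs]
    print(outputs)  # output spikes appear only after both delays have elapsed
```

Running the script shows that output spikes appear only after both delay pipelines have been traversed, which is the kind of stable, time-shifted firing pattern the dendritic and axonal delays are meant to provide.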
Adaptive behavior depends on the ability to predict specific events, particularly those related to rewards. Armed with such associative information, we can infer the current value of predicted rewards based on changing circumstances and desires. To support this ability, neural systems must represent both the value and identity of predicted rewards, and these representations must be updated when they change.
Sci Rep
January 2025
Department of Computer Science, College of Computer and Information Sciences, Majmaah University, 11952, Al-Majmaah, Saudi Arabia.
The rapid expansion of IoT networks, combined with the flexibility of Software-Defined Networking (SDN), has significantly increased the complexity of traffic management, requiring accurate classification to ensure optimal quality of service (QoS). Existing traffic classification techniques often rely on manual feature selection, limiting adaptability and efficiency in dynamic environments. This paper presents a novel traffic classification framework for SDN-based IoT networks, introducing a Two-Level Fused Network integrated with a self-adaptive Manta Ray Foraging Optimization (SMRFO) algorithm.
Sci Rep
January 2025
Information and Communication Engineering, Yeungnam University, Gyeongsan, 38541, Republic of Korea.
Model optimization is a central concern and challenge when developing an image classification model. In image classification, selecting appropriate hyperparameters can substantially boost the model's ability to learn intricate patterns and features from complex image data. Hyperparameter optimization also helps to prevent overfitting by finding the right balance between a model's complexity and its generalization.
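As a generic, hedged sketch of what hyperparameter optimization looks like in practice (not the authors' method), the snippet below runs a plain random search over a small, assumed search space; `train_and_evaluate` stands in for whatever training-and-validation routine the model uses and is hypothetical here.

```python
import random

# Hypothetical search space; the paper's actual hyperparameters and ranges
# are not given in the snippet above, so these are illustrative choices.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "batch_size": [32, 64, 128],
    "dropout": [0.0, 0.25, 0.5],
    "weight_decay": [0.0, 1e-5, 1e-4],
}


def sample_config(space):
    """Draw one random hyperparameter configuration from the space."""
    return {name: random.choice(values) for name, values in space.items()}


def random_search(train_and_evaluate, n_trials=20, seed=0):
    """Plain random search: keep the configuration with the best validation
    score. `train_and_evaluate` is a user-supplied (hypothetical) function
    that trains a model with the given config and returns its accuracy."""
    random.seed(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = sample_config(SEARCH_SPACE)
        score = train_and_evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score


if __name__ == "__main__":
    # Stand-in objective so the sketch runs without a dataset or model.
    def fake_objective(config):
        return -abs(config["learning_rate"] - 1e-3) - 0.01 * config["dropout"]

    print(random_search(fake_objective, n_trials=10))
```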
Front Comput Neurosci
December 2024
Sussex AI, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom.
We present a Spiking Neural Network (SNN) model that incorporates learnable synaptic delays through two approaches: per-synapse delay learning via Dilated Convolutions with Learnable Spacings (DCLS) and a dynamic pruning strategy that also serves as a form of delay learning. In the latter approach, the network dynamically selects and prunes connections, optimizing the delays in sparse connectivity settings. We evaluate both approaches on the Raw Heidelberg Digits keyword spotting benchmark using Backpropagation Through Time with surrogate gradients.
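To make the notion of per-synapse delays concrete, here is a hedged NumPy sketch of a forward pass in which every synapse shifts its presynaptic spike train by its own integer delay. It is deliberately simplified: DCLS instead represents each delay as a learnable, real-valued position inside a dilated convolution kernel so that it can be trained with surrogate-gradient backpropagation through time, and the pruning strategy mentioned in the abstract is not shown.

```python
import numpy as np


def delayed_weighted_sum(spikes, weights, delays):
    """Forward pass of a layer whose synapses each impose an integer delay.

    spikes  : (T, n_in)     binary spike trains over T time steps
    weights : (n_out, n_in) synaptic weights
    delays  : (n_out, n_in) integer delays (in time steps) per synapse

    Illustration only: each synapse delivers its presynaptic spike train
    shifted by its own delay before the weighted sum is formed.
    """
    T, n_in = spikes.shape
    n_out = weights.shape[0]
    out = np.zeros((T, n_out))
    for j in range(n_out):
        for i in range(n_in):
            d = int(delays[j, i])
            if d < T:
                # Shift the presynaptic train by its synapse-specific delay.
                out[d:, j] += weights[j, i] * spikes[: T - d, i]
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    spikes = (rng.random((100, 8)) < 0.1).astype(float)
    weights = rng.normal(size=(4, 8))
    delays = rng.integers(0, 20, size=(4, 8))
    print(delayed_weighted_sum(spikes, weights, delays).shape)  # (100, 4)
```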
Natl Sci Rev
January 2025
School of Astronautics, Beihang University, Beijing 100191, China.
The pursuit of artificial neural networks that mirror the accuracy, efficiency and low latency of biological neural networks remains a cornerstone of artificial intelligence (AI) research. Here, we incorporated recent neuroscientific findings on self-inhibiting autapses and neuron heterogeneity to devise a spiking neural network (SNN) with enhanced learning and memory capacities. A bi-level programming paradigm was formulated to learn neuron-level biophysical variables and network-level synapse weights, respectively, for nested heterogeneous learning.
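As a minimal, hedged sketch of the two biological ingredients named in this abstract, the snippet below simulates leaky integrate-and-fire neurons whose own spikes feed back through a negative (self-inhibiting) autapse and whose membrane time constants differ from neuron to neuron. All parameter names and values are assumptions, and the bi-level procedure that would learn the neuron-level variables and the synaptic weights is omitted.

```python
import numpy as np


def simulate_lif_autapse(inputs, tau, w_autapse=-0.5, v_th=1.0, dt=1.0):
    """Leaky integrate-and-fire neurons with a self-inhibiting autapse.

    inputs    : (T, N) input currents for N neurons over T time steps
    tau       : (N,)   per-neuron membrane time constants (the heterogeneity)
    w_autapse : weight of the self-connection; negative => self-inhibition

    A minimal sketch: each neuron's own spike at step t-1 is fed back as an
    inhibitory current at step t.
    """
    T, N = inputs.shape
    v = np.zeros(N)
    last_spike = np.zeros(N)
    spikes = np.zeros((T, N))
    for t in range(T):
        # Leaky integration plus the delayed self-inhibitory feedback.
        current = inputs[t] + w_autapse * last_spike
        v = v + dt / tau * (-v + current)
        fired = v >= v_th
        spikes[t] = fired
        v = np.where(fired, 0.0, v)  # reset membrane potential after a spike
        last_spike = fired.astype(float)
    return spikes


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    tau = rng.uniform(5.0, 20.0, size=16)         # heterogeneous time constants
    inputs = rng.uniform(0.0, 2.0, size=(200, 16))
    print(simulate_lif_autapse(inputs, tau).mean())  # mean firing rate
```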