Training networks consisting of biophysically accurate neuron models could allow for new insights into how brain circuits can organize and solve tasks. We begin by analyzing the extent to which the central algorithm for neural network learning -- stochastic gradient descent through backpropagation (BP) -- can be used to train such networks. We find that the properties of biophysically based neural network models needed for accurate modelling, such as stiffness, high nonlinearity, and long evaluation timeframes relative to spike times, make BP unstable and divergent in a variety of cases. To address these instabilities, and inspired by recent work, we investigate the use of "gradient-estimating" evolutionary algorithms (EAs) for training biophysically based neural networks. We find that EAs have several advantages that make them desirable over direct BP, including being forward-pass only, robust to noisy and rigid losses, allowing for discrete loss formulations, and potentially facilitating a more global exploration of parameters. We apply our method to train a recurrent network of Morris-Lecar neuron models on a stimulus integration and working memory task, and show how it can succeed in cases where direct BP is inapplicable. To expand on the viability of EAs in general, we apply them to a general neural ODE problem and a stiff neural ODE benchmark, and find again that EAs can outperform direct BP, especially in the over-parameterized regime. Our findings suggest that biophysical neurons could provide useful benchmarks for testing the limits of BP-adjacent methods, and demonstrate the viability of EAs for training networks with complex components.
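The abstract does not spell out the estimator, but a standard "gradient-estimating" EA of the kind it describes is an antithetic evolution-strategies estimate, which approximates the gradient from forward passes alone. A minimal sketch (the function names and the toy quadratic loss are illustrative, not from the paper):

```python
import numpy as np

def es_gradient(f, theta, sigma=0.1, n_pairs=50, rng=None):
    """Antithetic evolution-strategies gradient estimate.

    Uses only forward evaluations of f (no backprop), so f may be
    noisy, non-differentiable, or produced by a stiff ODE solver.
    Estimate: g ~ sum_i [f(theta + s*e_i) - f(theta - s*e_i)] e_i / (2*s*N).
    """
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal((n_pairs, theta.size))
    plus = np.array([f(theta + sigma * e) for e in eps])
    minus = np.array([f(theta - sigma * e) for e in eps])
    return (plus - minus) @ eps / (2 * sigma * n_pairs)

# Toy demo: minimise a quadratic using ES "gradients" only.
theta = np.array([3.0, -2.0])
loss = lambda th: float(np.sum(th ** 2))
for _ in range(200):
    theta = theta - 0.1 * es_gradient(loss, theta)
```

Because each perturbation is evaluated independently, the population of forward passes parallelizes trivially, which is part of what makes EAs attractive for expensive biophysical simulations.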

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10690297


Similar Publications

Fuzzy logic applied to tunning mutation size in evolutionary algorithms.

Sci Rep

January 2025

Faculty of Physics and Applied Informatics, University of Łódź, Pomorska 149/153, Łódź, 90-236, Poland.

Tuning of parameters is an important but complex issue in the design of Evolutionary Algorithms. The paper discusses a new, Fuzzy Logic-based approach to tuning mutation size in these algorithms: data on the evolution collected in prior generations are used to tune the size of mutations.
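The snippet does not give the fuzzy controller itself, but the general idea of adapting mutation size from prior-generation outcomes is classic; Rechenberg's 1/5th success rule is the simplest (non-fuzzy) instance, sketched here on a toy sphere function:

```python
import random

def one_fifth_rule(f, x, sigma=1.0, iters=300, rng=None):
    """(1+1)-ES with Rechenberg's 1/5th success rule: grow the
    mutation size after a successful offspring, shrink it after a
    failure, so successes settle near a ~1/5 rate."""
    rng = rng or random.Random(1)
    best = f(x)
    for _ in range(iters):
        cand = [xi + rng.gauss(0, sigma) for xi in x]
        if f(cand) < best:
            best, x = f(cand), cand
            sigma *= 1.5           # success: explore more widely
        else:
            sigma *= 1.5 ** -0.25  # failure: shrink the step size
    return x, best

x, best = one_fifth_rule(lambda v: sum(t * t for t in v), [5.0, -3.0])
```

A fuzzy controller, as in the paper, replaces the fixed 1.5 multipliers with rules inferred from statistics of prior generations.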

Unmanned aerial vehicle (UAV) path planning is a constrained multi-objective optimization problem. With the increasing scale of UAV applications, finding an efficient and safe path in complex real-world environments is crucial. However, existing particle swarm optimization (PSO) algorithms struggle with these problems as they fail to consider UAV dynamics, resulting in many infeasible solutions and poor convergence to optimal solutions.

Persistent, Private and Mobile genes: a model for gene dynamics in evolving pangenomes.

Mol Biol Evol

January 2025

Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS, INSERM, Université PSL, Paris, France.

The pangenome of a species is the set of all genes carried by at least one member of the species. In bacteria, pangenomes can be much larger than the set of genes carried by a single organism. Many questions remain unanswered regarding the evolutionary forces shaping the patterns of presence/absence of genes in pangenomes of a given species.

MOANA: Multi-objective ant nesting algorithm for optimization problems.

Heliyon

January 2025

Centre for Artificial Intelligence Research and Optimisation, Torrens University, Brisbane, QLD 4006, Australia.

This paper presents the Multi-Objective Ant Nesting Algorithm (MOANA), a novel extension of the Ant Nesting Algorithm (ANA), specifically designed to address multi-objective optimization problems (MOPs). MOANA incorporates adaptive mechanisms, such as deposition weight parameters, to balance exploration and exploitation, while a polynomial mutation strategy ensures diverse and high-quality solutions. The algorithm is evaluated on standard benchmark datasets, including ZDT functions and the IEEE Congress on Evolutionary Computation (CEC) 2019 multi-modal benchmarks.

Feature selection (FS) is a critical step in hyperspectral image (HSI) classification, essential for reducing data dimensionality while preserving classification accuracy. However, FS for HSIs remains an NP-hard challenge, as existing swarm intelligence and evolutionary algorithms (SIEAs) often suffer from limited exploration capabilities or susceptibility to local optima, particularly in high-dimensional scenarios. To address these challenges, we propose GWOGA, a novel hybrid algorithm that combines Grey Wolf Optimizer (GWO) and Genetic Algorithm (GA), aiming to achieve an effective balance between exploration and exploitation.
