A self-training spiking superconducting neuromorphic architecture.

Npj Unconv Comput

Department of Computer Science, Colorado State University, Fort Collins, CO 80523 USA.

Published: March 2025

Neuromorphic computing takes biological inspiration to the device level, aiming to improve computational efficiency and capabilities. One of the major issues that arises is the training of neuromorphic hardware systems. Training algorithms typically require global information and are thus inefficient to implement directly in hardware. In this paper we describe a set of reinforcement-learning-based, local weight update rules and their implementation in superconducting hardware. Using SPICE circuit simulations, we implement a small-scale neural network with a learning time on the order of one nanosecond per update. The network can be trained to learn new functions simply by changing the target output for a given set of inputs, without any external adjustments to the network. Further, this architecture does not require programming explicit weight values into the network, alleviating a critical challenge for analog hardware implementations of neural networks.
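The paper's update rules are realized at the superconducting circuit level and are not reproduced in this abstract. The sketch below is a hypothetical software analogue, illustrating only the two properties the abstract claims: each synapse updates from purely local signals plus a broadcast scalar reward (no global gradient), and retraining to a new function requires only changing the target outputs. The specific rule, the learning rate, the threshold, and all names here are illustrative assumptions, not the authors' circuit design.

```python
import numpy as np

rng = np.random.default_rng(0)
ETA = 0.1     # learning rate (hypothetical value)
THETA = 0.5   # fixed firing threshold (hypothetical value)

def fire(w, x):
    """Binary 'spike': the unit fires if the weighted input crosses THETA."""
    return 1 if np.dot(w, x) > THETA else 0

def train(w, inputs, targets, epochs=100):
    """Each synapse i updates from purely local signals: its own input x[i],
    the unit's output y, and a broadcast scalar reward r."""
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = fire(w, x)
            r = 0.0 if y == t else -1.0   # punish-on-error reward signal
            # Local rule: push w toward firing when the unit wrongly stayed
            # silent, and away from firing when it wrongly fired.
            w += ETA * r * (2 * y - 1) * x
    return w

inputs = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
w = rng.normal(0.0, 0.1, size=2)  # weights are never programmed explicitly

# Train on OR, then retarget to AND by changing only the target column,
# echoing the abstract's claim that no external weight adjustment is needed.
w = train(w, inputs, targets=np.array([0, 1, 1, 1]))   # OR
print([fire(w, x) for x in inputs])                    # -> [0, 1, 1, 1]
w = train(w, inputs, targets=np.array([0, 0, 0, 1]))   # AND
print([fire(w, x) for x in inputs])                    # -> [0, 0, 0, 1]
```

In this toy version the "reward" is computed from a known target, so it reduces to an error-driven perceptron-style rule; the point of the sketch is only that every weight change depends on quantities available at the synapse plus one broadcast scalar, which is the locality property that makes such rules attractive for direct hardware implementation.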


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11879878
DOI: http://dx.doi.org/10.1038/s44335-025-00021-9

Publication Analysis

Top Keywords

self-training spiking (4); spiking superconducting (4); superconducting neuromorphic (4); neuromorphic architecture (4); architecture neuromorphic (4); neuromorphic computing (4); computing takes (4); takes biological (4); biological inspiration (4); inspiration device (4)

Similar Publications


While there is an abundance of research on neural networks that are "inspired" by the brain, few mimic the critical temporal compute features that allow the brain to efficiently perform complex computations. Even fewer methods emulate the heterogeneity of learning produced by biological neurons. Memory devices, such as memristors, are also investigated for their potential to implement neuronal functions in electronic hardware.

