Nanomaterials (Basel)
January 2025
Local learning algorithms, such as Equilibrium Propagation (EP), have emerged as alternatives to global learning methods like backpropagation for training neural networks. EP offers the potential for more energy-efficient hardware implementation because it uses only local neuron information for weight updates. However, the practical implementation of EP with memristor-based circuits faces significant challenges due to immature memristor fabrication processes, which lead to defects and device variability.
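As a rough illustration of what "local" means here, the sketch below shows the standard contrastive EP weight update (following Scellier and Bengio's original formulation, not the memristor circuits discussed in the abstract): the update for a weight needs only the states of the two units it connects, compared at two fixed points of the network. The function names, tanh nonlinearity, and NumPy setting are illustrative assumptions.

```python
import numpy as np

def ep_weight_update(s_free, s_nudged, beta, lr, rho=np.tanh):
    """Contrastive EP update comparing the free-phase and nudged-phase fixed points.

    s_free   : unit states at the free-phase equilibrium
    s_nudged : unit states at the weakly clamped (nudged) equilibrium
    beta     : nudging strength used in the clamped phase
    lr       : learning rate

    The update for w[i, j] depends only on the states of units i and j,
    which is what makes the rule local and attractive for hardware.
    """
    hebbian_nudged = np.outer(rho(s_nudged), rho(s_nudged))
    hebbian_free = np.outer(rho(s_free), rho(s_free))
    return lr * (hebbian_nudged - hebbian_free) / beta
```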
For processing streaming events from a Dynamic Vision Sensor (DVS) camera, two types of neural networks can be considered. The first is spiking neural networks, whose simple spike-based computation suits low-power operation, but the discontinuity of spikes complicates training in hardware. The second is digital Complementary Metal Oxide Semiconductor (CMOS)-based neural networks, which can be trained directly with the standard backpropagation algorithm.
Equilibrium propagation (EP) has recently been proposed as a neural network training algorithm based on local learning, in which only local information is used to compute the weight updates. Despite the advantages of local learning, the numerical iteration required to solve the EP dynamic equations makes the algorithm less practical for edge-intelligence hardware. Analog circuits have been suggested to solve the EP dynamic equations physically, rather than numerically, using the original EP algorithm.
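To make the cost of that numerical iteration concrete, the sketch below relaxes a Hopfield-like EP energy to its fixed point with explicit Euler steps; this is the digital solve that analog EP circuits aim to replace by letting a physical network settle. The specific energy form, tanh activation, and parameter names are illustrative assumptions, not the circuits proposed in the paper.

```python
import numpy as np

def relax_to_fixed_point(s, W, b, steps=200, eps=0.1):
    """Numerically iterate the EP state dynamics ds/dt = -dE/ds to a fixed point.

    Assumed Hopfield-like energy with symmetric weights W and biases b:
        E(s) = 0.5*||s||^2 - 0.5*rho(s)^T W rho(s) - b^T rho(s)
    Each call to this routine (once per phase, per example) is the iterative
    solve that makes digital EP costly; an analog circuit reaches the same
    equilibrium through its own physical dynamics.
    """
    rho = np.tanh
    drho = lambda u: 1.0 - np.tanh(u) ** 2
    for _ in range(steps):
        dE_ds = s - drho(s) * (W @ rho(s) + b)   # gradient of the energy w.r.t. the state
        s = s - eps * dE_ds                      # Euler step toward equilibrium
    return s
```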