The rapid advancement of new technologies has produced a surge of data, while conventional computers are approaching their computational limits. In the prevalent von Neumann architecture, where processing and storage units operate separately, data must shuttle between the two across a bus, which slows computation and wastes energy. Ongoing research therefore aims to enhance computing capability through novel chips and new system architectures.
IEEE Trans Neural Netw Learn Syst, March 2024
Quantizing synaptic weights in emerging nonvolatile memory (NVM) devices is a promising route to computationally efficient neural networks on resource-constrained hardware. In practice, however, such synaptic weights are hampered by imperfect memory characteristics: only a limited number of quantized states is available, and writing a synaptic state involves large intrinsic device variation and stochasticity. This article presents on-chip training and inference of a neural network using a quantized magnetic domain wall (DW)-based synaptic array and CMOS peripheral circuits.
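To make the write-imperfection problem concrete, here is a minimal Python sketch of programming such a quantized synapse. It assumes a hypothetical device with 8 evenly spaced conductance levels and Gaussian write noise; the state count, weight range, and noise magnitude are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def write_synapse(target, n_states=8, w_min=-1.0, w_max=1.0, sigma=0.05):
    """Program one synapse: snap the target weight to the nearest of
    n_states evenly spaced levels, then add Gaussian write noise to
    mimic intrinsic device variation and write stochasticity.
    (All device parameters here are illustrative assumptions.)"""
    levels = np.linspace(w_min, w_max, n_states)        # available quantized states
    nearest = levels[np.abs(levels - target).argmin()]  # ideal programmed level
    noisy = nearest + rng.normal(0.0, sigma)            # stochastic write error
    return float(np.clip(noisy, w_min, w_max))          # device saturates at its bounds

# Program a small weight matrix and measure the resulting write error.
ideal = rng.uniform(-1.0, 1.0, size=(4, 4))
programmed = np.array([[write_synapse(w) for w in row] for row in ideal])
print("mean |error|:", np.abs(programmed - ideal).mean())
```

In a simulated on-chip training loop, every weight update would pass through a write model of this kind, so the network must tolerate both the quantization error and the write-to-write variation that the abstract identifies.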
Stochastic neuromorphic computation (SNC) has the potential to enable a low-power, error-tolerant, and scalable computing platform compared with its deterministic counterparts. However, hardware implementations of complementary metal oxide semiconductor (CMOS)-based stochastic circuits require conversion blocks that cost more than the actual processing circuits. Realizing the activation function for SNCs also requires a complicated circuit, which incurs significant power dissipation and area overhead.
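The point about conversion overhead is easiest to see in software. Below is a minimal Python sketch of unipolar stochastic computing, in which values in [0, 1] are encoded as random bitstreams and multiplication collapses to a bitwise AND; the encoding scheme and stream length are illustrative assumptions, and the article's actual circuits are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def to_bitstream(p, length=4096):
    """Unipolar encoding: a value p in [0, 1] becomes a random bitstream
    whose fraction of 1s equals p in expectation."""
    return rng.random(length) < p

def from_bitstream(bits):
    """Decode a bitstream back to a value: the fraction of 1s."""
    return bits.mean()

a, b = 0.6, 0.3
# With independent streams, multiplication reduces to one AND gate per bit.
product = from_bitstream(to_bitstream(a) & to_bitstream(b))
print(product)  # close to 0.18, up to sampling noise
```

The `to_bitstream`/`from_bitstream` steps model exactly the conversion blocks the abstract identifies as the dominant cost: the processing itself is a single logic gate per bit, but every operand must first be serialized into a stream and every result decoded back out.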