Analog in-memory computing is a promising future technology for efficiently accelerating deep learning networks. While the use of in-memory computing to accelerate the inference phase has been studied extensively, accelerating the training phase has received less attention, despite its arguably much larger compute demand. Some analog in-memory training algorithms have been proposed, but they either invoke a significant amount of auxiliary digital compute, accumulating the gradient in digital floating-point precision and thereby limiting the potential speed-up, or suffer from the need to program reference conductance values almost perfectly in order to establish an algorithmic zero point.
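To make the digital-accumulation overhead concrete, here is a minimal NumPy sketch of a mixed analog/digital training loop: the weights live on a (noisily programmed) analog array, while the gradient is accumulated in digital floating point before being written back. The device model, noise level, write interval, and toy gradient are assumptions for illustration, not a real accelerator API.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_write(target, write_noise=0.02):
    """Model programming a conductance: the stored value picks up device noise (assumed)."""
    return target + write_noise * rng.standard_normal(target.shape)

W_analog = analog_write(0.1 * rng.standard_normal((4, 8)))  # weights held on the analog array
grad_accum = np.zeros_like(W_analog)                         # digital floating-point accumulator

lr, accum_steps = 0.01, 10
for step in range(100):
    x = rng.standard_normal(8)
    y = W_analog @ x                        # forward pass uses the analog weights
    grad = np.outer(y - np.sin(x[:4]), x)   # toy gradient, purely for illustration
    grad_accum += grad                      # accumulate in digital precision
    if (step + 1) % accum_steps == 0:
        # Only the accumulated update is written back to the analog array;
        # this digital accumulation step is the overhead referred to above.
        W_analog = analog_write(W_analog - lr * grad_accum / accum_steps)
        grad_accum[:] = 0.0
```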
Bee venom is a powerful natural product with antimicrobial activity. One of the most important areas where new antimicrobials are needed is the prevention and control of multi-drug-resistant pathogens. Today, the antibacterial products used to treat multi-drug-resistant pathogen infections in hospitals and healthcare facilities are insufficient to prevent colonisation and spread, and new products are needed.
Analog crossbar arrays comprising programmable non-volatile resistors are under intense investigation for the acceleration of deep neural network training. However, the ubiquitous asymmetric conductance modulation of practical resistive devices critically degrades the classification performance of networks trained with conventional algorithms. Here we first describe the fundamental reasons behind this incompatibility.
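As a rough illustration of why asymmetric modulation clashes with conventional training, the sketch below uses an assumed soft-bounds device model (not the paper's measured device data): up- and down-pulses have different state-dependent step sizes, so a matched +/- pulse pair does not cancel and the stored weight drifts away from the value requested by the algorithm.

```python
import numpy as np

def asymmetric_update(w, dw, w_min=-1.0, w_max=1.0, asym=0.5):
    """Apply an update with state-dependent, asymmetric step sizes (assumed device model)."""
    scale_up = (w_max - w) / (w_max - w_min)         # potentiation saturates near w_max
    scale_dn = asym * (w - w_min) / (w_max - w_min)  # depression is weaker (asym < 1)
    return w + np.where(dw > 0, dw * scale_up, dw * scale_dn)

w = np.zeros(5)
for _ in range(100):
    w = asymmetric_update(w, +0.05)  # one up-pulse ...
    w = asymmetric_update(w, -0.05)  # ... followed by an equal down-pulse
print(w)  # drifts away from zero even though the net requested update is zero
```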
Recent progress in novel non-volatile memory-based synaptic device technologies, and their feasibility for matrix-vector multiplication (MVM), has ignited active research into implementing analog neural network training accelerators with resistive crosspoint arrays. While significant performance boosts as well as area and power efficiency gains are theoretically predicted, the realization of such analog accelerators is largely limited by the non-ideal switching characteristics of the crosspoint elements. One of the most performance-limiting non-idealities is conductance update asymmetry, which is known to distort the actual weight changes away from the values calculated by error back-propagation and therefore significantly deteriorates neural network training performance.
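For context, here is a small sketch of the MVM-on-a-crossbar idea that motivates these accelerators: with voltages applied to the inputs, each output current is a conductance-weighted sum of the inputs, and a differential pair of conductances (an assumed, commonly used encoding) represents a signed weight. The normalization and array size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((3, 6))                          # signed weight matrix
g_max = 1.0
G_pos = g_max * np.clip(W, 0, None) / np.abs(W).max()    # conductances encoding positive parts
G_neg = g_max * np.clip(-W, 0, None) / np.abs(W).max()   # conductances encoding negative parts

v = rng.standard_normal(6)                               # input voltages
i_out = G_pos @ v - G_neg @ v                            # differential column currents

# Same result as the digital MVM, up to the conductance scaling factor.
print(np.allclose(i_out, (W @ v) / np.abs(W).max()))     # True
```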
Deep neural networks (DNNs) are typically trained using the conventional stochastic gradient descent (SGD) algorithm. However, SGD performs poorly when used to train networks on non-ideal analog hardware composed of resistive device arrays with non-symmetric conductance modulation characteristics. Recently we proposed a new algorithm, the Tiki-Taka algorithm, that overcomes this stringent symmetry requirement.
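Below is a highly simplified, floating-point sketch of the two-array idea behind such a scheme: gradient updates land on an auxiliary array A, which is periodically and partially transferred into the weight array W. This is a loose reading for illustration only, not the authors' exact algorithm; the pulse coding, device dynamics, toy loss, and all hyperparameters are assumptions, and the explicit decay of A merely stands in for device behavior that pulls it toward its symmetry point.

```python
import numpy as np

rng = np.random.default_rng(2)
W = 0.1 * rng.standard_normal((4, 8))   # weight array read during forward/backward passes
A = np.zeros_like(W)                    # auxiliary array that absorbs raw gradient updates

lr, transfer_rate, transfer_every = 0.01, 0.1, 5
for step in range(200):
    x = rng.standard_normal(8)
    err = W @ x - np.tanh(x[:4])        # toy error signal for illustration
    A -= lr * np.outer(err, x)          # gradient update is applied to A, not directly to W
    if (step + 1) % transfer_every == 0:
        W += transfer_rate * A          # slow transfer from A into W
        A *= (1.0 - transfer_rate)      # stand-in for decay toward the symmetry point
```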