IEEE Trans Biomed Circuits Syst
June 2019
Advances in neuroscience uncover the mechanisms employed by the brain to efficiently solve complex learning tasks with very limited resources. However, this efficiency is often lost when these findings are ported to a silicon substrate, because brain-inspired algorithms make extensive use of complex functions, such as random number generators, that are expensive to compute on standard general-purpose hardware. The prototype chip of the second-generation SpiNNaker system is designed to overcome this problem.
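To illustrate why random number generation becomes a bottleneck, the following minimal NumPy sketch simulates a network in which every synaptic transmission is Bernoulli-sampled. This is an illustrative assumption, not the SpiNNaker 2 neuron model: the point is only that one random draw per synapse per timestep quickly dominates software runtime, which is the class of cost that dedicated hardware random number generators remove.

```python
import numpy as np

# Hypothetical stochastic-synapse model (illustrative, not the SpiNNaker 2
# implementation): each simulation step draws one random number per synapse,
# so a 1000 x 1000 connection matrix already needs ~10^6 draws per step.
rng = np.random.default_rng(0)

n_pre, n_post = 1000, 1000
weights = rng.normal(0.0, 0.1, size=(n_pre, n_post))
p_release = 0.5                      # assumed transmission probability per synapse

def step(pre_spikes: np.ndarray) -> np.ndarray:
    """One timestep: stochastic synaptic transmission followed by a threshold."""
    # One Bernoulli draw per synapse of every spiking presynaptic neuron.
    release = rng.random((n_pre, n_post)) < p_release
    drive = (pre_spikes[:, None] * release * weights).sum(axis=0)
    return drive > 1.0               # postsynaptic spikes

post = step(rng.random(n_pre) < 0.1)
print(int(post.sum()), "postsynaptic spikes")
```

On a general-purpose CPU the `rng.random` call accounts for most of the work in `step`; on hardware with on-chip random number generators the same draws come essentially for free, which is the efficiency gap the abstract refers to.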
The memory requirement of deep learning algorithms is considered incompatible with the memory restrictions of energy-efficient hardware. A low memory footprint can be achieved by pruning obsolete connections or reducing the precision of connection strengths after the network has been trained. Yet, these techniques are not applicable when neural networks have to be trained directly on hardware under hard memory constraints.
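The two post-training compression steps named in the abstract can be sketched as follows. This is a hedged illustration of magnitude-based pruning followed by precision reduction; the pruning fraction and the 8-bit quantization are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative post-training compression: prune small-magnitude connections,
# then store the survivors at reduced precision. Thresholds and bit widths
# are assumptions for the example, not the paper's settings.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=(256, 256)).astype(np.float32)

# 1. Prune: drop the 90% of connections with the smallest magnitude.
threshold = np.quantile(np.abs(weights), 0.90)
mask = np.abs(weights) >= threshold
pruned = weights * mask

# 2. Reduce precision: map surviving weights to 8-bit signed integers.
scale = np.abs(pruned).max() / 127.0
quantized = np.round(pruned / scale).astype(np.int8)

dense_bytes = weights.nbytes
sparse_bytes = int(mask.sum()) * (1 + 4)   # ~1 byte weight + ~4 bytes index
print(f"dense: {dense_bytes} B, pruned + quantized (approx.): {sparse_bytes} B")
```

Both steps operate on a network that has already been trained at full size and full precision, which is exactly why they do not help when training itself must run on the memory-constrained device.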