Publications by authors named "Malu Zhang"

Decoding visual and auditory stimuli from brain activity recordings, such as electroencephalography (EEG), offers a promising route to enhancing machine-to-human interaction. However, effectively representing EEG signals remains a significant challenge. In this paper, we introduce a novel Delayed Knowledge Transfer (DKT) framework that employs spiking neurons for attention detection, using our experimental EEG dataset.

Spiking Neural Networks (SNNs) hold great potential for mimicking the brain's efficient processing of information. Although biological evidence suggests that precise spike timing is crucial for effective information encoding, contemporary SNN research mainly concentrates on adjusting connection weights. In this work, we introduce Delay Learning based on Temporal Coding (DLTC), an innovative approach that integrates delay learning with a temporal coding strategy to optimize spike timing in SNNs.
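
To make the delay-learning idea concrete, here is a minimal NumPy sketch, under assumptions of my own (a rectified-linear PSP and time-to-first-spike coding), of training per-synapse delays so an output spike lands at a target time; it illustrates the general technique, not the paper's DLTC algorithm.

    import numpy as np

    # Toy delay learning under time-to-first-spike coding (illustrative only,
    # not the DLTC algorithm). With a rectified-linear PSP, the membrane is
    # piecewise linear in t, so the output spike time over a causal set C is
    #   t_out = (theta + sum_{i in C} w_i*(t_i + d_i)) / sum_{i in C} w_i.

    rng = np.random.default_rng(0)
    n_in = 5
    w = rng.uniform(0.5, 1.0, n_in)      # fixed synaptic weights
    d = rng.uniform(0.0, 1.0, n_in)      # learnable per-synapse delays (ms)
    t_in = rng.uniform(0.0, 5.0, n_in)   # input spike times (ms)
    theta, t_target, lr = 2.0, 6.0, 0.1

    def spike_time(w, t_eff, theta):
        """First time the linear-PSP membrane crosses theta."""
        order = np.argsort(t_eff)
        for k in range(1, len(order) + 1):
            c = order[:k]                              # candidate causal set
            t_out = (theta + np.sum(w[c] * t_eff[c])) / np.sum(w[c])
            nxt = t_eff[order[k]] if k < len(order) else np.inf
            if t_eff[c].max() <= t_out <= nxt:         # crossing is consistent
                return t_out, c
        return np.inf, order

    for step in range(300):
        t_out, causal = spike_time(w, t_in + d, theta)
        err = t_out - t_target
        grad = np.zeros_like(d)
        grad[causal] = w[causal] / np.sum(w[causal])   # dt_out/dd_j on C
        d = np.clip(d - lr * err * grad, 0.0, None)    # delays stay >= 0

    print(f"output spike at {spike_time(w, t_in + d, theta)[0]:.3f} ms "
          f"(target {t_target} ms)")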

Recent advances in bio-inspired vision with event cameras and associated spiking neural networks (SNNs) have provided promising solutions for low-power neuromorphic tasks. However, as research on event cameras is still in its infancy, the amount of labeled event-stream data is far smaller than that of RGB databases. The traditional workaround of converting static images into event streams by simulation to increase the sample size cannot reproduce characteristics of event cameras such as their high temporal resolution.
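
For illustration, a minimal sketch of that naive conversion (function names and thresholds are my own choices): slide a static image to fake motion and emit ON/OFF events wherever the log intensity at a pixel changes by more than a contrast threshold. Its frame-based clock is exactly why the simulation lacks an event camera's microsecond resolution.

    import numpy as np

    def image_to_events(img, n_steps=20, shift_per_step=1, threshold=0.2):
        img = img.astype(np.float64) / 255.0
        log_ref = np.log(img + 1e-6)           # per-pixel reference log intensity
        events = []                            # (t, y, x, polarity)
        for t in range(1, n_steps + 1):
            shifted = np.roll(img, shift=t * shift_per_step, axis=1)
            log_now = np.log(shifted + 1e-6)
            diff = log_now - log_ref
            ys, xs = np.where(np.abs(diff) >= threshold)
            for y, x in zip(ys, xs):
                events.append((t, y, x, 1 if diff[y, x] > 0 else -1))
                log_ref[y, x] = log_now[y, x]  # reset reference after an event
            # a real event camera updates asynchronously at microsecond scale;
            # this frame-based loop is the limitation the text points out
        return events

    demo = np.tile(np.linspace(0, 255, 32), (32, 1)).astype(np.uint8)
    print(len(image_to_events(demo)), "events from a 32x32 gradient image")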

Spiking neural networks (SNNs) are attracting widespread interest due to their biological plausibility, energy efficiency, and powerful spatiotemporal information representation ability. Given the critical role of attention mechanisms in enhancing neural network performance, the integration of SNNs and attention mechanisms exhibits tremendous potential to deliver energy-efficient and high-performance computing paradigms. In this article, we present a novel temporal-channel joint attention mechanism for SNNs, referred to as TCJA-SNN.
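
As a rough illustration of what joint temporal-channel attention computes, here is a squeeze-and-excitation-style NumPy sketch over a spike tensor of shape (T, C, H, W); it conveys the general idea only, and the weight matrices are placeholders, not the TCJA-SNN mechanism itself.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def temporal_channel_attention(x, w_temporal, w_channel):
        # x: (T, C, H, W) spike features
        s = x.mean(axis=(2, 3))                      # (T, C) average firing rates
        a_t = sigmoid(w_temporal @ s.mean(axis=1))   # (T,) time-step scores
        a_c = sigmoid(w_channel @ s.mean(axis=0))    # (C,) channel scores
        # joint attention: recalibrate every (t, c) slice multiplicatively
        return x * a_t[:, None, None, None] * a_c[None, :, None, None]

    rng = np.random.default_rng(1)
    T, C, H, W = 4, 8, 16, 16
    spikes = (rng.random((T, C, H, W)) < 0.1).astype(np.float64)
    out = temporal_channel_attention(
        spikes,
        w_temporal=rng.standard_normal((T, T)) * 0.1,
        w_channel=rng.standard_normal((C, C)) * 0.1,
    )
    print(out.shape)  # (4, 8, 16, 16)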

Spiking Neural Networks (SNNs) have become one of the most prominent next-generation computational models owing to their biological plausibility, low power consumption, and potential for neuromorphic hardware implementation. Among the various methods for obtaining usable SNNs, converting Artificial Neural Networks (ANNs) into SNNs is the most cost-effective approach. Early ANN-to-SNN conversion work revolved around the susceptibility of converted SNNs to conversion errors.
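
A minimal sketch of the textbook rate-based conversion (not the paper's specific method) shows where conversion error comes from: an integrate-and-fire neuron driven by a constant input approximates relu with its firing rate, up to quantization and clipping.

    import numpy as np

    def relu(a):
        return np.maximum(a, 0.0)

    def if_rate(a, theta=1.0, T=100):
        """Spikes per step of an IF neuron with constant input a over T steps."""
        v, spikes = 0.0, 0
        for _ in range(T):
            v += a
            if v >= theta:          # fire and reset by subtraction
                v -= theta
                spikes += 1
        return spikes / T

    for a in [-0.5, 0.1, 0.33, 0.9]:
        print(f"a={a:+.2f}  relu={relu(a):.3f}  snn_rate={if_rate(a):.3f}")
    # With more time steps T the rate converges to clip(a, 0, 1); the residual
    # mismatch at small T is the quantization part of the conversion error.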

Spiking neural networks (SNNs) are brain-inspired models that transmit information through discrete, sparse spikes, giving them inherent energy efficiency. Recent advances in learning algorithms have greatly improved SNN performance by automating feature engineering. While the choice of neural architecture plays a significant role in deep learning, current SNN architectures are still mainly designed by hand, a time-consuming and error-prone process.

Traditional spiking learning algorithms aim to train neurons to spike at a specific time or at a particular frequency, which requires precise time and frequency labels during training. In reality, however, usually only aggregated labels of sequential patterns are provided. Aggregate-label (AL) learning has been proposed to discover these predictive features in distracting background streams from aggregated spikes alone.
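
To illustrate the setting, here is a deliberately simplified toy in the spirit of aggregate-label learning (my own heuristic, not the paper's algorithm): the trainer knows only the total number of spikes the neuron should emit over a trial, and nudges weights around membrane-potential peaks to add spikes, or around emitted spikes to remove them.

    import numpy as np

    rng = np.random.default_rng(2)
    T, n_in, tau, theta, lr = 500, 50, 20.0, 1.0, 0.01
    x = (rng.random((T, n_in)) < 0.02).astype(float)   # input spike trains
    w = rng.uniform(0.0, 0.05, n_in)
    target_count = 3                                   # the aggregate label

    def run(w):
        v, vs, out = 0.0, [], []
        for t in range(T):
            v = v * np.exp(-1.0 / tau) + x[t] @ w
            if v >= theta:
                out.append(t)
                v = 0.0                                # reset after a spike
            vs.append(v)
        return np.array(vs), out

    def eligibility(t):
        """Exponentially filtered presynaptic activity just before time t."""
        k = np.exp(-(t - np.arange(t + 1)) / tau)
        return k @ x[: t + 1]

    for epoch in range(200):
        vs, out = run(w)
        if len(out) == target_count:
            break
        if len(out) < target_count:
            t_peak = int(np.argmax(vs))
            w += lr * eligibility(t_peak)              # push a new spike out
        else:
            w -= lr * eligibility(out[-1])             # delete the last spike

    print(f"spikes emitted: {len(run(w)[1])} (target {target_count})")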

Word representations, usually derived from large corpora and endowed with rich semantic information, have been widely applied to natural language tasks. Traditional deep language models, built on dense word representations, require large memory space and computing resources. Brain-inspired neuromorphic computing systems, with the advantages of better biological interpretability and lower energy consumption, still face major difficulties in representing words in terms of neuronal activities, which has restricted their application to more complicated downstream language tasks.
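
A minimal sketch of one generic way to express a word as neuronal activity (an illustrative rate-coding construction, not the paper's encoding): rescale each embedding dimension to a firing probability and sample Bernoulli spike trains.

    import numpy as np

    def embedding_to_spikes(vec, T=50, max_rate=0.5, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        # map each dimension to a firing probability in [0, max_rate]
        p = (vec - vec.min()) / (vec.max() - vec.min() + 1e-9) * max_rate
        return (rng.random((T, len(vec))) < p).astype(np.uint8)  # (T, D)

    word_vec = np.random.default_rng(3).standard_normal(300)  # stand-in for a
    spikes = embedding_to_spikes(word_vec)                    # GloVe-style vector
    print(spikes.shape, "mean firing rate:", spikes.mean())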

In the real world, information is often temporally correlated. Whether a system can make effective decisions based on global information is a key indicator of its information-processing ability. Owing to the discrete nature of spike trains and their unique temporal dynamics, spiking neural networks (SNNs) show great potential for applications on ultra-low-power platforms and in various temporally structured real-life tasks.

Spiking neural networks (SNNs) have recently demonstrated outstanding performance in a variety of high-level tasks, such as image classification. However, advances in low-level tasks, such as image reconstruction, are rare. This may be due to the lack of promising image encoding techniques and corresponding neuromorphic devices designed specifically for SNN-based low-level vision problems.
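
The encode/decode roundtrip below sketches why encoding quality matters for low-level vision (a generic rate-coding illustration, not the paper's technique): pixel intensities become Bernoulli spike trains, the image is reconstructed from firing rates, and the reconstruction error shrinks only as the number of time steps grows.

    import numpy as np

    rng = np.random.default_rng(4)
    img = rng.random((8, 8))                  # toy image, intensities in [0, 1]

    def encode(img, T):
        # Bernoulli spikes with per-pixel probability equal to intensity
        return (rng.random((T,) + img.shape) < img).astype(np.uint8)

    def decode(spikes):
        return spikes.mean(axis=0)            # firing rate ~ intensity

    for T in (10, 100, 1000):
        err = np.abs(decode(encode(img, T)) - img).mean()
        print(f"T={T:4d}  mean reconstruction error={err:.4f}")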

Episodic memory is fundamental to the brain's cognitive function, but how neuronal activity is temporally organized during its encoding and retrieval is still unknown. In this article, we combine the structure of the hippocampus with a spiking neural network (SNN) to propose a new bionic spiking temporal memory (BSTM) model that explores the encoding, formation, and retrieval of episodic memory. To encode episodic memory, the spike-timing-dependent plasticity (STDP) learning rule and a proposed minicolumn selection algorithm are used to encode each input item into several active minicolumns.
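
For reference, the classic pairwise STDP rule mentioned above can be written in a few lines (constants are illustrative; the BSTM minicolumn selection step is not shown):

    import numpy as np

    def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
        dt = t_post - t_pre
        if dt > 0:                     # pre before post -> potentiation (LTP)
            return a_plus * np.exp(-dt / tau)
        elif dt < 0:                   # post before pre -> depression (LTD)
            return -a_minus * np.exp(dt / tau)
        return 0.0

    for dt in (-40, -10, -1, 1, 10, 40):
        print(f"dt={dt:+3d} ms  dw={stdp_dw(0, dt):+.4f}")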

Brain-inspired spiking neural networks (SNNs) offer the advantages of lower power consumption and powerful computing capability. However, the lack of effective learning algorithms has hindered both theoretical advances and practical applications of SNNs. The majority of existing learning algorithms for SNNs are based on synaptic weight adjustment.

Spiking neural networks (SNNs) have shown clear advantages over traditional artificial neural networks (ANNs) in latency and computational efficiency, owing to their event-driven nature and sparse communication. However, training deep SNNs is not straightforward. In this paper, we propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition, referred to as progressive tandem learning.

Article Synopsis
  • Spiking neural networks (SNNs) leverage spike patterns for information processing, offering a biologically grounded, energy-efficient approach for neuromorphic systems, but training them is challenging because traditional backpropagation algorithms do not apply directly.
  • The study introduces a novel spike-timing-dependent backpropagation (STDBP) method and a new rectified linear postsynaptic potential function (ReL-PSP), allowing deep SNNs to learn effectively from the timing of spikes (a minimal worked example follows this list).
  • Experimental results indicate that deep SNNs trained with STDBP achieve high classification accuracy, while neuromorphic hardware implementing the model demonstrates ultra-low power consumption (0.751 mW) and rapid image classification.
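
A minimal worked example of the ReL-PSP idea with toy numbers (assuming all three inputs lie in the causal set, which the result below satisfies): with PSP(t) = t - t_i for t > t_i, the membrane is piecewise linear, so the spike time and its gradient with respect to the weights come out in closed form, with no surrogate gradient.

    import numpy as np

    # With V(t) = sum_i w_i * (t - t_i) for t > t_i, firing at V = theta gives
    #   t_out = (theta + sum_{i in C} w_i * t_i) / sum_{i in C} w_i
    # and d t_out / d w_j = (t_j - t_out) / sum_{i in C} w_i, an ordinary
    # derivative usable for backpropagation through spike times.

    w = np.array([0.5, 0.4, 0.6])     # synaptic weights (causal set C)
    t_in = np.array([1.0, 2.0, 3.0])  # input spike times (ms)
    theta = 2.0

    t_out = (theta + np.sum(w * t_in)) / np.sum(w)
    grad_w = (t_in - t_out) / np.sum(w)
    print(f"t_out = {t_out:.3f} ms, d t_out/d w = {np.round(grad_w, 3)}")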

Spiking neural networks (SNNs) represent the most prominent biologically inspired computing model for neuromorphic computing (NC) architectures. However, due to the nondifferentiable nature of spiking neuronal functions, the standard error backpropagation algorithm is not directly applicable to SNNs. In this work, we propose a tandem learning framework that consists of an SNN and an artificial neural network (ANN) coupled through weight sharing.
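
A one-layer toy sketch of the tandem idea (shapes, loss, and learning rate are my own choices, not the paper's full framework): the SNN side produces the forward pass with discrete integrate-and-fire spikes, while the backward pass updates the shared weights from rate-like quantities derived from those spikes.

    import numpy as np

    rng = np.random.default_rng(5)
    T, n_in, n_out, theta, lr = 20, 10, 4, 1.0, 0.05
    w = rng.standard_normal((n_in, n_out)) * 0.3      # shared weights
    x_rate = rng.random(n_in)                         # input firing probabilities
    target = np.array([2.0, 0.0, 4.0, 1.0])           # desired output spike counts

    for step in range(300):
        x_spk = (rng.random((T, n_in)) < x_rate).astype(float)
        # --- SNN forward: integrate-and-fire with reset by subtraction ---
        v = np.zeros(n_out)
        counts = np.zeros(n_out)
        for t in range(T):
            v += x_spk[t] @ w
            fired = v >= theta
            counts += fired
            v[fired] -= theta
        # --- ANN-style backward on the shared weights, driven by spike counts ---
        err = counts - target             # gradient of 0.5 * ||counts - target||^2
        grad_w = np.outer(x_spk.sum(axis=0), err) / T
        w -= lr * grad_w

    print("final counts:", counts, "target:", target)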

Spiking neural networks (SNNs) are regarded as effective models for processing spatio-temporal information. However, the inherent complexity of their temporal coding makes devising an effective supervised learning algorithm an arduous task that still puzzles researchers in this area. In this paper, we propose a Recursive Least Squares-Based Learning Rule (RLSBLR) for SNNs to generate desired spatio-temporal spike trains.
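
The core RLS ingredient of such a rule can be sketched generically (this fits a target trace from filtered spike inputs and is not the paper's exact RLSBLR rule): maintain an inverse-correlation matrix P and update the weights online from the instantaneous error.

    import numpy as np

    rng = np.random.default_rng(6)
    T, n_in, tau = 400, 30, 10.0
    spk = (rng.random((T, n_in)) < 0.05).astype(float)  # presynaptic spikes
    target = np.sin(np.arange(T) / 30.0)                # desired trace (toy)

    w = np.zeros(n_in)
    P = np.eye(n_in) / 0.1            # inverse correlation estimate
    x = np.zeros(n_in)                # exponentially filtered spike traces
    for t in range(T):
        x = x * np.exp(-1.0 / tau) + spk[t]
        e = w @ x - target[t]         # instantaneous output error
        Px = P @ x
        k = Px / (1.0 + x @ Px)       # RLS gain vector
        w -= e * k                    # error-proportional weight update
        P -= np.outer(k, Px)          # rank-1 update of P

    x, sq_errs = np.zeros(n_in), []
    for t in range(T):
        x = x * np.exp(-1.0 / tau) + spk[t]
        sq_errs.append((w @ x - target[t]) ** 2)
    print("post-training MSE:", np.mean(sq_errs))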

Artificial neural networks (ANNs) have become the mainstream acoustic modeling technique for large-vocabulary automatic speech recognition (ASR). A conventional ANN features a multi-layer architecture that requires massive amounts of computation. Brain-inspired spiking neural networks (SNNs) closely mimic biological neural networks and can operate on low-power neuromorphic hardware with spike-based computation.

The auditory front-end is an integral part of a spiking neural network (SNN) performing auditory cognitive tasks. It encodes temporally dynamic stimuli, such as speech and audio, into efficient, effective, and reconstructable spike patterns to facilitate subsequent processing. However, most auditory front-ends in current studies have not made use of recent findings in psychoacoustics and physiology concerning human listening.
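
A bare-bones front-end sketch (band edges, the delta threshold, and frame size are arbitrary choices of mine): band-pass the waveform with a small filterbank, then emit one spike per fixed rise in a band's energy envelope, a simple threshold-crossing code. Psychoacoustically grounded front-ends are far richer than this.

    import numpy as np

    def audio_to_spikes(wave, sr, bands=((100, 400), (400, 1600), (1600, 6400)),
                        delta=0.05, frame=160):
        spikes = []                                  # (frame_index, band_index)
        spec = np.fft.rfft(wave)
        freqs = np.fft.rfftfreq(len(wave), 1.0 / sr)
        for b, (lo, hi) in enumerate(bands):
            # crude band-pass via a Fourier mask, for clarity
            band = np.fft.irfft(spec * ((freqs >= lo) & (freqs < hi)),
                                n=len(wave))
            env = np.abs(band)
            ref = 0.0
            for i in range(0, len(wave) - frame, frame):
                level = env[i:i + frame].mean()
                while level >= ref + delta:          # one spike per delta rise
                    spikes.append((i // frame, b))
                    ref += delta
                ref = min(ref, level)                # follow decays silently
        return spikes

    sr = 16000
    t = np.arange(sr) / sr
    chirp = np.sin(2 * np.pi * (200 * t + 1000 * t ** 2))  # 1 s rising sweep
    print(len(audio_to_spikes(chirp, sr)), "spikes emitted")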

Environmental sounds form part of our daily life. With the advancement of deep learning models and the abundance of training data, the performance of automatic sound classification (ASC) systems has improved significantly in recent years. However, the high computational cost, and hence high power consumption, remains a major hurdle for large-scale deployment of ASC systems on mobile and wearable devices.

Spiking neurons are becoming increasingly popular owing to their biological plausibility and promising computational properties. Unlike traditional rate-based neural models, spiking neurons encode information in the temporal patterns of the transmitted spike trains, which makes them more suitable for processing spatiotemporal information. One of the fundamental computations of spiking neurons is to transform streams of input spike trains into precisely timed firing activity.
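
That fundamental transformation in its simplest form, as a leaky integrate-and-fire (LIF) simulation with toy parameters:

    import numpy as np

    rng = np.random.default_rng(7)
    T, n_in = 200, 20                     # time steps (ms) and input neurons
    x = (rng.random((T, n_in)) < 0.05).astype(float)  # input spike trains
    w = rng.uniform(0.0, 0.3, n_in)       # synaptic weights
    tau, theta = 20.0, 1.0                # membrane time constant, threshold

    v, out_times = 0.0, []
    for t in range(T):
        v = v * np.exp(-1.0 / tau) + x[t] @ w   # leak, then integrate inputs
        if v >= theta:
            out_times.append(t)                 # emit a precisely timed spike
            v = 0.0                             # reset membrane potential
    print("output spike times (ms):", out_times)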

Spiking neural networks (SNNs), the third generation of neural networks, perform remarkably well in cognitive tasks such as pattern recognition. The spike-emitting and information processing mechanisms found in biological cognitive systems motivate the use of hierarchical structure and temporal encoding in SNNs, which have exhibited strong computational capability. However, the hierarchical structure and the temporal encoding approach require neurons to process information serially in space and time, respectively, which significantly reduces training efficiency.
