Publications by authors named "Guangshe Zhao"

By mimicking the neurons and synapses of the human brain and running spiking neural networks on neuromorphic chips, neuromorphic computing offers a promising route to energy-efficient machine intelligence. A fundamental question is how to borrow high-level dynamic mechanisms of the brain to help neuromorphic computing realize its energy advantages. This work presents an application-oriented, algorithm-software-hardware co-designed neuromorphic system to address this question.

Event-based vision, a new visual paradigm with bio-inspired dynamic perception and μs-level temporal resolution, has prominent advantages in many specific visual scenarios and has attracted much research interest. Spiking neural networks (SNNs) are naturally suited to processing event streams thanks to their temporal information processing capability and event-driven nature. However, existing SNN works neglect the fact that input event streams are spatially sparse and temporally non-uniform, and simply treat these varying inputs equally.
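
As a minimal illustration of what spatially sparse, temporally non-uniform event streams look like as SNN input, the sketch below bins asynchronous (x, y, t, polarity) events into a fixed number of frames; the event format, tensor shapes, and binning rule are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def events_to_frames(events, num_steps, height, width):
    """Bin an asynchronous event stream into `num_steps` frames.

    `events` is assumed to be an (N, 4) array of (x, y, t, polarity)
    rows; this format is an illustrative assumption, not the paper's.
    """
    frames = np.zeros((num_steps, 2, height, width), dtype=np.float32)
    t_min, t_max = events[:, 2].min(), events[:, 2].max()
    # Map each event's timestamp to one of the discrete time bins.
    bins = ((events[:, 2] - t_min) / (t_max - t_min + 1e-9) * num_steps).astype(int)
    bins = np.clip(bins, 0, num_steps - 1)
    for (x, y, _, p), b in zip(events, bins):
        frames[b, int(p), int(y), int(x)] += 1.0
    return frames

# Example: 1,000 random events on a 128x128 sensor, binned into 10 steps.
rng = np.random.default_rng(0)
ev = np.stack([rng.integers(0, 128, 1000),      # x
               rng.integers(0, 128, 1000),      # y
               np.sort(rng.random(1000)),       # t (non-uniform)
               rng.integers(0, 2, 1000)], axis=1)
print(events_to_frames(ev, 10, 128, 128).shape)  # (10, 2, 128, 128)
```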

Brain-inspired spiking neural networks (SNNs) are becoming a promising energy-efficient alternative to traditional artificial neural networks (ANNs). However, the performance gap between SNNs and ANNs has been a significant hindrance to deploying SNNs ubiquitously. To leverage the full potential of SNNs, in this paper we study attention mechanisms, which help humans focus on important information.
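
As an illustration of the kind of attention mechanism discussed here, the sketch below applies a squeeze-and-excitation style channel attention to spiking feature maps; the module layout, tensor shapes, and reduction ratio are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention; an illustrative
    stand-in for the attention studied here, not its exact design."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (T, B, C, H, W) spiking feature maps over T time steps.
        t, b, c, h, w = x.shape
        squeeze = x.mean(dim=(3, 4))                           # (T, B, C) spatial average
        scale = self.fc(squeeze.reshape(t * b, c)).reshape(t, b, c, 1, 1)
        return x * scale                                       # reweight channels per step

attn = ChannelAttention(channels=32)
spikes = (torch.rand(4, 2, 32, 16, 16) > 0.8).float()          # toy binary spike tensor
print(attn(spikes).shape)  # torch.Size([4, 2, 32, 16, 16])
```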

Bio-inspired recipes are being introduced to artificial neural networks for the efficient processing of spatio-temporal tasks. Among them, the Leaky Integrate-and-Fire (LIF) model is the most notable thanks to its temporal processing capability, lightweight model structure, and well-investigated direct training methods. However, most learnable LIF networks treat neurons as independent individuals that communicate via chemical synapses, leaving electrical synapses behind.
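
For reference, the sketch below shows a standard discrete-time LIF update of the kind the paragraph refers to; the decay constant, threshold, and hard-reset rule are illustrative choices rather than the paper's specific formulation.

```python
import numpy as np

def lif_step(v, x, tau=2.0, v_th=1.0, v_reset=0.0):
    """One discrete-time Leaky Integrate-and-Fire update.

    v_t = v_{t-1} + (x_t - (v_{t-1} - v_reset)) / tau   (leaky integration)
    A neuron spikes when v_t >= v_th and is then reset.  Constants are illustrative.
    """
    v = v + (x - (v - v_reset)) / tau
    spike = (v >= v_th).astype(np.float32)
    v = np.where(spike > 0, v_reset, v)     # hard reset after firing
    return spike, v

# Drive 3 neurons with constant input currents for 5 time steps.
v = np.zeros(3, dtype=np.float32)
for t in range(5):
    s, v = lif_step(v, np.array([0.3, 0.6, 1.2], dtype=np.float32))
    print(f"t={t} spikes={s} v={v}")
```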

Deep neural network (DNN) compression has become a hot topic in deep learning research, since modern DNNs have grown too large to deploy on resource-constrained platforms such as embedded devices. Among the various compression methods, tensor decomposition is a relatively simple and efficient strategy owing to its solid mathematical foundations and regular data structure. Generally, tensorizing neural weights into higher-order tensors for better decomposition, and directly mapping an efficient tensor structure to a neural architecture with nonlinear activation functions, are the two most common approaches.
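
As a minimal sketch of the first of these two routes, the example below tensorizes a 256x1024 fully connected weight into a 4th-order tensor and parameterizes it in tensor-train form; the mode sizes, rank, and contraction order are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Tensorize a 256x1024 weight as a 4th-order tensor with factorized
# input/output modes (16*16) x (32*32), then parameterize it in
# tensor-train (TT) form.  Shapes and ranks here are illustrative only.
in_modes, out_modes, rank = (16, 16), (32, 32), 8

# One TT core per (input mode, output mode) pair: (r_prev, m_in, m_out, r_next).
cores = [np.random.randn(1, in_modes[0], out_modes[0], rank) * 0.1,
         np.random.randn(rank, in_modes[1], out_modes[1], 1) * 0.1]

def tt_matvec(cores, x):
    """Multiply the TT-format weight by a vector x of length prod(in_modes)."""
    t = x.reshape(in_modes)                          # (16, 16)
    # Contract input mode 0 with the first core, mode 1 with the second.
    t = np.einsum('ab,iajr->bjr', t, cores[0])       # (16, 32, rank)
    t = np.einsum('bjr,rbks->jks', t, cores[1])      # (32, 32, 1)
    return t.reshape(-1)                             # length prod(out_modes) = 1024

x = np.random.randn(256)
print(tt_matvec(cores, x).shape)   # (1024,)
# Parameter count: 1*16*32*8 + 8*16*32*1 = 8192 vs. 256*1024 = 262144 dense.
```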

Recurrent neural networks (RNNs) are powerful in tasks oriented to sequential data, such as natural language processing and video recognition. However, because modern RNNs have complex topologies and expensive space/computation complexity, compressing them has become a hot and promising topic in recent years. Among the many compression methods, tensor decomposition, e.

Relying on the rapidly increasing capacity of computing clusters and hardware, convolutional neural networks (CNNs) have been successfully applied in various fields and achieved state-of-the-art results. Despite these exciting developments, training and inference of a large-scale CNN model still incur huge memory costs, which makes such models hard to use widely on resource-limited portable devices. To address this problem, we establish a training framework for three-dimensional convolutional neural networks (3DCNNs), named QTTNet, that combines tensor-train (TT) decomposition and data quantization to further shrink the model size and decrease memory and time costs.
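
As a rough illustration of combining decomposition with quantization, the sketch below uniformly quantizes the cores of a TT-factorized weight; the symmetric 8-bit scheme and per-core scales are assumptions, not QTTNet's exact recipe.

```python
import numpy as np

def quantize_uniform(w, num_bits=8):
    """Symmetric uniform quantization of one tensor factor (illustrative;
    the bit width and scheme are assumptions, not QTTNet's exact recipe)."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(w).max() / qmax + 1e-12
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q.astype(np.int8), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Quantize each TT core of a factorized weight independently.
cores = [np.random.randn(1, 4, 4, 8), np.random.randn(8, 4, 4, 1)]
quantized = [quantize_uniform(c) for c in cores]
restored = [dequantize(q, s) for q, s in quantized]
for c, r in zip(cores, restored):
    print("max abs error:", np.abs(c - r).max())
```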

Deep neural networks (DNNs) have recently enabled impressive breakthroughs in various artificial intelligence (AI) applications due to their capability of learning high-level features from big data. However, the demand of DNNs for computational resources, especially storage, keeps growing because ever-larger models are required for more and more complicated applications. To address this problem, several tensor decomposition methods, including tensor-train (TT) and tensor-ring (TR), have been applied to compress DNNs and have shown considerable compression effectiveness.
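
To illustrate how tensor-ring differs from tensor-train, the sketch below reconstructs a small 3rd-order tensor from TR cores, closing the ring with a trace over the cyclic ranks; shapes and ranks are illustrative only.

```python
import numpy as np

# Tensor-ring (TR) reconstruction of a 3rd-order tensor from cores
# G_k of shape (r_{k-1}, n_k, r_k) with cyclic ranks (r_0 = r_3).
# Unlike tensor-train, the boundary ranks are not forced to 1.
# Shapes and ranks below are illustrative only.
r, shape = 3, (4, 5, 6)
cores = [np.random.randn(r, n, r) * 0.1 for n in shape]

def tr_reconstruct(cores):
    # Chain-multiply the cores, then close the ring with a trace.
    t = cores[0]                                       # (r, n1, r)
    t = np.einsum('ais,sjt->aijt', t, cores[1])        # (r, n1, n2, r)
    t = np.einsum('aijt,tka->ijk', t, cores[2])        # trace closes the ring
    return t

full = tr_reconstruct(cores)
print(full.shape)   # (4, 5, 6)
```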

Three-dimensional convolutional neural networks (3DCNNs) have been applied in many tasks, e.g., video and 3D point cloud recognition.

Artificial neural networks (ANNs), a popular path towards artificial intelligence, have experienced remarkable success via mature models, various benchmarks, open-source datasets, and powerful computing platforms. Spiking neural networks (SNNs), a category of promising models that mimic the neuronal dynamics of the brain, have gained much attention for brain-inspired computing and have been widely deployed on neuromorphic devices. However, there have long been ongoing debates and skepticism about the value of SNNs in practical applications.

This paper presents an approach to seeker-azimuth determination using a gyro rotor and optoelectronic sensors. In the proposed method, the gyro rotor is designed with a set of black-and-white right spherical triangle patterns on its surface. Two pairs of optoelectronic sensors are located symmetrically around the gyro rotor.
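
The abstract does not give the decoding rule, so the sketch below is purely illustrative: it assumes the black/white pattern makes a sensor's duty cycle over one rotor revolution vary linearly with the angle being measured, and maps that duty cycle to an angle under that assumption.

```python
# Purely illustrative: the decoding rule is an assumption, not the paper's method.
# Assume the black/white right-spherical-triangle pattern makes the fraction of one
# rotor revolution during which a sensor sees "white" vary linearly with the angle.
def angle_from_duty_cycle(high_time, period, angle_range_deg=60.0):
    """Map a sensor's white-pattern duty cycle to an angle (assumed linear)."""
    duty = high_time / period
    return duty * angle_range_deg

# Example: the sensor sees the white region for 2.5 ms of a 10 ms revolution.
print(angle_from_duty_cycle(2.5e-3, 10e-3))   # 15.0 degrees
```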

In this paper, an entropy-based method is proposed to forecast the demographic changes of countries. We formulate the estimation of future demographic profiles as a constrained optimization problem, anchored on the empirically validated assumption that the entropy of the age distribution increases over time. The proposed method involves three stages, namely: 1) prediction of the age distribution of a country's population based on an "age-structured population model"; 2) estimation of the age distribution for each household size with an entropy-based formulation based on an "individual household size model"; and 3) estimation of the number of households of each size based on a "total household size model".
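
As a minimal sketch of the entropy-based estimation idea, the example below chooses a household-size distribution that maximizes entropy subject to a fixed mean household size; the constraint set and numbers are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch: estimate a household-size distribution p by
# maximizing entropy subject to a known mean household size.
sizes = np.arange(1, 7)           # household sizes 1..6
mean_size = 2.8                   # assumed known average household size

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))  # minimize -H(p)

constraints = [
    {'type': 'eq', 'fun': lambda p: p.sum() - 1.0},            # probabilities sum to 1
    {'type': 'eq', 'fun': lambda p: p @ sizes - mean_size},    # fixed mean size
]
p0 = np.full(len(sizes), 1.0 / len(sizes))
res = minimize(neg_entropy, p0, bounds=[(0, 1)] * len(sizes),
               constraints=constraints)
print(np.round(res.x, 3))   # max-entropy distribution with the given mean
```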
