Visual attention prediction (VAP) is an important problem in computer vision. Most existing VAP methods are based on deep learning, yet they do not fully exploit low-level contrast features when generating the visual attention map. In this article, a novel VAP method is proposed that generates the visual attention map via bio-inspired representation learning. The bio-inspired representation combines low-level contrast and high-level semantic features simultaneously, motivated by the fact that the human eye is sensitive both to patches with high contrast and to objects with rich semantics. The proposed method consists of three main steps: 1) feature extraction; 2) bio-inspired representation learning; and 3) visual attention map generation. First, the high-level semantic feature is extracted from a refined VGG16, while the low-level contrast feature is extracted by the proposed contrast feature extraction block in a deep network. Second, during bio-inspired representation learning, the extracted low-level contrast and high-level semantic features are combined by the designed densely connected block, which concatenates the features scale by scale. Finally, a weighted-fusion layer generates the final visual attention map from the learned representations. Extensive experiments demonstrate the effectiveness of the proposed method.
DOI: http://dx.doi.org/10.1109/TCYB.2019.2931735
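Below is a minimal sketch of how such a two-branch network could be wired together, assuming PyTorch. The class names (ContrastBlock, BioInspiredVAP), the channel counts, the center-surround formulation of the contrast branch, and the single-scale concatenation standing in for the densely connected, scale-by-scale fusion are illustrative assumptions, not the authors' reported implementation.

```python
# Minimal sketch of the described pipeline; layer sizes and the fusion scheme
# are assumptions for illustration, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


class ContrastBlock(nn.Module):
    """Hypothetical low-level contrast branch: a center response minus a
    larger-surround response, followed by a small refinement conv."""
    def __init__(self, out_ch=64):
        super().__init__()
        self.center = nn.Conv2d(3, out_ch, kernel_size=3, padding=1)
        self.surround = nn.Conv2d(3, out_ch, kernel_size=7, padding=3)
        self.refine = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        contrast = torch.abs(self.center(x) - self.surround(x))  # center-surround contrast
        return F.relu(self.refine(contrast))


class BioInspiredVAP(nn.Module):
    def __init__(self):
        super().__init__()
        self.semantic = vgg16(weights=None).features   # high-level semantic backbone
        self.contrast = ContrastBlock(out_ch=64)        # low-level contrast branch
        # Fuse concatenated contrast + semantic features (stand-in for the
        # densely connected, scale-by-scale concatenation in the abstract).
        self.fuse = nn.Sequential(
            nn.Conv2d(512 + 64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(64, 1, kernel_size=1)      # weighted fusion -> 1-channel map

    def forward(self, x):
        sem = self.semantic(x)                           # (B, 512, H/32, W/32)
        con = self.contrast(x)                           # (B, 64, H, W)
        sem_up = F.interpolate(sem, size=con.shape[-2:],
                               mode="bilinear", align_corners=False)
        feats = torch.cat([con, sem_up], dim=1)          # scale-aligned concatenation
        return torch.sigmoid(self.head(self.fuse(feats)))  # attention map in [0, 1]


# Example: a 224x224 RGB image yields a 224x224 attention map.
model = BioInspiredVAP()
out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 1, 224, 224])
```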
Sensors (Basel)
December 2024
School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510275, China.
Security is an increasingly significant concern as technologies such as the internet of medical devices harvest data from multiple sources. To protect data from unauthorized access, several techniques are used, including fingerprints, passwords, and others. One technique that has attracted much attention is the use of human features, which has proven particularly effective because human-related features are difficult to impersonate.
Sensors (Basel)
November 2024
Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China.
Neural Netw
February 2025
Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; Center for Long-term Artificial Intelligence, Beijing, 100190, China; University of Chinese Academy of Sciences, Beijing, 100049, China; Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China. Electronic address:
Inspired by the brain's information processing using binary spikes, spiking neural networks (SNNs) offer significant reductions in energy consumption and are more adept at incorporating multi-scale biological characteristics. In SNNs, spiking neurons serve as the fundamental information processing units. However, most models simplify these neurons, relying primarily on the leaky integrate-and-fire (LIF) point neuron model while neglecting the structural properties of biological neurons.
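For reference, the LIF point neuron simplification mentioned above can be written in a few lines of plain Python/NumPy; the time constant, threshold, and reset values below are illustrative assumptions rather than parameters from this work.

```python
import numpy as np

def lif_simulate(input_current, tau=20.0, v_threshold=1.0, v_reset=0.0, dt=1.0):
    """Simulate a single leaky integrate-and-fire neuron; returns a binary spike train."""
    v = 0.0
    spikes = []
    for i_t in input_current:
        v += (dt / tau) * (-v + i_t)   # leaky integration of the input current
        if v >= v_threshold:           # emit a binary spike at threshold
            spikes.append(1)
            v = v_reset                # hard reset of the membrane potential
        else:
            spikes.append(0)
    return np.array(spikes)

# Example: a constant supra-threshold drive produces periodic spikes.
spikes = lif_simulate(np.full(100, 1.5))
print(spikes.sum(), "spikes in 100 time steps")
```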
Bioinspir Biomim
December 2024
National Key Laboratory of Underwater Acoustic Technology, Harbin Engineering University, Harbin 15001, People's Republic of China.
IEEE Trans Image Process
October 2024