Ecotoxicol Environ Saf
November 2024
Recent works have demonstrated that transformers can achieve promising performance in computer vision by exploiting the relationships among image patches with self-attention. However, these models only consider the attention within a single feature layer and ignore the complementarity of attention across different layers. In this article, we propose broad attention, which improves performance by incorporating the attention relationships of different layers in the vision transformer (ViT); we call the resulting model BViT.
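For intuition only, below is a minimal PyTorch sketch of the general idea of fusing attention features across layers. It is not the paper's implementation: the names (SimpleBlock, BroadAttentionViT) and the averaging fusion are illustrative assumptions, not BViT's actual design.

# Illustrative sketch, not the BViT code: a ViT-style encoder that keeps
# each layer's attention output and fuses them into one "broad" feature.
import torch
import torch.nn as nn

class SimpleBlock(nn.Module):
    """A minimal transformer encoder block (self-attention + MLP)."""
    def __init__(self, dim, heads):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):
        h = self.norm1(x)
        a, _ = self.attn(h, h, h)       # per-layer attention output
        x = x + a
        x = x + self.mlp(self.norm2(x))
        return x, a                     # expose attention so it can be reused

class BroadAttentionViT(nn.Module):
    """Stacks blocks and fuses the attention features of every layer."""
    def __init__(self, dim=192, heads=3, depth=6, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList(SimpleBlock(dim, heads) for _ in range(depth))
        self.head = nn.Linear(dim, num_classes)

    def forward(self, tokens):          # tokens: (batch, seq, dim)
        per_layer = []
        x = tokens
        for blk in self.blocks:
            x, a = blk(x)
            per_layer.append(a)         # collect each layer's attention feature
        # "Broad" fusion (assumed here: a simple average over layers),
        # combined with the deep path, then pooled over tokens.
        broad = torch.stack(per_layer).mean(dim=0)
        fused = (x + broad).mean(dim=1)
        return self.head(fused)

model = BroadAttentionViT()
logits = model(torch.randn(2, 65, 192))  # e.g., 64 patches + 1 class token
print(logits.shape)                      # torch.Size([2, 10])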
IEEE Trans Neural Netw Learn Syst
September 2022
Efficient neural architecture search (ENAS) achieves high efficiency in learning high-performance architectures via parameter sharing and reinforcement learning (RL). In the architecture search phase, ENAS employs a deep scalable architecture as its search space, and training this architecture consumes most of the search cost. Moreover, the time spent on model training is proportional to the depth of the deep scalable architecture.
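As a rough illustration, the following self-contained PyTorch sketch shows ENAS-style parameter sharing under stated assumptions: SharedLayers and the sample_child stand-in for the RL controller are hypothetical names, not ENAS's actual code. It also makes the depth-proportional cost visible, since each sampled layer adds one more shared op to the forward/backward pass.

# Illustrative sketch of ENAS-style parameter sharing (assumed names).
import random
import torch
import torch.nn as nn

class SharedLayers(nn.Module):
    """One shared weight per (layer slot, candidate op); all sampled
    child architectures reuse these weights instead of training from scratch."""
    def __init__(self, dim, max_depth, num_ops):
        super().__init__()
        self.ops = nn.ModuleList(
            nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_ops))
            for _ in range(max_depth)
        )

    def forward(self, x, arch):
        # arch: one op index per layer; its length is the child's depth,
        # so training cost grows with the depth of the scalable architecture.
        for layer, op_idx in enumerate(arch):
            x = torch.relu(self.ops[layer][op_idx](x))
        return x

shared = SharedLayers(dim=32, max_depth=12, num_ops=4)
opt = torch.optim.SGD(shared.parameters(), lr=0.01)

def sample_child(depth):
    """Stand-in for the RL controller: sample one op per layer slot."""
    return [random.randrange(4) for _ in range(depth)]

x, y = torch.randn(8, 32), torch.randn(8, 32)
for step in range(3):
    arch = sample_child(depth=12)   # deeper child => more layers to train
    loss = nn.functional.mse_loss(shared(x, arch), y)
    opt.zero_grad()
    loss.backward()
    opt.step()                      # updates only the sampled ops' shared weights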