AI Article Synopsis

Article Abstract

The purpose of infrared and visible image fusion is to combine the advantages of both modalities and generate a fused image that contains target information along with rich detail and contrast. However, existing fusion algorithms often overlook the importance of combining local and global feature extraction, leading to missing key information in the fused image. To address these challenges, this paper proposes a dual-branch fusion network combining a convolutional neural network (CNN) and a Transformer, which enhances feature extraction capability and encourages the fused image to retain more information. First, a local feature extraction module with a CNN at its core is constructed. Specifically, a residual gradient module strengthens the network's ability to extract texture information, and skip connections together with coordinate attention relate shallow features to deeper ones. In addition, a global feature extraction module based on the Transformer is constructed; by exploiting the Transformer's ability to capture the global context of the image, global features are fully extracted. The effectiveness of the proposed method is verified on several experimental datasets, where it outperforms most current state-of-the-art fusion algorithms.
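
Below is a minimal PyTorch sketch of the dual-branch idea described in the abstract. It is an illustration under stated assumptions, not the authors' implementation: the module names (ResidualGradientBlock, GlobalBranch, FusionNet), channel and patch sizes, the Sobel-based gradient path, and the single convolutional decoder are all illustrative choices, and coordinate attention is simplified to a plain additive skip connection for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualGradientBlock(nn.Module):
    """Conv block whose residual path adds Sobel gradient magnitude to
    emphasise fine texture (a stand-in for the paper's residual gradient module)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        # Fixed depth-wise Sobel kernels (x and y directions) per channel.
        kernel = torch.stack([sobel_x, sobel_x.t()]).unsqueeze(1)   # (2, 1, 3, 3)
        self.register_buffer("kernel", kernel.repeat(channels, 1, 1, 1))
        self.channels = channels

    def forward(self, x):
        grad = F.conv2d(x, self.kernel, padding=1, groups=self.channels)
        mag = torch.sqrt(grad[:, 0::2] ** 2 + grad[:, 1::2] ** 2 + 1e-6)
        return x + self.conv(x) + mag            # additive skip connection


class GlobalBranch(nn.Module):
    """Transformer encoder over patch embeddings to capture global context."""
    def __init__(self, channels, patch=8, dim=64, depth=2, heads=4):
        super().__init__()
        self.embed = nn.Conv2d(channels, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.proj = nn.Conv2d(dim, channels, 1)
        self.patch = patch

    def forward(self, x):
        tokens = self.embed(x)                                    # (B, dim, H/p, W/p)
        b, d, h, w = tokens.shape
        seq = self.encoder(tokens.flatten(2).transpose(1, 2))     # (B, N, dim)
        feat = seq.transpose(1, 2).reshape(b, d, h, w)
        # Upsample token features back to image resolution before projection.
        feat = F.interpolate(feat, scale_factor=self.patch,
                             mode="bilinear", align_corners=False)
        return self.proj(feat)


class FusionNet(nn.Module):
    """Dual-branch fusion: CNN local features + Transformer global features."""
    def __init__(self, channels=32):
        super().__init__()
        self.stem = nn.Conv2d(2, channels, 3, padding=1)          # IR + visible stacked
        self.local = nn.Sequential(ResidualGradientBlock(channels),
                                   ResidualGradientBlock(channels))
        self.global_ = GlobalBranch(channels)
        self.reconstruct = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, ir, vis):
        feat = self.stem(torch.cat([ir, vis], dim=1))
        fused = torch.cat([self.local(feat), self.global_(feat)], dim=1)
        return self.reconstruct(fused)


if __name__ == "__main__":
    ir = torch.rand(1, 1, 128, 128)      # infrared image (grayscale)
    vis = torch.rand(1, 1, 128, 128)     # visible image (grayscale)
    print(FusionNet()(ir, vis).shape)    # torch.Size([1, 1, 128, 128])
```

The key design point the abstract argues for is visible in the forward pass of FusionNet: local texture features from the CNN branch and global context features from the Transformer branch are extracted in parallel from the same shallow features and concatenated before reconstruction, so neither source of information is discarded.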

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11644922
DOI: http://dx.doi.org/10.3390/s24237729

Publication Analysis

Top Keywords

feature extraction (16)
fused image (12)
cnn transformer (8)
infrared visible (8)
visible image (8)
image fusion (8)
fusion algorithms (8)
global feature (8)
extraction module (8)
image (6)
