Image fusion is a practical technology with applications in many fields, such as medicine, remote sensing, and surveillance. This paper introduces an image fusion method based on multi-scale decomposition and joint sparse representation. First, joint sparse representation is applied to decompose the two source images into a common image and two innovation images. Second, two initial weight maps are generated by filtering the two source images separately, and the final weight maps are obtained from them by joint bilateral filtering. The innovation images are then decomposed at multiple scales using the rolling guidance filter. Finally, the final weight maps are used to generate the fused innovation image, which is combined with the common image to produce the ultimate fused image. The experimental results show that our method's average metrics are: mutual information (MI) = 5.3377, feature mutual information (FMI) = 0.5600, normalized weighted edge preservation (QAB/F) = 0.6978, and nonlinear correlation information entropy (NCIE) = 0.8226. Our method achieves better performance than state-of-the-art methods in both visual perception and objective quantification.
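As a rough illustration of the pipeline described above, the sketch below implements the main steps in Python with NumPy and OpenCV (opencv-contrib-python, for cv2.ximgproc). It is a minimal sketch rather than the authors' implementation: the joint-sparse-representation decomposition is replaced here by a simple mean/residual split, the initial weight maps are taken from smoothed Laplacian energy, and all filter parameters, the number of decomposition levels, and the file names (source_a.png, source_b.png) are illustrative assumptions, not the settings reported in the paper.

```python
# Minimal sketch of the fusion pipeline, assuming two registered grayscale
# float32 source images of identical size with values in [0, 1].
# NOTE: the joint-sparse-representation step is approximated by a mean/
# residual split; all parameters below are illustrative guesses.
import cv2
import numpy as np


def decompose_common_innovation(a, b):
    """Stand-in for the JSR decomposition: common part plus per-image residuals."""
    common = 0.5 * (a + b)
    return common, a - common, b - common


def initial_weight_map(img):
    """Saliency from local gradient energy: smoothed Laplacian magnitude."""
    lap = np.abs(cv2.Laplacian(img, cv2.CV_32F, ksize=3))
    return cv2.GaussianBlur(lap, (11, 11), 5)


def fuse(a, b, levels=2):
    common, inn_a, inn_b = decompose_common_innovation(a, b)

    # Initial weight maps from each source, turned into a binary decision map.
    sal_a, sal_b = initial_weight_map(a), initial_weight_map(b)
    w_a = (sal_a >= sal_b).astype(np.float32)

    # Refine the weight map with a joint bilateral filter guided by the source,
    # so that weight boundaries align with image edges ("final weight maps").
    w_a = cv2.ximgproc.jointBilateralFilter(a, w_a, d=-1,
                                            sigmaColor=0.1, sigmaSpace=7)
    w_b = 1.0 - w_a

    # Multi-scale decomposition of the innovation images with the rolling
    # guidance filter: successive base layers and the detail they leave behind.
    fused_inn = np.zeros_like(a)
    base_a, base_b = inn_a, inn_b
    for i in range(levels):
        next_a = cv2.ximgproc.rollingGuidanceFilter(base_a, d=-1,
                                                    sigmaColor=0.05,
                                                    sigmaSpace=2.0 * (i + 1))
        next_b = cv2.ximgproc.rollingGuidanceFilter(base_b, d=-1,
                                                    sigmaColor=0.05,
                                                    sigmaSpace=2.0 * (i + 1))
        # Weighted fusion of the detail layer at this scale.
        fused_inn += w_a * (base_a - next_a) + w_b * (base_b - next_b)
        base_a, base_b = next_a, next_b

    # Fuse the coarsest base layer, then add the common image back.
    fused_inn += w_a * base_a + w_b * base_b
    return np.clip(common + fused_inn, 0.0, 1.0)


if __name__ == "__main__":
    # Hypothetical file names; replace with registered source images.
    a = cv2.imread("source_a.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    b = cv2.imread("source_b.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    cv2.imwrite("fused.png", (fuse(a, b) * 255).astype(np.uint8))
```

Because the weight maps are applied only to the innovation (residual) components while the common image is added back unchanged, the sketch preserves the structure shared by both sources and lets the refined weights decide which source contributes the distinctive detail at each scale.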

Source: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7516424 (PMC)
DOI: http://dx.doi.org/10.3390/e22010118
