Pan-sharpening is a widely employed technique for enhancing the quality and accuracy of remote sensing images, particularly for high-resolution downstream tasks. However, existing deep-learning methods often neglect the self-similarity in remote sensing images, which can result in poor fusion of texture and spectral details, leading to artifacts such as ringing and reduced clarity in the fused image. To address these limitations, we propose the Symmetric Multi-Scale Correction-Enhancement Transformers (SMCET) model. SMCET incorporates a Self-Similarity Refinement Transformers (SSRT) module to capture self-similarity in the frequency and spatial domains within a single scale, and an encoder-decoder framework that applies multi-scale transformations to model self-similarity across scales. Our experiments on multiple satellite datasets demonstrate that SMCET outperforms existing methods, offering superior texture and spectral details. The SMCET source code can be accessed at https://github.com/yonglleee/SMCET.
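The self-similarity the abstract refers to is the tendency of textures in remote sensing imagery to repeat across the scene, so that each location can be refined using similar locations elsewhere. As a minimal sketch of that idea (a toy non-local attention step, not the paper's SSRT module), each pixel's feature can be replaced by a similarity-weighted average over all pixels:

```python
import numpy as np

def self_similarity_attention(feat, eps=1e-8):
    """Toy non-local self-similarity: refine each pixel as a
    similarity-weighted (softmax) average of all pixels' features.
    feat: (H, W, C) feature map."""
    h, w, c = feat.shape
    x = feat.reshape(h * w, c)                   # flatten spatial dims
    sim = (x @ x.T) / np.sqrt(c)                 # scaled pairwise similarity
    weights = np.exp(sim - sim.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True) + eps  # row-wise softmax
    out = weights @ x                            # aggregate similar pixels
    return out.reshape(h, w, c)

# A repeated-texture image: self-similar patches reinforce each other.
img = np.tile(np.eye(4, dtype=float)[..., None], (2, 2, 3))  # (8, 8, 3)
refined = self_similarity_attention(img)
print(refined.shape)  # (8, 8, 3)
```

Because each output is a convex combination of inputs, values stay within the input range; SMCET's actual module additionally operates in the frequency domain and at multiple scales.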
DOI: http://dx.doi.org/10.1016/j.neunet.2025.107226
J Imaging
February 2025
Academy for Engineering and Technology, Fudan University, Shanghai 200433, China.
Prostate cancer, a prevalent malignancy affecting males globally, underscores the critical need for precise prostate segmentation in diagnostic imaging. However, accurate delineation via MRI still faces several challenges: (1) The distinction of the prostate from surrounding soft tissues is impeded by subtle boundaries in MRI images. (2) Regions such as the apex and base of the prostate exhibit inherent blurriness, which complicates edge extraction and precise segmentation.
Ultrasound Med Biol
February 2025
College of Mechanical Engineering, University of South China, Hengyang, Hunan, China.
Ultrasound (US) images offer no radiation, high penetration, and real-time imaging, while optical coherence tomography (OCT) offers high resolution. Fusing endometrial images from the two modalities combines these complementary strengths to obtain more complete information on endometrial thickness. To better integrate multimodal images, we propose a Symmetric Dual-branch Residual Dense network (SDRD-Net) for OCT and US endometrial image fusion.
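A classic baseline for this kind of multimodal fusion (not the SDRD-Net architecture, just an illustration of the goal) is activity-level selection: at each pixel, keep the source whose local variance is higher, so detail-rich OCT regions and structure-rich US regions each contribute where they are most informative:

```python
import numpy as np

def fuse_max_activity(a, b, k=3):
    """Baseline pixel-level fusion: at each pixel keep the source whose
    local variance (a simple 'activity' measure) is higher.
    a, b: grayscale images of identical shape, values in [0, 1]."""
    def local_var(img):
        pad = k // 2
        p = np.pad(img, pad, mode="reflect")
        # stack all k*k shifted views, then take variance across them
        views = [p[i:i + img.shape[0], j:j + img.shape[1]]
                 for i in range(k) for j in range(k)]
        return np.var(np.stack(views), axis=0)
    mask = local_var(a) >= local_var(b)
    return np.where(mask, a, b)

oct_img = np.random.default_rng(0).random((32, 32))  # stand-in for a detailed OCT slice
us_img = np.full((32, 32), 0.5)                      # stand-in for a smoother US image
fused = fuse_max_activity(oct_img, us_img)
print(fused.shape)  # (32, 32)
```

With the constant stand-in US image (zero local variance everywhere), the fused result here simply reproduces the OCT input; learned methods like SDRD-Net aim to blend the modalities far less crudely.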
Neural Netw
May 2025
Institute of Remote Sensing and Geographic Information System, School of Earth and Space Sciences, Peking University, Beijing, 100871, China. Electronic address:
J Cell Mol Med
December 2024
School of Computer Science and Software Engineering, University of Science and Technology Liaoning, Anshan, China.
Convolutional neural networks (CNNs) are well established in handling local features in visual tasks, yet they falter in managing the complex spatial relationships and long-range dependencies that are crucial for medical image segmentation, particularly in identifying pathological changes. While vision transformers (ViTs) excel at modeling long-range dependencies, their ability to leverage local features remains inadequate. Recent ViT variants have incorporated CNNs to improve feature representation and segmentation outcomes, yet challenges with limited receptive fields and imprecise feature representation persist.
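The complementarity described above can be sketched in a toy hybrid block (an illustration of the two inductive biases, not any specific published architecture): a 3x3 box filter stands in for the CNN's local branch, and a global softmax-weighted average stands in for the ViT's long-range branch:

```python
import numpy as np

def hybrid_block(x):
    """Toy fusion of local and global branches on a (H, W) feature map."""
    # Local branch: 3x3 box filter (reflect-padded), a crude stand-in
    # for a convolution capturing neighborhood detail.
    p = np.pad(x, 1, mode="reflect")
    local = sum(p[i:i + x.shape[0], j:j + x.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
    # Global branch: a single softmax-weighted average over all
    # positions, a crude stand-in for long-range attention.
    w = np.exp(x - x.max())
    global_ctx = (w * x).sum() / w.sum()
    return local + global_ctx  # combine local detail with global context

x = np.arange(16.0).reshape(4, 4)
y = hybrid_block(x)
print(y.shape)  # (4, 4)
```

Real hybrids interleave learned convolutions and multi-head attention over feature channels; the point here is only that each branch supplies information the other cannot.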
Med Phys
February 2025
Centre for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong SAR, China.
Background: Recently, many studies have explored fusing features extracted from convolutional neural networks (CNNs) and transformers to integrate multi-scale representations for better performance in medical image segmentation tasks. Although these hybrid models have achieved better results than previous CNN-based and transformer-based methods, they suffer from high computation and space complexities.
Purpose: The purpose of this research is to address the prohibitive computation and space complexities of hybrid models, which limit their application in clinical practice where computational resources are usually constrained.
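The space complexity concern is easy to quantify: with one token per spatial position, the attention map has N x N entries for N = H * W tokens, so memory grows quartically in image side length. A small back-of-the-envelope check (illustrative figures, not from the paper):

```python
def attn_matrix_entries(h, w):
    """Entries in a full self-attention map with one token per pixel."""
    n = h * w        # number of tokens
    return n * n     # N x N attention matrix

print(attn_matrix_entries(64, 64))    # 16,777,216 entries at 64x64
print(attn_matrix_entries(256, 256))  # 4,294,967,296 entries at 256x256
```

This quadratic-in-tokens growth is why hybrid models typically apply attention only on heavily downsampled feature maps or within local windows when targeting clinically constrained hardware.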