Differences in spectra, scale, and resolution between the remote sensing datasets of the source and target domains degrade a model's cross-domain segmentation performance. Image transfer faces two problems in domain-adaptive learning: it can focus excessively on style features while ignoring semantic information, producing biased transformation results, and it can overlook the true transfer characteristics of remote sensing images, resulting in unstable model training. To address these issues, we propose a novel dual-space generative adversarial domain adaptation segmentation framework, DS-DWTGAN, that minimizes the differences between the source and target domains.