The domain gap, caused mainly by variable medical image quality, poses a major obstacle between training a segmentation model in the lab and applying the trained model to unseen clinical data. Domain generalization methods have been proposed to address this issue, but they usually rely on static convolutions and are therefore less flexible. In this paper, we propose a multi-source domain generalization model based on domain and content adaptive convolution (DCAC) for the segmentation of medical images across different modalities. Specifically, we design a domain adaptive convolution (DAC) module and a content adaptive convolution (CAC) module and incorporate both into an encoder-decoder backbone. In the DAC module, a dynamic convolutional head is conditioned on the predicted domain code of the input so that the model can adapt to the unseen target domain. In the CAC module, a dynamic convolutional head is conditioned on global image features so that the model can adapt to each test image. We evaluated the DCAC model against the baseline and four state-of-the-art domain generalization methods on prostate segmentation, COVID-19 lesion segmentation, and optic cup/optic disc segmentation tasks. Our results not only show that the proposed DCAC model outperforms all competing methods on each segmentation task but also demonstrate the effectiveness of the DAC and CAC modules. Code is available at https://git.io/DCAC.
DOI: http://dx.doi.org/10.1109/TMI.2022.3210133
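To make the idea behind the DAC and CAC modules concrete, below is a minimal PyTorch sketch (not the authors' released implementation; see https://git.io/DCAC for that) of a dynamic convolutional head whose 1x1 kernel weights are generated from a conditioning vector: the predicted domain code in the DAC case, or globally pooled image features in the CAC case. The layer sizes and the per-sample loop are illustrative assumptions.

```python
# Minimal sketch of a dynamic convolutional head whose kernel weights are
# predicted from a conditioning vector (domain code or pooled image features).
# This is an illustration of the concept, not the DCAC paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicConvHead(nn.Module):
    """1x1 convolution whose weights are generated from a condition vector."""

    def __init__(self, cond_dim: int, in_ch: int, out_ch: int):
        super().__init__()
        self.in_ch, self.out_ch = in_ch, out_ch
        # Controller maps the condition vector to kernel weights and biases.
        self.controller = nn.Linear(cond_dim, out_ch * in_ch + out_ch)

    def forward(self, feats: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # feats: (B, in_ch, H, W); cond: (B, cond_dim)
        params = self.controller(cond)
        w, b = params.split([self.out_ch * self.in_ch, self.out_ch], dim=1)
        outputs = []
        for i in range(feats.size(0)):  # apply each sample's own kernel
            weight = w[i].view(self.out_ch, self.in_ch, 1, 1)
            outputs.append(F.conv2d(feats[i:i + 1], weight, bias=b[i]))
        return torch.cat(outputs, dim=0)


if __name__ == "__main__":
    head = DynamicConvHead(cond_dim=4, in_ch=32, out_ch=2)  # e.g. 4 source domains (assumed)
    x = torch.randn(2, 32, 64, 64)                          # decoder features (assumed shape)
    domain_code = torch.softmax(torch.randn(2, 4), dim=1)   # stand-in for a predicted domain code
    print(head(x, domain_code).shape)                       # torch.Size([2, 2, 64, 64])
```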
Sci Rep
January 2025
Department of Biomedical Engineering, School of Life Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China.
Cervical cell classification can determine the degree of cellular abnormality and pathological condition, helping doctors detect the risk of cervical cancer at an early stage and improving cure and survival rates for cervical cancer patients. To address the issue of low accuracy in cervical cell classification, a deep convolutional neural network, A2SDNet121, is proposed. A2SDNet121 takes DenseNet121 as the backbone network.
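The snippet states only that A2SDNet121 uses DenseNet121 as its backbone; the additional modules implied by the name are not described here. The sketch below therefore shows just the backbone-plus-classifier idea, using torchvision's DenseNet121 and an assumed number of cervical cell classes.

```python
# Illustrative sketch only: DenseNet121 backbone with a replaced classification
# head. The number of classes is an assumption, not taken from the paper.
import torch
import torch.nn as nn
from torchvision import models


def build_cervical_classifier(num_classes: int = 5) -> nn.Module:
    backbone = models.densenet121(weights=None)   # pass ImageNet weights here if desired
    in_feats = backbone.classifier.in_features    # 1024 for DenseNet121
    backbone.classifier = nn.Linear(in_feats, num_classes)  # new classification head
    return backbone


if __name__ == "__main__":
    model = build_cervical_classifier(num_classes=5)  # hypothetical 5 cell categories
    x = torch.randn(1, 3, 224, 224)                   # one RGB cell image
    print(model(x).shape)                             # torch.Size([1, 5])
```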
Sci Total Environ
January 2025
Universidad de Santiago de Chile, Santiago, Chile.
Assessing future snow cover changes is challenging because the high spatial resolution required is typically unavailable from climate models. This study, therefore, proposes an alternative approach to estimating snow changes by developing a super-spatial-resolution downscaling model of snow depth (SD) for Japan using a convolutional neural network (CNN)-based method, and by downscaling an ensemble of models from the Coupled Model Intercomparison Project Phase 6 (CMIP6) dataset. After assessing the coherence of the observed reference SD dataset with independent observations, we leveraged it to train the CNN downscaling model; following its evaluation, we applied the trained model to CMIP6 climate simulations.
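The truncated abstract does not specify the network architecture, so the following is only a generic sketch, under assumed layer sizes and scale factor, of the kind of CNN used for super-resolution downscaling of a gridded snow-depth field: bilinear upsampling to the target grid followed by convolutional refinement.

```python
# Generic super-resolution downscaling sketch for a snow-depth (SD) field;
# architecture, channel counts, and scale factor are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SDDownscaler(nn.Module):
    def __init__(self, scale: int = 4):
        super().__init__()
        self.scale = scale
        self.refine = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # residual correction
        )

    def forward(self, coarse_sd: torch.Tensor) -> torch.Tensor:
        up = F.interpolate(coarse_sd, scale_factor=self.scale,
                           mode="bilinear", align_corners=False)
        return torch.relu(up + self.refine(up))  # snow depth is non-negative


if __name__ == "__main__":
    model = SDDownscaler(scale=4)
    coarse = torch.rand(1, 1, 40, 30)   # coarse-resolution SD field (assumed grid size)
    print(model(coarse).shape)          # torch.Size([1, 1, 160, 120])
```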
Sensors (Basel)
January 2025
College of Communication Engineering, Jilin University, Changchun 130012, China.
A moving ground-target recognition system can monitor suspicious activities of pedestrians and vehicles in key areas. Currently, most target recognition systems are based on devices such as fiber optics, radar, and vibration sensors. A system based on vibration sensors has the advantages of small size, low power consumption, strong concealment, and easy installation.
Sensors (Basel)
January 2025
Department of Mechanical and Aerospace Engineering, Politecnico di Torino, 10129 Turin, Italy.
This study investigates the potential of deploying a neural network model on an advanced programmable logic controller (PLC), specifically the Finder Opta™, for real-time inference within the predictive maintenance framework. In the context of Industry 4.0, edge computing aims to process data directly on local devices rather than relying on a cloud infrastructure.
Sensors (Basel)
January 2025
Free-Space Optical Communication Technology Research Center, Harbin Institute of Technology, Harbin 150001, China.
To achieve real-time deep learning wavefront sensing (DLWFS) of dynamic random wavefront distortions induced by atmospheric turbulence, this study proposes an enhanced wavefront sensing neural network (WFSNet) based on convolutional neural networks (CNNs). We introduce a novel multi-objective neural architecture search (MNAS) method designed to attain Pareto optimality in terms of error and floating-point operations (FLOPs) for the WFSNet. Using EfficientNet-B0 prototypes, we propose a WFSNet with an enhanced neural architecture that reduces computational cost by 80% while improving wavefront sensing accuracy by 22%.
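As a small illustration of the Pareto criterion that the MNAS method optimizes, the sketch below selects, from a set of hypothetical candidate architectures scored by (wavefront error, FLOPs), those not dominated by any other candidate; the candidate names and values are invented for the example.

```python
# Pareto-front selection over (error, FLOPs); candidate names and scores are
# made up for illustration and are not results from the paper.
from typing import Dict, List, Tuple


def pareto_front(candidates: Dict[str, Tuple[float, float]]) -> List[str]:
    """Return candidates that are Pareto-optimal (lower is better on both metrics)."""
    front = []
    for name, (err, flops) in candidates.items():
        dominated = any(
            o_err <= err and o_flops <= flops and (o_err < err or o_flops < flops)
            for other, (o_err, o_flops) in candidates.items()
            if other != name
        )
        if not dominated:
            front.append(name)
    return front


if __name__ == "__main__":
    # Hypothetical WFSNet variants: (RMS wavefront error, GFLOPs).
    candidates = {
        "efficientnet_b0_baseline": (0.090, 0.39),
        "wfsnet_small":             (0.075, 0.12),
        "wfsnet_large":             (0.070, 0.30),
        "wfsnet_bad":               (0.095, 0.50),
    }
    print(pareto_front(candidates))  # ['wfsnet_small', 'wfsnet_large']
```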