Formalin fixation and paraffin embedding (FFPE) is a technique for preparing and preserving tissue specimens that has been used in histopathology since the late 19th century. Preparation steps such as fixation, processing, embedding, microtomy, staining, and coverslipping often introduce artifacts, compounded by the complex histological and cytological characteristics of the tissue specimen. Such artifacts include, but are not limited to, staining inconsistencies, tissue folds, chattering, pen marks, blurring, air bubbles, and contamination. Their presence may interfere with pathological diagnosis in disease detection, subtyping, grading, and choice of therapy. In this study, we propose FFPE++, an unpaired image-to-image translation method based on contrastive learning with a mixed channel-spatial attention module and a self-regularization loss, which drastically corrects the aforementioned artifacts in FFPE tissue sections. Turing tests were performed by 10 board-certified pathologists, each with more than 10 years of experience, on ovarian carcinoma, lung adenocarcinoma, lung squamous cell carcinoma, and papillary thyroid carcinoma; the results demonstrate the clear superiority of the proposed method over standard FFPE images in many clinical aspects. Based on the qualitative experiments and the feedback from the Turing tests, we believe that FFPE++ can contribute to substantial diagnostic and prognostic accuracy in clinical pathology and can also improve the performance of AI tools in digital pathology. The code and dataset are publicly available at https://github.com/DeepMIALab/FFPEPlus.
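The contrastive objective behind this style of unpaired translation can be illustrated with a small sketch. This is not the FFPE++ implementation; it is a generic InfoNCE-style loss over patch embeddings, assuming each row of the two feature arrays corresponds to the same spatial patch location:

```python
import numpy as np

def patch_nce_loss(feat_src, feat_gen, tau=0.07):
    """Illustrative InfoNCE loss over paired patch embeddings.

    feat_src, feat_gen: (N, D) arrays; row i of each array embeds the
    patch at the same spatial location, so the diagonal of the
    similarity matrix holds the positives and every other patch in the
    batch acts as a negative.
    """
    # L2-normalize so dot products become cosine similarities
    src = feat_src / np.linalg.norm(feat_src, axis=1, keepdims=True)
    gen = feat_gen / np.linalg.norm(feat_gen, axis=1, keepdims=True)
    logits = (src @ gen.T) / tau                # (N, N) scaled similarities
    m = logits.max(axis=1, keepdims=True)       # stabilize the log-softmax
    log_probs = logits - m - np.log(np.exp(logits - m).sum(axis=1, keepdims=True))
    # minimize the negative log-probability of the matched (diagonal) patches
    return -float(np.mean(np.diag(log_probs)))
```

Training pulls corresponding patches of the input and translated image together while pushing non-corresponding patches apart, which is what lets such methods learn without paired before/after tissue sections.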
DOI: 10.1016/j.media.2023.102992
Biol Imaging
December 2024
Visual Information Laboratory, University of Bristol, Bristol, UK.
Optical coherence tomography (OCT) and confocal microscopy are pivotal in retinal imaging, offering distinct advantages and limitations. OCT offers rapid, noninvasive imaging but can suffer from clarity issues and motion artifacts, while confocal microscopy, providing high-resolution, cellular-detailed color images, is invasive and raises ethical concerns. To bridge the benefits of both modalities, we propose a novel framework based on unsupervised 3D CycleGAN for translating unpaired OCT to confocal microscopy images.
Supervised deep-learning models have enabled super-resolution imaging in several microscopic imaging modalities, increasing the spatial lateral bandwidth of the original input images beyond the diffraction limit. Despite their success, their practical application poses several challenges in terms of the amount of training data and its quality, requiring the experimental acquisition of large, paired databases to generate an accurate generalized model whose performance remains invariant to unseen data. Cycle-consistent generative adversarial networks (cycleGANs) are unsupervised models for image-to-image translation tasks that are trained on unpaired datasets.
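The cycle-consistency idea that lets cycleGANs train on unpaired data can be sketched in a few lines. The following is an illustrative NumPy version, not any paper's implementation; the mappings `G`, `F` and the weight `lam` are placeholders:

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F, lam=10.0):
    """L1 cycle-consistency term for unpaired translators.

    G maps domain X -> Y and F maps Y -> X. With no paired ground
    truth available, the model is penalized whenever a round trip
    fails to reconstruct its input: F(G(x)) should return to x,
    and G(F(y)) should return to y.
    """
    forward = np.mean(np.abs(F(G(x)) - x))    # X -> Y -> X reconstruction error
    backward = np.mean(np.abs(G(F(y)) - y))   # Y -> X -> Y reconstruction error
    return lam * (forward + backward)
```

A perfectly invertible pair of mappings drives this term to zero; in practice it is added to the adversarial losses of both generators so that translations stay faithful to their source images.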
Med Image Anal
October 2024
Polytechnique Montreal, Montreal, QC, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada.
Deep neural networks are commonly used for automated medical image segmentation, but models frequently struggle to generalize across different imaging modalities. This issue is particularly problematic given the limited availability of annotated data in both the target and source modalities, making it difficult to deploy these models at a larger scale. To overcome these challenges, we propose a new semi-supervised training strategy called MoDATTS.
IEEE Trans Med Imaging
July 2024
Electron microscopy (EM) image denoising is critical for visualization and subsequent analysis. Despite the remarkable achievements of deep learning-based non-blind denoising methods, their performance drops significantly when domain shifts exist between the training and testing data. To address this issue, unpaired blind denoising methods have been proposed.
Bioengineering (Basel)
June 2024
Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 0631, Republic of Korea.