We sought to develop and evaluate a fast, accurate, and consistent method for general-purpose segmentation based on interactive machine learning (IML). To validate the method, we identified retrospective cohorts of 20 brain, 50 breast, and 50 lung cancer patients, as well as 20 spleen scans, with corresponding ground-truth annotations. Using very brief user training annotations and the adaptive geodesic distance transform, we train an ensemble of support vector machines (SVMs), yielding a patient-specific model that is applied to the whole image. Two experts segmented each cohort twice with our method and twice manually. The IML method was faster than manual annotation by 53.1% on average. We found a significant (p < 0.001) difference in overlap for spleen (Dice_IML/Dice_manual = 0.91/0.87), breast tumors (Dice_IML/Dice_manual = 0.84/0.82), and lung nodules (Dice_IML/Dice_manual = 0.78/0.83). For intra-rater consistency, a significant (p = 0.003) difference was found for spleen (Dice_IML/Dice_manual = 0.91/0.89). For inter-rater consistency, significant (p < 0.045) differences were found for spleen (Dice_IML/Dice_manual = 0.91/0.87), breast (Dice_IML/Dice_manual = 0.86/0.81), lung (Dice_IML/Dice_manual = 0.85/0.89), and the non-enhancing (Dice_IML/Dice_manual = 0.79/0.67) and enhancing (Dice_IML/Dice_manual = 0.79/0.84) brain tumor sub-regions; in aggregate, these results favored our method. Quantitative evaluation of speed, spatial overlap, and consistency reveals the benefits of the proposed method over manual annotation for several clinically relevant problems. We publicly release our implementation through CaPTk (Cancer Imaging Phenomics Toolkit) and as an MITK plugin.
Full text: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8494410 (PMC) | http://dx.doi.org/10.3390/app11167488 (DOI)
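As a rough illustration of the pipeline in the abstract above, the following Python sketch trains an ensemble of SVMs on per-voxel features derived from sparse user scribbles and applies the resulting patient-specific model to the whole image. It is a minimal sketch under stated assumptions, not the CaPTk implementation: scikit-learn ≥ 1.2 is assumed, a Euclidean distance transform stands in for the adaptive geodesic distance transform, and the feature set and parameters are illustrative.

```python
# Minimal sketch of an interactive-ML segmentation pipeline (not the CaPTk code).
# Assumptions: 3D grayscale image as a NumPy array; user scribbles as a label map
# (0 = unlabeled, 1 = foreground, 2 = background); Euclidean distance transform
# used as a stand-in for the adaptive geodesic distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter
from sklearn.ensemble import BaggingClassifier   # scikit-learn >= 1.2 assumed
from sklearn.svm import SVC


def voxel_features(image, scribbles):
    """Stack per-voxel features: intensity, smoothed intensity, and a
    distance-to-scribble map for each labeled class."""
    feats = [image, gaussian_filter(image, sigma=1.0)]
    for label in (1, 2):
        # Distance from every voxel to the nearest scribble of this class.
        feats.append(distance_transform_edt(scribbles != label))
    return np.stack([f.ravel() for f in feats], axis=1)


def train_and_segment(image, scribbles):
    X = voxel_features(image, scribbles)
    labeled = scribbles.ravel() > 0
    # Bagged ensemble of RBF-kernel SVMs, trained only on scribbled voxels.
    model = BaggingClassifier(estimator=SVC(kernel="rbf", C=1.0), n_estimators=5)
    model.fit(X[labeled], scribbles.ravel()[labeled])
    # Patient-specific model applied to the whole image.
    return model.predict(X).reshape(image.shape)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(size=(32, 32, 32))
    img[10:20, 10:20, 10:20] += 3.0           # bright synthetic "lesion"
    scrib = np.zeros_like(img, dtype=int)
    scrib[13:17, 13:17, 13:17] = 1            # foreground scribbles
    scrib[0:3, 0:3, 0:3] = 2                  # background scribbles
    mask = train_and_segment(img, scrib) == 1
    print("segmented voxels:", int(mask.sum()))
```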
Sci Rep
January 2025
School of Mechanical Engineering, College of Engineering, University of Tehran, Tehran, Iran.
This paper introduces a novel method for spleen segmentation in ultrasound images, using a two-phase training approach. In the first phase, the SegFormerB0 network is trained to provide an initial segmentation. In the second phase, the network is further refined using the Pix2Pix structure, which enhances attention to details and corrects any erroneous or additional segments in the output.
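A hedged sketch of the two-phase idea described above is given below, assuming PyTorch and Hugging Face `transformers`: phase 1 trains SegFormer-B0 with cross-entropy, and phase 2 adds a Pix2Pix-style PatchGAN discriminator over (image, mask) pairs. The checkpoint name "nvidia/mit-b0", the 3-channel input convention, the discriminator design, and the loss weights are illustrative assumptions, not the paper's configuration.

```python
# Two-phase training sketch: (1) SegFormer-B0 segmentation, (2) Pix2Pix-style
# adversarial refinement. Illustrative only; hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import SegformerForSemanticSegmentation

# Encoder checkpoint with a freshly initialized 2-class decode head.
seg_net = SegformerForSemanticSegmentation.from_pretrained("nvidia/mit-b0", num_labels=2)


class PatchDiscriminator(nn.Module):
    """PatchGAN: classifies local patches of (image, mask) pairs as real/fake."""
    def __init__(self, in_ch=4):  # 3 image channels (grayscale replicated) + 1 mask channel
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 1, 4, 1, 1),  # per-patch real/fake logits
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))


def seg_logits(images):
    """Run SegFormer and upsample its stride-4 logits to the input resolution."""
    out = seg_net(pixel_values=images).logits
    return F.interpolate(out, size=images.shape[-2:], mode="bilinear", align_corners=False)


def phase1_step(images, gt_masks, opt_g):
    """Phase 1: plain cross-entropy training of the segmentation network."""
    loss = F.cross_entropy(seg_logits(images), gt_masks)
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    return loss.item()


def phase2_step(images, gt_masks, disc, opt_g, opt_d, adv_weight=0.1):
    """Phase 2: refine with an adversarial term (opt_g over seg_net params,
    opt_d over disc params)."""
    logits = seg_logits(images)
    fg_prob = torch.softmax(logits, dim=1)[:, 1:2]        # soft foreground map
    real = gt_masks.unsqueeze(1).float()
    # Discriminator: real (image, GT mask) vs fake (image, predicted mask).
    pred_real = disc(images, real)
    pred_fake = disc(images, fg_prob.detach())
    d_loss = 0.5 * (F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real))
                    + F.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator (segmentation net): fool the discriminator while staying close to GT.
    pred_gen = disc(images, fg_prob)
    g_adv = F.binary_cross_entropy_with_logits(pred_gen, torch.ones_like(pred_gen))
    g_loss = F.cross_entropy(logits, gt_masks) + adv_weight * g_adv
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```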
Phys Med
January 2025
Department of Medical Physics, Faculty of Medicine, University of Crete, P.O. Box 2208, 71003 Iraklion, Crete, Greece.
Purpose: To investigate the performance of a machine learning-based segmentation method for treatment planning of gastric cancer.
Materials and Methods: Eighteen patients scheduled to receive irradiation for gastric cancer were studied. The target and the surrounding organs-at-risk (OARs) were manually delineated on CT scans.
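Performance of automatically generated target and OAR contours against a manual reference is usually summarized with geometric-agreement metrics; the sketch below computes the Dice coefficient and a 95th-percentile Hausdorff distance for binary masks. It is a generic illustration of those standard metrics, not the evaluation code of the study above.

```python
# Generic geometric-agreement metrics for comparing an automatic mask against a
# manual reference (illustrative; not this study's evaluation code).
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt


def dice(auto, ref):
    """Dice similarity coefficient of two boolean masks."""
    inter = np.logical_and(auto, ref).sum()
    return 2.0 * inter / (auto.sum() + ref.sum())


def hd95(auto, ref, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance (in mm, given voxel spacing)."""
    def surface(mask):
        return mask & ~binary_erosion(mask)

    def surface_distances(src, dst):
        # Distance from every voxel to the surface of `dst`, sampled on the surface of `src`.
        dist_to_dst = distance_transform_edt(~surface(dst), sampling=spacing)
        return dist_to_dst[surface(src)]

    d = np.concatenate([surface_distances(auto, ref), surface_distances(ref, auto)])
    return np.percentile(d, 95)


if __name__ == "__main__":
    ref = np.zeros((64, 64, 64), dtype=bool)
    ref[20:40, 20:40, 20:40] = True
    auto = np.zeros_like(ref)
    auto[22:40, 20:40, 20:40] = True  # slightly eroded prediction
    print(f"Dice = {dice(auto, ref):.3f}, HD95 = {hd95(auto, ref):.1f} mm")
```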
Phys Med Biol
January 2025
Radiology, Stanford University, 1201 Welch Rd, P270, Stanford, CA 94305-6104, United States.
Radiation dose and diagnostic image quality are opposing constraints in x-ray CT. Conventional methods do not fully account for organ-level radiation dose and noise when considering radiation risk and clinical task. In this work, we develop a pipeline to generate individualized organ-specific dose and noise at desired dose levels from clinical CT scans.
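The sketch below illustrates, in a heavily simplified form, the two organ-level ingredients mentioned above: mean organ dose from a voxel-wise dose map and organ mask, organ-level noise measurement, and a reduced-dose simulation in which noise standard deviation scales as 1/sqrt(dose). This image-domain noise injection is a common simplification (it ignores correlated and projection-domain noise) and is not the authors' pipeline.

```python
# Illustrative organ-level dose/noise helpers (a simplification, not the authors'
# pipeline). Arrays are NumPy; dose_map and image_hu are voxel-wise maps, and
# organ_mask is a boolean mask for one organ.
import numpy as np
from scipy.ndimage import binary_erosion


def organ_mean_dose(dose_map, organ_mask):
    """Mean absorbed dose (e.g., mGy) over the voxels of one organ."""
    return dose_map[organ_mask].mean()


def organ_noise(image_hu, organ_mask):
    """Noise estimate: HU standard deviation inside the eroded organ mask."""
    core = binary_erosion(organ_mask, iterations=2)  # avoid boundary voxels
    return image_hu[core].std()


def simulate_reduced_dose(image_hu, dose_fraction, sigma0, rng=None):
    """Add zero-mean Gaussian noise so total noise std becomes sigma0 / sqrt(dose_fraction).

    Assumes the original image noise std is sigma0 and that quantum-noise variance
    scales inversely with dose; correlated/projection-domain effects are ignored.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma_add = sigma0 * np.sqrt(1.0 / dose_fraction - 1.0)
    return image_hu + rng.normal(0.0, sigma_add, size=image_hu.shape)
```

For example, with sigma0 = 10 HU, simulating a half-dose scan (dose_fraction = 0.5) adds noise with a standard deviation of 10 HU, for a total of about 14 HU.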
Toxicol Pathol
December 2024
AbbVie Inc., North Chicago, Illinois, USA.
Enhanced histopathology of the immune system uses a precise, compartment-specific, and semi-quantitative evaluation of lymphoid organs in toxicology studies. The assessment of lymphocyte populations in tissues is subject to sampling variability and is limited by the few distinctive cytologic features that lymphocyte subpopulations show with hematoxylin and eosin (H&E) staining. Although immunohistochemistry is necessary for definitive characterization of T- and B-cell compartments, routine toxicologic assessments are based solely on H&E slides.
J Imaging Inform Med
December 2024
Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA.
To develop and validate a modality-invariant Swin U-Net Transformer (UNETR) deep learning model for liver and spleen segmentation on abdominal T1-weighted (T1w) or T2-weighted (T2w) MR images from multiple institutions for pediatric and adult patients with known or suspected chronic liver diseases. In this IRB-approved retrospective study, clinical abdominal axial T1w and T2w MR images from pediatric and adult patients were retrieved from four study sites, including Cincinnati Children's Hospital Medical Center (CCHMC), New York University (NYU), University of Wisconsin (UW) and University of Michigan / Michigan Medicine (UM). The whole liver and spleen were manually delineated as the ground truth masks.
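A model along these lines could be instantiated with MONAI's Swin UNETR implementation; the sketch below is a minimal, assumed setup (patch size, channel counts, feature size, loss, and learning rate are illustrative), not the study's code.

```python
# Minimal Swin UNETR setup for single-channel MR liver/spleen segmentation,
# assuming MONAI and PyTorch are installed; all hyperparameters are illustrative.
import torch
from monai.losses import DiceCELoss
from monai.networks.nets import SwinUNETR

# 3 output channels: background, liver, spleen.
# Note: `img_size` is required by older MONAI releases and deprecated/ignored in newer ones.
model = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=3, feature_size=48)
loss_fn = DiceCELoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One illustrative training step on a random 96^3 patch (batch of 1).
x = torch.randn(1, 1, 96, 96, 96)                 # T1w or T2w intensity patch
y = torch.randint(0, 3, (1, 1, 96, 96, 96))       # integer label patch
loss = loss_fn(model(x), y)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print(f"loss = {loss.item():.3f}")
```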