Segmentation of the thoracic region and breast tissues is crucial for analyzing and diagnosing breast masses. This paper introduces a medical image segmentation architecture that combines two neural networks based on the state-of-the-art nnU-Net. Additionally, this study proposes a polyvinyl alcohol cryogel (PVA-C) breast phantom, derived from the automated segmentations, to enable planning and navigation experiments for robotic breast surgery. The dataset consists of multimodal breast MRI (T2-weighted and STIR images) obtained from 10 patients. The statistical analysis of the segmentation tasks emphasizes the Dice Similarity Coefficient (DSC), segmentation accuracy, sensitivity, and specificity. We first use single-class labeling to segment the breast region and then use that segmentation as input for three-class labeling to segment fatty, fibroglandular (FGT), and tumorous tissues. The first network achieves a DSC of 0.95, while the second network achieves DSCs of 0.95, 0.83, and 0.41 for the fat, FGT, and tumor classes, respectively.

Clinical Relevance: This research is relevant to the breast surgery community, as it establishes a deep learning (DL)-based algorithmic and phantom foundation for surgical planning and navigation that will exploit preoperative multimodal MRI and intraoperative ultrasound to achieve highly cosmetic breast surgery. The planning and navigation will also guide a robot that can cut, resect, bag, and grasp a tissue mass encapsulating breast tumors and positive tissue margins. This image-guided robotic approach promises to enhance the accuracy of breast surgeons and improve patient outcomes.
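The abstract does not include implementation details, but the reported metrics follow standard definitions. The sketch below (Python with NumPy; the array names, label values, and volume shapes are hypothetical placeholders, not taken from the paper) illustrates how per-class DSC, sensitivity, and specificity could be computed for the three-class output of the second network.

```python
import numpy as np

def dice(pred, truth):
    """Dice Similarity Coefficient between two binary masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def sensitivity_specificity(pred, truth):
    """Per-class sensitivity (recall) and specificity from binary masks."""
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: evaluate a three-class prediction (fat, FGT, tumor)
# against ground-truth labels, one class at a time. Random volumes stand in
# for the actual network output and manual annotations.
labels = {1: "fat", 2: "FGT", 3: "tumor"}
prediction = np.random.randint(0, 4, size=(32, 256, 256))
ground_truth = np.random.randint(0, 4, size=(32, 256, 256))

for value, name in labels.items():
    pred_mask = prediction == value
    truth_mask = ground_truth == value
    dsc = dice(pred_mask, truth_mask)
    sens, spec = sensitivity_specificity(pred_mask, truth_mask)
    print(f"{name}: DSC={dsc:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")
```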
DOI: http://dx.doi.org/10.1109/EMBC48229.2022.9871109