Obtaining validation data and comparison metrics for segmentation of magnetic resonance images (MRI) is a difficult task due to the lack of reliable ground truth. This problem is even more evident for images presenting pathology, which can both alter tissue appearance through infiltration and cause geometric distortions. Systems for generating synthetic images with user-defined degradation by noise and intensity inhomogeneity offer the possibility of testing and comparing segmentation methods, but such systems do not yet offer simulation of sufficiently realistic-looking pathology. This paper presents a system that combines physical and statistical modeling to generate synthetic multi-modal 3D brain MRI with tumor and edema, along with the underlying anatomical ground truth. Main emphasis is placed on simulation of the major effects known for tumor MRI, such as contrast enhancement, local distortion of healthy tissue, infiltrating edema adjacent to tumors, destruction and deformation of fiber tracts, and multi-modal MRI contrast of healthy tissue and pathology. The new method synthesizes pathology in multi-modal MRI and diffusion tensor imaging (DTI) by simulating mass effect, warping and destruction of white matter fibers, and infiltration of brain tissues by tumor cells. We generate synthetic contrast-enhanced MR images by simulating the accumulation of contrast agent within the brain. The appearance of brain tissue and tumor in MRI is simulated by synthesizing texture images from real MR images. The proposed method is able to generate synthetic ground truth and synthesized MR images with tumor and edema that exhibit segmentation challenges comparable to real tumor MRI. Such image data sets will find use in segmentation reliability studies, comparison and validation of different segmentation methods, training and teaching, or even in evaluating standards for tumor size such as the RECIST criteria (response evaluation criteria in solid tumors).
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2660387 | PMC |
| http://dx.doi.org/10.1016/j.media.2008.11.002 | DOI Listing |
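As a rough illustration of the tumor-infiltration component described in the abstract above, the sketch below applies a Fisher-Kolmogorov reaction-diffusion update on a voxel grid, a formulation commonly used for modeling tumor-cell infiltration. It is not the authors' implementation; the grid size, diffusivity map, and growth rate are placeholder values.

```python
# Illustrative sketch only: one explicit Euler step of the Fisher-Kolmogorov
# equation dc/dt = div(D grad c) + rho * c * (1 - c), a common (assumed, not
# the paper's) model of tumor-cell infiltration on a voxel grid.
import numpy as np

def infiltration_step(c, D, rho, dt=0.1, dx=1.0):
    """Update the tumor-cell density c (values in [0, 1]) by one time step."""
    # 6-neighbour Laplacian; np.roll gives periodic boundaries for simplicity
    lap = (
        np.roll(c, 1, 0) + np.roll(c, -1, 0) +
        np.roll(c, 1, 1) + np.roll(c, -1, 1) +
        np.roll(c, 1, 2) + np.roll(c, -1, 2) - 6.0 * c
    ) / dx**2
    return np.clip(c + dt * (D * lap + rho * c * (1.0 - c)), 0.0, 1.0)

# toy example: seed a tumor at the centre of a 64^3 volume
c = np.zeros((64, 64, 64))
c[32, 32, 32] = 1.0
D = np.full_like(c, 0.5)   # hypothetical diffusivity (higher in white matter in practice)
for _ in range(100):
    c = infiltration_step(c, D, rho=0.05)
```

In practice the diffusivity map would be derived from the tissue segmentation (and DTI for anisotropic spread), which is what ties the infiltration model to the anatomical ground truth.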
Sci Rep
December 2024
Faculty of Dental Medicine and Oral Health Sciences, McGill University, Montreal, Canada.
Accurate diagnosis of oral lesions, early indicators of oral cancer, is a complex clinical challenge. Recent advances in deep learning have demonstrated potential in supporting clinical decisions. This paper introduces a deep learning model for classifying oral lesions, focusing on accuracy, interpretability, and reducing dataset bias.
Simpl Med Ultrasound (2024)
October 2024
Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA.
We propose a texture-invariant 2D keypoint descriptor specifically designed for matching preoperative Magnetic Resonance (MR) images with intraoperative Ultrasound (US) images. We introduce a strategy in which intraoperative US images are synthesized from MR images, accounting for multiple MR modalities and intraoperative US variability. We build our training set by enforcing keypoint localization across all images, then train a patient-specific descriptor network that learns texture-invariant discriminative features in a supervised contrastive manner, leading to robust keypoint descriptors.
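A minimal sketch of a supervised contrastive objective such a descriptor network might use is given below. It assumes paired MR/US keypoint descriptors where row i of one batch corresponds to row i of the other; the descriptor dimension and temperature are illustrative, not taken from the paper.

```python
# Minimal sketch (not the authors' code): an InfoNCE-style contrastive loss for
# matching MR/US keypoint descriptors of corresponding keypoints.
import torch
import torch.nn.functional as F

def contrastive_matching_loss(mr_desc, us_desc, temperature=0.07):
    """mr_desc, us_desc: (N, D) descriptors; row i of each batch is a matching pair."""
    mr = F.normalize(mr_desc, dim=1)
    us = F.normalize(us_desc, dim=1)
    logits = mr @ us.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(mr.size(0), device=mr.device)
    # symmetric cross-entropy: each MR keypoint should match its own US keypoint
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# toy usage with random 128-D descriptors for 32 keypoints
loss = contrastive_matching_loss(torch.randn(32, 128), torch.randn(32, 128))
```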
Heart Rhythm
December 2024
School of Biomedical Engineering and Imaging Sciences, King's College London, UK.
Background: Electrocardiographic imaging (ECGi) is a non-invasive technique for ventricular tachycardia (VT) ablation planning. However, it is limited to reconstructing epicardial surface activation. In-silico pace mapping combines a personalized computational model with clinical electrocardiograms (ECGs) to generate a virtual 3D pace map.
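Numerically, in-silico pace mapping can be summarized (assumed here, not taken from the study) as scoring each candidate pacing site by how well its simulated 12-lead ECG correlates with the clinical VT morphology. The sketch below uses placeholder array shapes and random data.

```python
# Hedged sketch of pace-map scoring: mean per-lead Pearson correlation between
# each site's simulated 12-lead ECG and the clinical VT ECG. Not the study's pipeline.
import numpy as np

def pace_map_scores(simulated_ecgs, clinical_ecg):
    """
    simulated_ecgs: (n_sites, 12, n_samples) simulated ECGs, one per pacing site.
    clinical_ecg:   (12, n_samples) clinical VT morphology.
    Returns (n_sites,) correlation scores, one per candidate site.
    """
    scores = np.empty(simulated_ecgs.shape[0])
    for i, sim in enumerate(simulated_ecgs):
        per_lead = [np.corrcoef(sim[l], clinical_ecg[l])[0, 1] for l in range(12)]
        scores[i] = np.mean(per_lead)
    return scores

# toy example: 500 candidate sites, 12 leads, 400 time samples
scores = pace_map_scores(np.random.randn(500, 12, 400), np.random.randn(12, 400))
best_site = int(np.argmax(scores))   # site whose paced ECG best matches the VT ECG
```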
Int J Biomed Imaging
December 2024
Department of Computer Science & Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education (MAHE) 576104, Manipal, Karnataka, India.
Generative models, especially diffusion models, have gained traction in image generation for their high-quality synthesis, surpassing generative adversarial networks (GANs). They have been shown to excel in anomaly detection by modeling healthy reference data and scoring deviations from it as anomalies. However, one major disadvantage of these models is their slow sampling speed, which has so far made them unsuitable for time-sensitive scenarios.
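The anomaly-scoring idea can be sketched as follows, under the assumption of a reconstruction-based scheme: partially noise the input, denoise it with a model trained only on healthy data, and treat the restoration error as the anomaly map. The `denoise` callable below is a stand-in for a trained diffusion model, not an actual one.

```python
# Conceptual sketch of diffusion-based anomaly scoring (not the paper's model).
# A healthy-data model pulls the noised image back toward "healthy"; voxels that
# change a lot are flagged as anomalous.
import numpy as np

def anomaly_map(image, denoise, noise_level=0.3, seed=0):
    rng = np.random.default_rng(seed)
    # forward noising at a fixed intermediate level (DDPM-style convention)
    noisy = np.sqrt(1 - noise_level**2) * image + noise_level * rng.standard_normal(image.shape)
    restored = denoise(noisy)              # stand-in for the trained diffusion model
    return np.abs(image - restored)        # large error = likely anomalous region

# toy usage: an identity "denoiser" just to show the data flow
scores = anomaly_map(np.random.rand(128, 128), denoise=lambda x: x)
```

The sampling-speed concern mentioned above comes from the many denoising steps a full reverse process needs; the single-call `denoise` here glosses over that.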
Front Neurol
December 2024
CLAIM - Charité Lab for AI in Medicine, Charité Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin, Germany.
Introduction: Radiological scores used to assess the extent of subarachnoid hemorrhage are limited by intrarater and interrater variability and do not utilize all available imaging information. Image segmentation enables precise identification and delineation of objects or regions of interest and offers the potential to automate score assessment using precise volumetric information. Our study aims to develop a deep learning model that enables automated multiclass segmentation of structures and pathologies relevant for aneurysmal subarachnoid hemorrhage outcome prediction.
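Once such a multiclass segmentation exists, the volumetric information behind score automation reduces to voxel counts scaled by voxel volume. The sketch below is an assumed illustration; the label values, class names, and voxel spacing are hypothetical.

```python
# Minimal sketch (assumed, not from the study): per-class volumes in millilitres
# from an integer label map and the scan's voxel spacing.
import numpy as np

def class_volumes_ml(label_map, voxel_spacing_mm, class_names):
    """label_map: integer array of class labels; voxel_spacing_mm: (dz, dy, dx)."""
    voxel_ml = np.prod(voxel_spacing_mm) / 1000.0   # mm^3 -> millilitres
    return {name: float(np.sum(label_map == idx) * voxel_ml)
            for idx, name in class_names.items()}

# toy usage: 0 = background, 1 = subarachnoid blood, 2 = ventricles (hypothetical labels)
labels = np.random.randint(0, 3, size=(32, 256, 256))
volumes = class_volumes_ml(labels, voxel_spacing_mm=(5.0, 0.45, 0.45),
                           class_names={1: "SAH blood", 2: "ventricles"})
```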