Objective: Effective image segmentation of cerebral structures is fundamental to 3-dimensional techniques such as augmented reality. To be clinically viable, segmentation algorithms should be fully automatic and easily integrated into existing digital infrastructure. We created a fully automatic, adaptive-meshing-based segmentation system for T1-weighted magnetic resonance imaging (MRI) scans that segments the complete ventricular system, runs in a cloud-based environment, and can be accessed on an augmented reality device. This study aims to assess the accuracy and segmentation time of the system by comparing it with a manually segmented ground truth dataset.
Methods: A ground truth (GT) dataset of 46 contrast-enhanced and non-contrast-enhanced T1-weighted MRI scans was manually segmented. These scans were also uploaded to our system to create a machine-segmented (MS) dataset. The GT data were compared with the MS data using the Sørensen-Dice similarity coefficient and the 95% Hausdorff distance to determine segmentation accuracy. Furthermore, segmentation times were measured for all GT and MS segmentations.
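For reference, the sketch below shows one common way to compute these two overlap metrics for a pair of binary masks; it is not the authors' evaluation code, and the array names and default 1 mm isotropic voxel spacing are illustrative assumptions.

```python
# Minimal sketch of Dice and 95% Hausdorff distance for binary 3D masks.
# Assumptions (not from the study): boolean GT/MS arrays, spacing given in mm.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_coefficient(gt: np.ndarray, ms: np.ndarray) -> float:
    """Sørensen-Dice: 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    gt, ms = gt.astype(bool), ms.astype(bool)
    overlap = np.logical_and(gt, ms).sum()
    return 2.0 * overlap / (gt.sum() + ms.sum())

def hausdorff_95(gt: np.ndarray, ms: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric surface distance (HD95), in mm."""
    gt, ms = gt.astype(bool), ms.astype(bool)
    gt_surf = gt & ~binary_erosion(gt)      # boundary voxels of the GT mask
    ms_surf = ms & ~binary_erosion(ms)      # boundary voxels of the MS mask
    # Euclidean distance (mm) from every voxel to the nearest surface voxel
    dist_to_gt = distance_transform_edt(~gt_surf, sampling=spacing)
    dist_to_ms = distance_transform_edt(~ms_surf, sampling=spacing)
    d = np.concatenate([dist_to_gt[ms_surf], dist_to_ms[gt_surf]])
    return float(np.percentile(d, 95))
```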
Results: Automatic segmentation was successful for 45 (98%) of 46 cases. The mean Sørensen-Dice similarity coefficient was 0.83 (standard deviation [SD] = 0.08) and the mean 95% Hausdorff distance was 19.06 mm (SD = 11.20). Segmentation time was significantly longer for the GT group (mean = 14,405 seconds, SD = 7,089) than for the MS group (mean = 1,275 seconds, SD = 714), with a mean difference of 13,130 seconds (95% confidence interval 10,130-16,130).
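A brief illustration of the paired comparison reported here (mean difference with a 95% confidence interval) is sketched below using SciPy; this is a generic statistical sketch under the assumption of a paired t-based interval, not the authors' analysis code, and the example inputs are placeholders rather than study data.

```python
# Generic paired comparison of per-case segmentation times (seconds).
import numpy as np
from scipy import stats

def compare_segmentation_times(gt_seconds, ms_seconds, confidence=0.95):
    """Mean GT-minus-MS difference, its confidence interval, and paired t-test p-value."""
    diff = np.asarray(gt_seconds, dtype=float) - np.asarray(ms_seconds, dtype=float)
    mean_diff = diff.mean()
    ci = stats.t.interval(confidence, df=diff.size - 1,
                          loc=mean_diff, scale=stats.sem(diff))
    p_value = stats.ttest_rel(gt_seconds, ms_seconds).pvalue
    return mean_diff, ci, p_value

# Hypothetical usage with placeholder timings, one (GT, MS) pair per case:
# mean_diff, (lo, hi), p = compare_segmentation_times([14800, 13900, 15200],
#                                                     [1300, 1100, 1500])
```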
Conclusions: The described adaptive-meshing-based segmentation algorithm provides accurate and time-efficient automatic segmentation of the ventricular system from T1-weighted MRI scans and direct visualization of the rendered surface models in augmented reality.
DOI: http://dx.doi.org/10.1016/j.wneu.2021.07.099
Eur Radiol Exp
January 2025
Division of Cardiothoracic Imaging, Department of Radiology and Imaging Sciences, Emory University Hospital, Atlanta, GA, USA.
Background: This retrospective study aims to evaluate the impact of a content-based image retrieval (CBIR) application on diagnostic accuracy and confidence in interstitial lung disease (ILD) assessment using high-resolution computed tomography (HRCT).
Methods: Twenty-eight patients with verified pattern-based ILD diagnoses were split into two equal datasets (1 and 2). The images were assessed by two radiology residents (3rd and 5th year) and one expert radiologist in four sessions.
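As background on the CBIR concept named above, the sketch below ranks library cases by cosine similarity of feature vectors; the abstract does not describe the application's internals, so the feature representation, function name, and parameters here are purely hypothetical.

```python
# Hypothetical CBIR lookup: rank a case library by cosine similarity to a query.
import numpy as np

def retrieve_similar_cases(query_features: np.ndarray,
                           library_features: np.ndarray,
                           top_k: int = 5) -> np.ndarray:
    """Indices of the top-k library cases ranked by cosine similarity."""
    q = query_features / np.linalg.norm(query_features)
    lib = library_features / np.linalg.norm(library_features, axis=1, keepdims=True)
    similarity = lib @ q              # one cosine score per library case
    return np.argsort(similarity)[::-1][:top_k]
```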
Int J Dermatol
January 2025
Department of Dermatology, Yale University School of Medicine, New Haven, Connecticut, USA.
Comput Struct Biotechnol J
December 2024
Centre for Mobile Innovation (CMI), Sheridan College, Oakville, Ontario, Canada.
In this paper, we introduce -a Mixed Reality (MR) system designed for healthcare professionals to monitor patients in wards or clinics. We detail the design, development, and evaluation of , which integrates real-time vital signs from a biosensor-equipped wearable, . The system generates holographic visualizations, allowing healthcare professionals to interact with medical charts and information panels holographically.
Cureus
January 2025
Edinburgh Medical School, The University of Edinburgh, Edinburgh, GBR.
Over the past few decades, technological advancements have established digital tools as an indispensable pedagogical resource in the realm of modern education. In the field of medical education, there is growing interest in how these digital tools can be effectively integrated to enhance undergraduate surgical education. However, despite their well-documented potential benefits, research specifically investigating the current use of digital technology in undergraduate surgical education remains limited, highlighting a critical gap in the existing literature.
Adv Mater
January 2025
Division of Materials Science and Engineering, Hanyang University, Seoul, 04763, Republic of Korea.
The evolution of display technologies is rapidly transitioning from traditional screens to advanced augmented reality (AR)/virtual reality (VR) and wearable devices, where quantum dots (QDs) serve as crucial pure-color emitters. While solution processing efficiently forms QD solids, challenges emerge in subsequent stages, such as layer deposition, etching, and solvent immersion. These issues become especially pronounced when developing diverse form factors, necessitating innovative patterning methods that are both reversible and sustainable.