Background and Objectives: Evaluation of bowel cleanliness in capsule endoscopy (CE) still depends primarily on subjective methods. To assess bowel cleanliness objectively, we focused on artificial intelligence (AI)-based assessment. We aimed to generate a large segmentation dataset from CE images and verify its quality using a convolutional neural network (CNN)-based algorithm.

Materials and Methods: Images were extracted from CE videos and divided into 10 stages according to the extent of the clean region. Each image was annotated into three classes (clean, dark, and floats/bubbles) or two classes (clean and non-clean). Using this semantic segmentation dataset, a CNN was trained on 169 videos, and a formula for the clean region (visualization scale, VS) was developed. Mean intersection over union (mIoU), Dice index, and clean mucosal predictions were then measured. VS performance was tested on 10 videos.

Results: A semantic segmentation dataset of 10,033 frames was constructed from 179 patients. Testing performance of the 3-class segmentation was 0.7716 mIoU (range: 0.7031-0.8071) with a Dice index of 0.8627 (range: 0.7846-0.8891); for the 2-class segmentation it was 0.8927 mIoU (range: 0.8562-0.9330) with a Dice index of 0.9457 (range: 0.9225-0.9654). In addition, the clean mucosal prediction accuracy was 94.4% for the 3-class and 95.7% for the 2-class setting. The VS predictions for both the 3-class and 2-class segmentation were almost identical to the ground truth.

Conclusions: We established a semantic segmentation dataset spanning 10 stages uniformly sampled from 179 patients. The prediction accuracy for clean mucosa was high (above 94%), and our VS equation can approximately measure the area of clean mucosa. These results confirm that our dataset is suitable for an accurate, quantitative AI-based assessment of bowel cleanliness.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8954405 | PMC
http://dx.doi.org/10.3390/medicina58030397 | DOI Listing
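The abstract above reports segmentation quality as mIoU and Dice index and summarizes cleanliness with a visualization scale (VS). The exact VS equation is not given in this excerpt, so the sketch below treats VS as the fraction of pixels predicted as clean mucosa, which is an assumption; the metric code itself is a standard minimal implementation for integer label maps.

```python
import numpy as np

def miou_and_dice(pred: np.ndarray, gt: np.ndarray, num_classes: int):
    """Mean IoU and mean Dice over the classes present in either mask.

    pred, gt: integer label maps of identical shape (e.g. 0=clean, 1=dark,
    2=floats/bubbles in a 3-class setting).
    """
    ious, dices = [], []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:                       # class absent in both masks: ignore
            continue
        inter = np.logical_and(p, g).sum()
        ious.append(inter / union)
        dices.append(2.0 * inter / (p.sum() + g.sum()))
    return float(np.mean(ious)), float(np.mean(dices))

def visualization_scale(pred: np.ndarray, clean_class: int = 0) -> float:
    """Hypothetical VS: proportion of pixels predicted as clean mucosa."""
    return float((pred == clean_class).mean())

# Toy usage with two synthetic 3-class label maps
rng = np.random.default_rng(0)
gt = rng.integers(0, 3, size=(320, 320))
pred = np.where(rng.random((320, 320)) < 0.9, gt, (gt + 1) % 3)
miou, dice = miou_and_dice(pred, gt, num_classes=3)
print(f"mIoU={miou:.4f}  Dice={dice:.4f}  VS={visualization_scale(pred):.4f}")
```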
Healthc Technol Lett
December 2024
Robotics and Control Laboratory, Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, Canada.
The Segment Anything Model (SAM) is a powerful vision foundation model that is revolutionizing the traditional paradigm of segmentation. Despite this, reliance on prompting every frame and a large computational cost limit its use in robotically assisted surgery. Applications such as augmented reality guidance require little user intervention and efficient inference to be clinically usable.
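The per-frame cost described above is visible in how the public segment-anything package is typically used: both the image encoding and the prompt must be repeated for every frame. The sketch below is only an illustration of that bottleneck, not the authors' method; the checkpoint path, model size, and prompt coordinates are placeholders.

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Placeholder checkpoint path; any official SAM checkpoint works here.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

def segment_video(frames, point_xy=(320, 240)):
    """Segment each RGB frame from a single point prompt.

    The heavy image-encoder pass (set_image) and the prompt are repeated
    for every frame, which is the computational bottleneck noted above.
    """
    point = np.array([point_xy], dtype=np.float32)
    label = np.array([1])                    # 1 marks a foreground click
    results = []
    for frame in frames:                     # frame: HxWx3 uint8 RGB array
        predictor.set_image(frame)           # ViT encoding, once per frame
        masks, scores, _ = predictor.predict(
            point_coords=point, point_labels=label, multimask_output=False)
        results.append(masks[0])             # boolean HxW mask
    return results
```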
Sci Rep
January 2025
School of Computer Science, Hunan First Normal University, Changsha, 410205, China.
Retinal blood vessels are the only blood vessels in the human body that can be observed non-invasively. Changes in vessel morphology are closely associated with hypertension, diabetes, cardiovascular disease and other systemic diseases, and computers can help doctors identify these changes by automatically segmenting blood vessels in fundus images. If we train a highly accurate segmentation model on one dataset (source domain) and apply it to another dataset (target domain) with a different data distribution, the segmentation accuracy will drop sharply, which is called the domain shift problem.
Sci Data
January 2025
Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK.
Pituitary neuroendocrine tumors are among the most common intracranial tumors. While radiomic research related to pituitary tumors is progressing, public datasets for external validation remain scarce. We introduce an open dataset comprising high-resolution T1 contrast-enhanced MR scans of 136 patients with pituitary tumors, annotated for tumor segmentation and accompanied by clinical, radiological and pathological metadata.
Urology
January 2025
Department of Urology, Renmin Hospital of Wuhan University, Wuhan, Hubei, 430060, China; Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, Hubei, 430060, China.
Objectives: To explore new metrics for assessing radical prostatectomy difficulty through a two-stage deep learning method from preoperative magnetic resonance imaging.
Methods: The procedure and metrics were validated on 290 patients who underwent laparoscopic or robot-assisted radical prostatectomy in two real-world cohorts. The nnUNet_v2 adaptive model was trained to perform accurate segmentation of the prostate and pelvis.
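The two-stage idea, segment first and then derive geometric measurements, can be sketched independently of the specific difficulty metrics, which are not spelled out in this excerpt. The example below computes two illustrative quantities (structure volume and a prostate-to-pelvis width ratio) from binary masks plus voxel spacing; the metric definitions and mask names are assumptions, not the study's actual metrics.

```python
import numpy as np

def structure_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary 3-D mask in millilitres, given voxel spacing in mm."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

def max_axial_width_mm(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Largest left-right extent of the mask across axial slices (z, y, x order)."""
    widths = []
    for z in range(mask.shape[0]):
        xs = np.where(mask[z].any(axis=0))[0]
        if xs.size:
            widths.append((xs.max() - xs.min() + 1) * spacing_mm[2])
    return max(widths) if widths else 0.0

def width_ratio(prostate_mask, pelvis_mask, spacing_mm):
    """Illustrative ratio of the two segmented structures; not the paper's metric."""
    pw = max_axial_width_mm(prostate_mask, spacing_mm)
    bw = max_axial_width_mm(pelvis_mask, spacing_mm)
    return pw / bw if bw else float("nan")
```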
Environ Pollut
January 2025
Institute for Risk Assessment Sciences, Utrecht University, Utrecht, The Netherlands.
Mobile air pollution measurements are typically aggregated by varying road segment lengths, grid cell sizes, and time intervals. How these spatiotemporal aggregation schemas affect the modeling performance of land use regression models has seldom been assessed. We used 5.
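The aggregation step described above can be prototyped directly: mobile measurements are grouped to a chosen spatial unit, and a land use regression model is then fit on the aggregated means. The column names, the 50 m segment length, and the two predictors below are assumptions for illustration; the study's actual aggregation schemas and predictors are not given in this excerpt.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical mobile-monitoring records: one row per second of driving.
df = pd.DataFrame({
    "road_id":   [1, 1, 1, 2, 2, 3, 3, 3],
    "dist_m":    [5, 20, 45, 10, 60, 15, 35, 80],   # distance along the road
    "ufp":       [12e3, 15e3, 13e3, 30e3, 28e3, 9e3, 11e3, 10e3],  # pollutant
    "traffic":   [800, 800, 800, 4500, 4500, 300, 300, 300],
    "built_pct": [40, 40, 40, 75, 75, 20, 20, 20],
})

# Aggregate to 50 m road segments (one possible aggregation schema).
seg_len = 50
df["segment"] = df["road_id"].astype(str) + "_" + (df["dist_m"] // seg_len).astype(str)
agg = df.groupby("segment").agg(
    ufp=("ufp", "mean"), traffic=("traffic", "mean"), built_pct=("built_pct", "mean")
)

# Simple land use regression fit on the aggregated concentrations.
lur = LinearRegression().fit(agg[["traffic", "built_pct"]], agg["ufp"])
print(dict(zip(["traffic", "built_pct"], lur.coef_)), lur.intercept_)
```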