Understanding and preserving deep-sea ecosystems is paramount for marine conservation efforts. Automated classification of deep-sea biota can enable the creation of detailed habitat maps that not only aid biodiversity assessments but also provide essential data for evaluating ecosystem health and resilience. A sizeable source of labelled data helps prevent overfitting and makes it feasible to train deep learning models with large numbers of parameters. In this paper, we contribute a substantial deep-sea remotely operated vehicle (ROV) image classification dataset of 3994 images of deep-sea biota spanning 33 classes. The images were labelled manually under rigorous, human-in-the-loop quality control. Leveraging data collected by an ROV equipped with advanced imaging systems, we benchmark the dataset, which exhibits class imbalance across its many classes, using modern deep learning models: ResNet, DenseNet, Inception, and Inception-ResNet. Our results show that the Inception-ResNet model achieves a mean classification accuracy of 65%, with AUC scores exceeding 0.8 for every class.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11372175 | PMC |
| http://dx.doi.org/10.1038/s41597-024-03766-3 | DOI Listing |
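As a rough illustration of the kind of benchmarking setup described above, the sketch below fine-tunes an ImageNet-pretrained Inception-ResNet-v2 with Keras on an imbalanced multi-class image folder. The directory path, image size, and training hyperparameters are assumptions for illustration, not the authors' exact configuration.

```python
# Hypothetical sketch: fine-tuning Inception-ResNet-v2 on an imbalanced
# 33-class image folder (paths and hyperparameters are illustrative only).
import numpy as np
import tensorflow as tf

NUM_CLASSES = 33
IMG_SIZE = (299, 299)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "deep_sea_biota/train",          # hypothetical directory layout
    image_size=IMG_SIZE,
    batch_size=32,
    label_mode="categorical",
)

# Class weights counteract the class imbalance mentioned in the abstract.
labels = np.concatenate([np.argmax(y, axis=1) for _, y in train_ds])
counts = np.bincount(labels, minlength=NUM_CLASSES)
class_weight = {i: len(labels) / (NUM_CLASSES * max(c, 1)) for i, c in enumerate(counts)}

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False  # warm-up phase: train only the new classification head

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_resnet_v2.preprocess_input(inputs)
x = base(x, training=False)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="categorical_crossentropy",
    metrics=["accuracy",
             tf.keras.metrics.AUC(multi_label=True, num_labels=NUM_CLASSES)],
)
model.fit(train_ds, epochs=10, class_weight=class_weight)
```

Class weights inversely proportional to class frequency are one simple way to keep rare biota classes from being drowned out during training on an imbalanced dataset.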
Int J Comput Assist Radiol Surg
January 2025
Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany.
Purpose: Semantic segmentation and landmark detection are fundamental tasks of medical image processing, facilitating further analysis of anatomical objects. Although deep learning-based pixel-wise classification has set a new state of the art for segmentation, it falls short in landmark detection, a strength of shape-based approaches.
Methods: In this work, we propose a dense image-to-shape representation that enables the joint learning of landmarks and semantic segmentation by employing a fully convolutional architecture.
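The abstract does not detail the architecture, but a minimal sketch of the general idea, one fully convolutional trunk with a segmentation head and a dense per-pixel shape head, might look as follows. The input size, number of classes, and the offset-field encoding of landmarks are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch (not the paper's exact architecture): a small fully
# convolutional network with two dense prediction heads, one producing
# per-pixel segmentation probabilities and one producing a per-pixel 2D
# offset field pointing to the nearest landmark (a simple image-to-shape encoding).
import tensorflow as tf

def conv_block(x, filters):
    x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

inputs = tf.keras.Input(shape=(256, 256, 1))   # assumed grayscale slice size
x = conv_block(inputs, 32)
skip = x
x = tf.keras.layers.MaxPooling2D()(x)
x = conv_block(x, 64)
x = tf.keras.layers.UpSampling2D()(x)
x = tf.keras.layers.Concatenate()([x, skip])
x = conv_block(x, 32)

# Head 1: semantic segmentation (5 anatomical classes assumed).
seg = tf.keras.layers.Conv2D(5, 1, activation="softmax", name="segmentation")(x)
# Head 2: dense shape representation, here a 2D offset vector per pixel.
offsets = tf.keras.layers.Conv2D(2, 1, name="landmark_offsets")(x)

model = tf.keras.Model(inputs, [seg, offsets])
model.compile(optimizer="adam",
              loss={"segmentation": "sparse_categorical_crossentropy",
                    "landmark_offsets": "mae"})
```

Training both heads on a shared trunk is what allows the segmentation and landmark tasks to be learned jointly from the same dense features.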
Neurosurg Rev
January 2025
Department of Neurosurgery, Mount Sinai Hospital, Icahn School of Medicine, New York City, NY, USA.
Currently, the World Health Organization (WHO) grade of meningiomas is determined from biopsy results. Therefore, accurate non-invasive preoperative grading could significantly improve treatment planning and patient outcomes. Considering recent advances in machine learning (ML) and deep learning (DL), this meta-analysis aimed to evaluate the performance of ML and DL models in predicting the WHO meningioma grade from imaging data.
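As a hedged illustration of the pooling step such a meta-analysis typically involves, the snippet below combines per-study AUC estimates with a DerSimonian-Laird random-effects model; the AUC values and standard errors are invented placeholders, not results from the study.

```python
# Hypothetical sketch of random-effects pooling of per-study AUC estimates
# (DerSimonian-Laird). All numbers below are made up for illustration.
import numpy as np

auc = np.array([0.87, 0.91, 0.83, 0.89])   # per-study AUC (illustrative)
se = np.array([0.03, 0.02, 0.04, 0.03])    # per-study standard errors (illustrative)

w_fixed = 1.0 / se**2                       # inverse-variance weights
mu_fixed = np.sum(w_fixed * auc) / np.sum(w_fixed)

# Between-study heterogeneity (DerSimonian-Laird tau^2 estimate).
q = np.sum(w_fixed * (auc - mu_fixed) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(auc) - 1)) / c)

w_random = 1.0 / (se**2 + tau2)
mu_random = np.sum(w_random * auc) / np.sum(w_random)
se_random = np.sqrt(1.0 / np.sum(w_random))

print(f"pooled AUC = {mu_random:.3f} "
      f"(95% CI {mu_random - 1.96 * se_random:.3f} to {mu_random + 1.96 * se_random:.3f})")
```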
J Imaging Inform Med
January 2025
Department of Anesthesiology, E-Da Cancer Hospital, I-Shou University, Kaohsiung, Taiwan.
Parkinson's disease (PD), a degenerative disorder of the central nervous system, is commonly diagnosed using functional medical imaging techniques such as single-photon emission computed tomography (SPECT). In this study, we utilized two SPECT data sets (n = 634 and n = 202) from different hospitals to develop a model capable of accurately predicting PD stages, a multiclass classification task. We used whole three-dimensional (3D) brain images as input and experimented with various model architectures.
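The abstract does not specify the architectures tried; as one plausible baseline for whole-volume input, the sketch below defines a small 3D CNN in Keras for multiclass stage prediction. The volume shape and the number of stages are assumptions, not values from the study.

```python
# Illustrative sketch (not the authors' model): a small 3D CNN that takes a
# whole SPECT volume as input and predicts one of several PD stages.
import tensorflow as tf

NUM_STAGES = 5                                     # assumed number of PD stages
inputs = tf.keras.Input(shape=(64, 128, 128, 1))   # assumed (depth, height, width, channel)

x = inputs
for filters in (16, 32, 64):
    x = tf.keras.layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.MaxPooling3D(pool_size=2)(x)

x = tf.keras.layers.GlobalAveragePooling3D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(NUM_STAGES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Global average pooling over the 3D feature maps keeps the parameter count modest, which matters when the training sets are in the hundreds of volumes rather than thousands.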
J Imaging Inform Med
January 2025
Computer Science Department, University of Geneva, Geneva, Switzerland.
Accurate wound segmentation is crucial for the precise diagnosis and treatment of various skin conditions through image analysis. In this paper, we introduce a novel dual attention U-Net model designed for precise wound segmentation. Our proposed architecture integrates two widely used deep learning models, VGG16 and U-Net, incorporating dual attention mechanisms to focus on relevant regions within the wound area.
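A rough sketch of how a VGG16 encoder, a U-Net-style decoder, and attention gates on the skip connections can be combined is given below. It illustrates the general technique only, not the paper's exact dual attention architecture; the image size, layer choices, and filter counts are assumptions.

```python
# Illustrative sketch: VGG16 encoder feeding a U-Net-style decoder whose skip
# connections pass through additive attention gates, so decoding focuses on
# wound-relevant regions. Not the paper's exact model.
import tensorflow as tf

def attention_gate(skip, gating, filters):
    """Additive attention gate: weight encoder features by decoder context."""
    theta = tf.keras.layers.Conv2D(filters, 1)(skip)
    phi = tf.keras.layers.Conv2D(filters, 1)(gating)
    alpha = tf.keras.layers.Activation("relu")(tf.keras.layers.Add()([theta, phi]))
    alpha = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(alpha)
    return tf.keras.layers.Multiply()([skip, alpha])   # broadcast single-channel map

inputs = tf.keras.Input(shape=(224, 224, 3))
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                  input_tensor=inputs)
s1 = vgg.get_layer("block3_conv3").output      # 56x56 skip features
s2 = vgg.get_layer("block4_conv3").output      # 28x28 skip features
bottom = vgg.get_layer("block5_conv3").output  # 14x14 bottleneck features

x = tf.keras.layers.UpSampling2D()(bottom)                          # -> 28x28
x = tf.keras.layers.Concatenate()([x, attention_gate(s2, x, 128)])
x = tf.keras.layers.Conv2D(256, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.UpSampling2D()(x)                               # -> 56x56
x = tf.keras.layers.Concatenate()([x, attention_gate(s1, x, 64)])
x = tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.UpSampling2D(size=4)(x)                         # -> 224x224
mask = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)        # binary wound mask

model = tf.keras.Model(inputs, mask)
model.compile(optimizer="adam", loss="binary_crossentropy")
```

Gating each skip connection with a learned, single-channel attention map is one common way to suppress background texture before the encoder features are concatenated into the decoder.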