DeepDive: Leveraging Pre-trained Deep Learning for Deep-Sea ROV Biota Identification in the Great Barrier Reef.

Sci Data

ITTC ARC Centre for Data Analytics for Resources and Environment, Biomedical Building, University of Sydney, New South Wales, Australia.

Published: September 2024

Understanding and preserving deep-sea ecosystems is paramount for marine conservation efforts. Automated classification of deep-sea biota can enable the creation of detailed habitat maps that not only aid biodiversity assessments but also provide essential data for evaluating ecosystem health and resilience. A substantial source of labelled data helps prevent overfitting and enables the training of deep learning models with large numbers of parameters. In this paper, we contribute a significant deep-sea remotely operated vehicle (ROV) image classification dataset of 3994 images featuring deep-sea biota belonging to 33 classes. We manually label the images through rigorous quality control with human-in-the-loop image labelling. Leveraging data from an ROV equipped with advanced imaging systems, we benchmark the dataset, which features class imbalance across its many classes, using deep learning models including ResNet, DenseNet, Inception, and Inception-ResNet. Our results show that the Inception-ResNet model achieves a mean classification accuracy of 65%, with AUC scores exceeding 0.8 for each class.
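The paper's training code is not reproduced here, but the pipeline the abstract describes (fine-tuning an ImageNet pre-trained backbone with a 33-class head on an imbalanced dataset) can be sketched as follows. This is a minimal PyTorch illustration using a ResNet-50 backbone, one of the benchmarked model families; the directory layout, hyperparameters, and inverse-frequency class weighting are assumptions, not the authors' exact recipe.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 33  # biota classes reported in the abstract
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Standard ImageNet preprocessing for a pre-trained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical path; ImageFolder expects one sub-directory per biota class.
train_set = datasets.ImageFolder("rov_images/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Transfer learning: load ImageNet weights, replace the classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model = model.to(DEVICE)

# Inverse-frequency class weights to counter the dataset's class imbalance
# (one plausible mitigation; the paper's exact strategy is not stated here).
counts = torch.bincount(torch.tensor(train_set.targets), minlength=NUM_CLASSES)
weights = (counts.sum() / counts.clamp(min=1).float()).to(DEVICE)
criterion = nn.CrossEntropyLoss(weight=weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):
    for images, labels in loader:
        images, labels = images.to(DEVICE), labels.to(DEVICE)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Per-class AUC scores of the kind reported can then be computed from held-out softmax outputs, for example with sklearn.metrics.roc_auc_score in one-vs-rest mode.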


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11372175
DOI: http://dx.doi.org/10.1038/s41597-024-03766-3

Publication Analysis

Top Keywords

deep learning (12)
deep-sea biota (8)
learning models (8)
image classification (8)
deepdive leveraging (4)
leveraging pre-trained (4)
deep (4)
pre-trained deep (4)
deep-sea (4)
learning deep-sea (4)

Similar Publications

Purpose: Semantic segmentation and landmark detection are fundamental tasks in medical image processing, facilitating further analysis of anatomical objects. Although deep learning-based pixel-wise classification has set a new state of the art for segmentation, it falls short in landmark detection, a strength of shape-based approaches.

Methods: In this work, we propose a dense image-to-shape representation that enables the joint learning of landmarks and semantic segmentation by employing a fully convolutional architecture.
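As a generic sketch of this joint-learning idea, a shared fully convolutional trunk can feed both a pixel-wise segmentation head and a per-landmark heatmap head; the layer sizes and head counts below are arbitrary placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

class JointSegLandmarkNet(nn.Module):
    """Illustrative two-headed FCN: shared features feed both a
    pixel-wise segmentation head and a landmark-heatmap head."""

    def __init__(self, num_seg_classes=4, num_landmarks=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_seg_classes, 1)  # per-pixel class logits
        self.lmk_head = nn.Conv2d(64, num_landmarks, 1)    # one heatmap per landmark

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.lmk_head(feats)

net = JointSegLandmarkNet()
seg_logits, heatmaps = net(torch.randn(1, 1, 128, 128))
print(seg_logits.shape, heatmaps.shape)  # (1, 4, 128, 128) (1, 8, 128, 128)
```

Training would sum a segmentation loss (e.g. cross-entropy) and a heatmap regression loss, so both tasks shape the shared features.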


Currently, the World Health Organization (WHO) grade of meningiomas is determined from biopsy results. Accurate non-invasive preoperative grading could therefore significantly improve treatment planning and patient outcomes. Given recent advances in machine learning (ML) and deep learning (DL), this meta-analysis evaluated the performance of such models in predicting WHO meningioma grade from imaging data.


Parkinson's disease (PD), a degenerative disorder of the central nervous system, is commonly diagnosed using functional medical imaging techniques such as single-photon emission computed tomography (SPECT). In this study, we used two SPECT data sets (n = 634 and n = 202) from different hospitals to develop a model capable of accurately predicting PD stage, a multiclass classification task. We used entire three-dimensional (3D) brain images as input and experimented with various model architectures.
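A minimal sketch of such a whole-volume 3D classifier is shown below; the volume size, channel widths, and number of PD stages are assumptions for illustration, not the study's actual architecture.

```python
import torch
import torch.nn as nn

class Spect3DClassifier(nn.Module):
    """Illustrative 3D CNN that maps a whole brain volume to stage logits."""

    def __init__(self, num_stages=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),  # global pooling makes input size flexible
        )
        self.classifier = nn.Linear(32, num_stages)

    def forward(self, volume):                 # volume: (N, 1, D, H, W)
        x = self.features(volume).flatten(1)
        return self.classifier(x)

model = Spect3DClassifier()
logits = model(torch.randn(2, 1, 64, 64, 64))  # two hypothetical 64^3 volumes
print(logits.shape)  # (2, 5)
```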


Accurate wound segmentation is crucial for the precise diagnosis and treatment of various skin conditions through image analysis. In this paper, we introduce a novel dual attention U-Net model designed for precise wound segmentation. Our proposed architecture integrates two widely used deep learning models, VGG16 and U-Net, incorporating dual attention mechanisms to focus on relevant regions within the wound area.
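As an illustrative sketch of an attention mechanism on a U-Net skip connection (in the general style of attention gates, not the paper's specific dual-attention module), consider:

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Generic attention gate: a decoder gating signal re-weights encoder
    skip features so irrelevant regions are suppressed before concatenation."""

    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, 1)  # project skip features
        self.phi = nn.Conv2d(gate_ch, inter_ch, 1)    # project gating signal
        self.psi = nn.Conv2d(inter_ch, 1, 1)          # scalar attention map

    def forward(self, skip, gate):
        # gate is assumed already upsampled to the skip resolution
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn  # attenuate features outside the attended region

gate = AttentionGate(skip_ch=64, gate_ch=128, inter_ch=32)
out = gate(torch.randn(1, 64, 56, 56), torch.randn(1, 128, 56, 56))
print(out.shape)  # (1, 64, 56, 56)
```

The learned spatial mask lets the decoder emphasize the wound region while down-weighting surrounding skin, which is the intuition behind attention-augmented U-Net variants.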

