Aim: In this paper we propose different convolutional neural network (CNN) architectures to classify fatty liver disease in images using only pixels and diagnosis labels as input. We trained and validated our models on a dataset of 629 images comprising two types of liver images: normal liver and liver steatosis.

Material And Methods: We assessed two pre-trained convolutional neural network models, Inception-v3 and VGG-16, using fine-tuning. Both models were pre-trained on the ImageNet dataset to extract features from B-mode ultrasound liver images. The results obtained with these methods were compared to select the predictive model with the best performance metrics. We trained the two models on a dataset of 262 liver steatosis images and 234 normal liver images, and assessed them on a dataset of 70 liver steatosis images and 63 normal liver images.
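The abstract does not specify the training framework, so as an illustration only, the fine-tuning setup described above can be sketched in Keras roughly as follows. The function name `build_classifier` and all hyperparameters (frozen base, dropout rate, learning rate) are assumptions, not the authors' configuration:

```python
# Hypothetical sketch: fine-tuning an ImageNet-pretrained Inception-v3
# backbone for binary normal-vs-steatosis classification. Not the
# authors' actual code; hyperparameters are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(weights="imagenet", input_shape=(299, 299, 3)):
    # Load the convolutional base pretrained on ImageNet, without the
    # original 1000-class classification head.
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights=weights, input_shape=input_shape)
    base.trainable = False  # freeze the base; train only the new head first

    # New head: global pooling plus a single sigmoid unit for the
    # two-class (normal vs. steatosis) problem.
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

The same pattern applies to the VGG-16 variant by swapping in `tf.keras.applications.VGG16` with its 224x224 input size.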

Results: The proposed model based on Inception-v3 obtained a test accuracy of 93.23%, with a sensitivity of 89.9%, a precision of 96.6%, and an area under the receiver operating characteristic curve (ROC AUC) of 0.93. The model based on VGG-16 obtained a test accuracy of 90.77%, with a sensitivity of 88.9%, a precision of 92.85%, and a ROC AUC of 0.91.
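The reported metrics (accuracy, sensitivity, precision, ROC AUC) can be computed with scikit-learn as sketched below. The labels and scores here are invented for illustration and are not the paper's data:

```python
# Illustrative computation of the four reported metrics using
# scikit-learn, on made-up labels and model probabilities.
from sklearn.metrics import (accuracy_score, recall_score,
                             precision_score, roc_auc_score)

y_true  = [1, 1, 1, 1, 0, 0, 0, 0]                   # 1 = steatosis, 0 = normal
y_score = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]   # predicted probabilities
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]    # threshold at 0.5

accuracy    = accuracy_score(y_true, y_pred)
sensitivity = recall_score(y_true, y_pred)   # recall on the positive class
precision   = precision_score(y_true, y_pred)
auc         = roc_auc_score(y_true, y_score) # uses raw scores, not labels

print(accuracy, sensitivity, precision, auc)
```

Note that ROC AUC is computed from the continuous scores, while accuracy, sensitivity, and precision depend on the chosen decision threshold.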

Conclusion: The deep learning algorithms we propose to detect steatosis and classify images as normal or fatty liver yield an excellent test performance of over 90%. However, larger future studies are required to establish how these algorithms can be implemented in a clinical setting.

Source: http://dx.doi.org/10.11152/mu-2746

Publication Analysis

Top Keywords: convolutional neural (12), liver steatosis (12), models dataset (12), liver images (12), images normal (12), normal liver (12), liver (10), images (9), neural networks (8), fatty liver (8)

Similar Publications

This paper systematically evaluates saliency methods as explainability tools for convolutional neural networks trained to diagnose glaucoma using simplified eye fundus images that contain only disc and cup outlines. These simplified images, a methodological novelty, were used to relate features highlighted in the saliency maps to the geometrical clues that experts consider in glaucoma diagnosis. Despite their simplicity, these images retained sufficient information for accurate classification, with balanced accuracies ranging from 0.


Semantical text understanding holds significant importance in natural language processing (NLP). Numerous datasets, such as Quora Question Pairs (QQP), have been devised for this purpose. In our previous study, we developed a Siamese Convolutional Neural Network (S-CNN) that achieved an F1 score of 82.


Significance: Optimal meibography utilization and interpretation are hindered due to poor lid presentation, blurry images, or image artifacts and the challenges of applying clinical grading scales. These results, using the largest image dataset analyzed to date, demonstrate development of algorithms that provide standardized, real-time inference that addresses all of these limitations.

Purpose: This study aimed to develop and validate an algorithmic pipeline to automate and standardize meibomian gland absence assessment and interpretation.


Introduction: A large number of middle-aged and elderly patients have an insufficient understanding of osteoporosis and its harm. This study aimed to establish and validate a convolutional neural network (CNN) model based on unenhanced chest computed tomography (CT) images of the vertebral body and skeletal muscle for opportunistic screening in patients with osteoporosis.

Materials And Methods: Our team retrospectively collected clinical information from participants who underwent unenhanced chest CT and dual-energy X-ray absorptiometry (DXA) examinations between January 1, 2022, and December 31, 2022, at four hospitals.


This paper investigates the potential of artificial intelligence (AI) and machine learning (ML) to enhance the differentiation of cystic lesions in the sellar region, such as pituitary adenomas, Rathke cleft cysts (RCCs) and craniopharyngiomas (CP), through the use of advanced neuroimaging techniques, particularly magnetic resonance imaging (MRI). The goal is to explore how AI-driven models, including convolutional neural networks (CNNs), deep learning, and ensemble methods, can overcome the limitations of traditional diagnostic approaches, providing more accurate and early differentiation of these lesions. The review incorporates findings from critical studies, such as using the Open Access Series of Imaging Studies (OASIS) dataset (Kaggle, San Francisco, USA) for MRI-based brain research, highlighting the significance of statistical rigor and automated segmentation in developing reliable AI models.

