Background: Renal cancer is one of the leading causes of cancer-related deaths worldwide, and early detection of renal cancer can significantly improve patients' survival rates. However, manual analysis of renal tissue in current clinical practice is labor-intensive, prone to inter-pathologist variation, and liable to miss important cancer markers, especially at the early stage.
Methods: In this work, we developed deep convolutional neural network (CNN) based heterogeneous ensemble models for automated analysis of renal histopathological images without detailed annotations. The proposed method first segments the histopathological tissue into patches at different magnification factors, then classifies the generated patches into normal and tumor tissue using pre-trained CNNs, and finally performs deep ensemble learning to determine the final classification. The heterogeneous ensemble models consisted of CNN models from five deep learning architectures, namely VGG, ResNet, DenseNet, MobileNet, and EfficientNet. These CNN models were fine-tuned and used as base learners; they exhibited different performance and great diversity in histopathological image analysis. The CNN models with superior classification accuracy (Acc) were then selected for ensemble learning to produce the final classification. The performance of the investigated ensemble approaches was evaluated against the state-of-the-art literature.
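As a rough illustration of the ensemble step described above (here sketched as weighted averaging, the best-performing variant reported in the Results), the following minimal Python sketch combines per-model patch probabilities; the probabilities, validation accuracies, and weighting scheme are invented placeholders, not the authors' implementation.

```python
# Hedged sketch: weighted-averaging ensemble of CNN patch classifiers.
# All numbers below are illustrative placeholders.
import numpy as np

# Per-model softmax probabilities for N patches x 2 classes (normal, tumor);
# in practice these would come from the fine-tuned VGG/ResNet/DenseNet/
# MobileNet/EfficientNet base learners.
probs = {
    "vgg":          np.array([[0.20, 0.80], [0.70, 0.30]]),
    "resnet":       np.array([[0.10, 0.90], [0.60, 0.40]]),
    "densenet":     np.array([[0.15, 0.85], [0.55, 0.45]]),
    "mobilenet":    np.array([[0.30, 0.70], [0.65, 0.35]]),
    "efficientnet": np.array([[0.25, 0.75], [0.50, 0.50]]),
}

# One plausible weighting scheme (an assumption): weights proportional to
# each base learner's validation accuracy.
val_acc = {"vgg": 0.96, "resnet": 0.97, "densenet": 0.98,
           "mobilenet": 0.95, "efficientnet": 0.97}
weights = {k: v / sum(val_acc.values()) for k, v in val_acc.items()}

# Weighted average of the class probabilities, then argmax for the label.
ensemble_probs = sum(weights[k] * probs[k] for k in probs)
labels = ensemble_probs.argmax(axis=1)  # 0 = normal, 1 = tumor
print(ensemble_probs, labels)
```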
Results: The performance evaluation demonstrated the superiority of the best-performing ensemble model, the five-CNN weighted averaging model, which achieved an Acc of 99%, specificity (Sp) of 98%, F1-score (F1) of 99%, and area under the receiver operating characteristic (ROC) curve of 98%, with a slightly inferior recall (Re) of 99% compared with the literature.
Conclusions: The outstanding robustness of the developed ensemble model, with superior scores across the evaluated metrics, suggests its reliability as a diagnostic system for assisting pathologists in analyzing renal histopathological tissue. The proposed ensemble deep CNN models are expected to greatly improve the early detection of renal cancer by making the diagnostic process more efficient and reducing misdetection and misdiagnosis, subsequently leading to higher patient survival rates.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10498232
DOI: http://dx.doi.org/10.21037/qims-23-46
Data Brief
February 2025
Tashkent Institute of Textile and Light Industry, 5, Shoxdjaxon str., Tashkent city 100100, Uzbekistan.
In this study, the authors presented a dataset for named entity recognition (NER) in the Uzbek language. The dataset consists of 2000 sentences and 25,865 words drawn from legal documents and hand-crafted sentences, annotated using the BIOES scheme. The authors further demonstrated the applications of the created dataset by training a language model with a CNN + LSTM architecture, which achieves high accuracy in NER tasks with an F1 score of 90.
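To make the BIOES tagging scheme mentioned above concrete, here is a minimal sketch of converting an entity-span annotation to BIOES tags; the tokens, span, and label are invented examples, not items from the Uzbek dataset.

```python
# Hedged sketch: converting entity spans to BIOES tags.
# The sentence and entity below are hypothetical examples.
def spans_to_bioes(tokens, spans):
    """spans: list of (start, end_exclusive, label) token-index ranges."""
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        if end - start == 1:                # Single-token entity.
            tags[start] = f"S-{label}"
        else:                               # Multi-token entity.
            tags[start] = f"B-{label}"
            tags[end - 1] = f"E-{label}"
            for i in range(start + 1, end - 1):
                tags[i] = f"I-{label}"
    return tags

tokens = ["The", "court", "of", "Tashkent", "city", "ruled", "today"]
print(spans_to_bioes(tokens, [(3, 5, "LOC")]))
# ['O', 'O', 'O', 'B-LOC', 'E-LOC', 'O', 'O']
```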
Commun Med (Lond)
January 2025
Child and Adolescent Psychiatry and Psychotherapy, University Medical Center Göttingen, Leibniz ScienceCampus Primate Cognition and German Center for Child and Adolescent Health (DZKJ), Göttingen, Germany.
Background: To assess the integrity of the developing nervous system, the Prechtl general movement assessment (GMA) is recognized for its clinical value in diagnosing neurological impairments in early infancy. GMA has been increasingly augmented through machine learning approaches intended to scale up its application, circumvent the costs of training human assessors, and further standardize the classification of spontaneous motor patterns. Available deep learning tools, all of which are based on single sensor modalities, are, however, still considerably inferior to well-trained human assessors.
Clin Neuroradiol
January 2025
Department of Diagnostic and Interventional Radiology, Medical Faculty and University Hospital Düsseldorf, Heinrich-Heine-University Düsseldorf, Moorenstraße 5, 40225, Düsseldorf, Germany.
Introduction: Ventriculoperitoneal shunts (VPS) are an essential part of the treatment of hydrocephalus, with numerous valve models available that indicate pressure levels in different ways. The model types often need to be identified on X-rays to assess pressure levels using a matching template. Artificial intelligence (AI), in particular deep learning, is ideally suited to automating repetitive tasks such as identifying different VPS valve models.
NPJ Biofilms Microbiomes
January 2025
Zelinsky Institute of Organic Chemistry, Russian Academy of Sciences, Leninsky Prospekt 47, Moscow, 119991, Russia.
Biofilms are critical for understanding environmental processes, developing biotechnology applications, and advancing medical treatment of various infections. Nowadays, a key limiting factor for biofilm analysis is the difficulty of obtaining large datasets with fully annotated images. This study introduces a versatile approach for creating synthetic datasets of annotated biofilm images employing deep generative modeling techniques, including VAEs, GANs, diffusion models, and CycleGAN.
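As a hedged illustration of one of the generative techniques named above, the sketch below shows a minimal convolutional VAE that could generate synthetic grayscale microscopy-style images; the layer sizes, image resolution, and latent dimension are assumptions, not the study's pipeline.

```python
# Hedged sketch: a minimal convolutional VAE for synthetic image generation.
# Architecture sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        # Encoder: 1x64x64 image -> latent mean and log-variance.
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 4, 2, 1), nn.ReLU(),    # -> 16x32x32
            nn.Conv2d(16, 32, 4, 2, 1), nn.ReLU(),   # -> 32x16x16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(32 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(32 * 16 * 16, latent_dim)
        # Decoder: latent vector -> reconstructed image.
        self.fc_dec = nn.Linear(latent_dim, 32 * 16 * 16)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),   # -> 16x32x32
            nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Sigmoid(), # -> 1x64x64
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        recon = self.dec(self.fc_dec(z).view(-1, 32, 16, 16))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# After training, new synthetic images are obtained by decoding random latents.
model = TinyVAE()
with torch.no_grad():
    z = torch.randn(4, 32)
    samples = model.dec(model.fc_dec(z).view(-1, 32, 16, 16))
print(samples.shape)  # torch.Size([4, 1, 64, 64])
```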
Environ Monit Assess
January 2025
Department of Environmental Management, Graduate School of Agriculture, Kindai University, Nara, Japan.
Efficient agricultural management often relies on farmers' experiential knowledge and demands considerable labor, particularly in regions with challenging terrains. To reduce these burdens, the adoption of smart technologies has garnered increasing attention. This study proposes a convolutional neural network (CNN)-based model as a decision-support tool for smart irrigation in orchard systems, focusing on persimmon cultivation in mountainous regions.