Abdominal imaging of autosomal dominant polycystic kidney disease (ADPKD) has historically focused on detecting complications such as cyst rupture, cyst infection, obstructing renal calculi, and pyelonephritis; discriminating complex cysts from renal cell carcinoma; and identifying sources of abdominal pain. Many imaging features of ADPKD remain incompletely evaluated or are not deemed clinically significant, in part because treatment options have been limited. However, total kidney volume (TKV) measurement has become important for assessing the risk of disease progression (i.e., Mayo Imaging Classification) and predicting the efficacy of tolvaptan treatment. Deep learning for segmenting the kidneys has improved the speed, accuracy, and reproducibility of these measurements. Deep learning models can also segment other organs and tissues, extracting additional biomarkers to characterize the extent to which extrarenal manifestations complicate ADPKD. In this concept paper, we demonstrate how deep learning may be applied to measure TKV and how it can be extended to measure additional features of this disease.
Full text: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11118119 (PMC)
DOI: http://dx.doi.org/10.3390/biomedicines12051133
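As a concrete illustration of how a kidney segmentation produced by a deep learning model is turned into a volume measurement, the minimal sketch below counts labeled voxels and multiplies by the voxel volume; the mask array, voxel spacing, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def total_kidney_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Compute total kidney volume (TKV) from a binary segmentation mask.

    mask        -- 3D array in which nonzero voxels are labeled kidney
                   (e.g., the output of a deep learning segmentation model)
    spacing_mm  -- voxel spacing (dx, dy, dz) in millimeters
    """
    voxel_volume_mm3 = float(np.prod(spacing_mm))      # volume of one voxel
    kidney_voxels = int(np.count_nonzero(mask))        # voxels labeled kidney
    return kidney_voxels * voxel_volume_mm3 / 1000.0   # mm^3 -> mL

def height_adjusted_tkv(tkv_ml: float, height_m: float) -> float:
    """Height-adjusted TKV (htTKV), as used by the Mayo Imaging
    Classification, divides TKV by the patient's height in meters."""
    return tkv_ml / height_m
```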
Sci Rep
December 2024
KAUST Center of Excellence for Smart Health (KCSH), King Abdullah University of Science and Technology, Thuwal, 23955, Saudi Arabia.
Analyzing microbial samples remains computationally challenging due to their diversity and complexity. The lack of robust de novo protein function prediction methods exacerbates the difficulty in deriving functional insights from these samples. Traditional prediction methods, dependent on homology and sequence similarity, often fail to predict functions for novel proteins and proteins without known homologs.
Sci Rep
December 2024
Department of Informatics, University of Hamburg, Hamburg, Germany.
Central to the development of universal learning systems is the ability to solve multiple tasks without retraining from scratch when new data arrives. This is crucial because each task requires significant training time. Addressing the problem of continual learning necessitates various methods due to the complexity of the problem space.
Sci Rep
December 2024
Department of Computer Science, Birzeit University, P.O. Box 14, Birzeit, West Bank, Palestine.
Accurate classification of logos is a challenging task in image recognition due to variations in logo size, orientation, and background complexity. Deep learning models, such as VGG16, have demonstrated promising results in handling such tasks. However, their performance is highly dependent on optimal hyperparameter settings, whose fine-tuning is both labor-intensive and time-consuming.
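To make the dependence on hyperparameters concrete, here is a minimal transfer-learning sketch that fine-tunes an ImageNet-pretrained VGG16 head for logo classification; the learning rate, dropout rate, and input size are illustrative assumptions rather than the settings used in the paper.

```python
import tensorflow as tf

def build_logo_classifier(num_classes: int,
                          learning_rate: float = 1e-4,
                          dropout: float = 0.5) -> tf.keras.Model:
    """VGG16 backbone with a small classification head. The learning rate
    and dropout are exactly the kind of hyperparameters whose manual
    tuning the abstract describes as labor-intensive."""
    base = tf.keras.applications.VGG16(weights="imagenet",
                                       include_top=False,
                                       input_shape=(224, 224, 3))
    base.trainable = False  # freeze the pretrained convolutional layers

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```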
Sci Rep
December 2024
Faculty of Dental Medicine and Oral Health Sciences, McGill University, Montreal, Canada.
Accurate diagnosis of oral lesions, early indicators of oral cancer, is a complex clinical challenge. Recent advances in deep learning have demonstrated potential in supporting clinical decisions. This paper introduces a deep learning model for classifying oral lesions, focusing on accuracy, interpretability, and reducing dataset bias.
Sci Rep
December 2024
Department of Civil Engineering, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
Deep learning models are widely used for traffic forecasting on freeways due to their ability to learn complex temporal and spatial relationships. In particular, graph neural networks, which integrate graph theory into deep learning, have become popular for modeling traffic sensor networks. However, traditional graph convolutional networks (GCNs) face limitations in capturing long-range spatial correlations, which can hinder accurate long-term predictions.
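For context, a single graph convolutional layer in the standard Kipf-Welling formulation mixes a node's features only with those of its immediate neighbors, so an L-layer GCN sees at most L hops, which is the long-range limitation the abstract refers to. The NumPy sketch below is an illustrative assumption of that generic layer, not the model proposed in the paper.

```python
import numpy as np

def gcn_layer(adjacency: np.ndarray,
              features: np.ndarray,
              weights: np.ndarray) -> np.ndarray:
    """One graph convolutional layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    Information propagates only along edges of the sensor graph, so each
    layer extends the receptive field by a single hop."""
    a_hat = adjacency + np.eye(adjacency.shape[0])          # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))  # D^-1/2
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt                 # symmetric normalization
    return np.maximum(a_norm @ features @ weights, 0.0)      # ReLU activation
```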