Hepatocellular carcinoma (HCC), the most common type of liver cancer, poses significant challenges in detection and diagnosis. Medical imaging, especially computed tomography (CT), is pivotal for identifying the disease non-invasively, but interpretation requires substantial expertise. This research introduces an innovative strategy that integrates two-dimensional (2D) and three-dimensional (3D) deep learning models within a federated learning (FL) framework for precise segmentation of liver and tumor regions in medical images. Using 131 CT scans from the Liver Tumor Segmentation (LiTS) challenge, the study demonstrated the superior efficiency and accuracy of the proposed Hybrid-ResUNet model, which achieved a Dice score of 0.9433 and an AUC of 0.9965 compared with ResNet and EfficientNet models. The FL approach supports large-scale clinical trials while safeguarding patient privacy across healthcare institutions, and it facilitates active engagement in problem-solving, data collection, model development, and refinement. The study also addresses data imbalance in the FL context, showing resilience and highlighting the robust performance of local models. Future research will concentrate on refining federated learning algorithms and incorporating them into continuous integration and deployment (CI/CD) pipelines for AI system operations, with emphasis on the dynamic involvement of clients. We recommend a collaborative human-AI effort to enhance feature extraction and knowledge transfer. These improvements are intended to enable equitable and efficient data collaboration across sectors in practical scenarios, offering a guide for future research in medical AI.
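For readers less familiar with the two building blocks the abstract relies on, the sketch below illustrates federated averaging of client model weights and the Dice score used to evaluate segmentation overlap. It is a minimal NumPy illustration under assumed inputs (the function names, toy weights, and masks are ours), not the authors' Hybrid-ResUNet or FL implementation.

```python
# Minimal sketch: FedAvg-style weight aggregation and the Dice coefficient.
# All names, shapes, and values are illustrative assumptions.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of per-client parameter lists (FedAvg-style).

    client_weights: list (one entry per client) of lists of np.ndarray layers.
    client_sizes:   number of local training samples per client, used as weights.
    """
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        layer_avg = sum(w[layer] * (n / total)
                        for w, n in zip(client_weights, client_sizes))
        averaged.append(layer_avg)
    return averaged

def dice_score(pred_mask, true_mask, eps=1e-7):
    """Dice coefficient between two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

if __name__ == "__main__":
    # Two hypothetical clients, each with a one-layer "model".
    clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]]
    print(federated_average(clients, client_sizes=[100, 300]))  # -> [2.5, 3.5]

    # Toy 2D masks standing in for liver/tumor segmentations.
    pred = np.array([[1, 1], [0, 0]])
    true = np.array([[1, 0], [0, 0]])
    print(dice_score(pred, true))  # -> ~0.667
```

In practice the aggregation runs over the full parameter set of the segmentation network each communication round, and the Dice score is averaged over held-out volumes; weighting clients by local sample count is one common way to handle the data imbalance the abstract mentions.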

Source: http://dx.doi.org/10.1109/JBHI.2024.3400599

Publication Analysis

Top Keywords: federated learning (12), liver tumor (8), precision robust (4), robust models (4), models healthcare (4), healthcare institution (4), institution federated (4), learning (4), learning predicting (4), predicting HCC (4)

Similar Publications

Digital transformation has significantly impacted public procurement, improving operational efficiency, transparency, and competition. This transformation has allowed the automation of data analysis and oversight in public administration. Public procurement involves various stages and generates a multitude of documents.


Deep learning MRI models for the differential diagnosis of tumefactive demyelination versus IDH-wildtype glioblastoma.

AJNR Am J Neuroradiol

January 2025

From the Department of Radiology (GMC, MM, YN, BJE), Department of Quantitative Health Sciences (PAD, MLK, JEEP), Department of Neurology (CBM, JAS, MWR, FSG, HKP, DHL, WOT), Department of Neurosurgery (TCB), Department of Laboratory Medicine and Pathology (RBJ), and Center for Multiple Sclerosis and Autoimmune Neurology (WOT), Mayo Clinic, Rochester, MN, USA; Dell Medical School (MFE), University of Texas, Austin, TX, USA.

Background And Purpose: Diagnosis of tumefactive demyelination can be challenging. The diagnosis of indeterminate brain lesions on MRI often requires tissue confirmation via brain biopsy. Noninvasive methods for accurate diagnosis of tumor and non-tumor etiologies allow for tailored therapy, optimal tumor control, and a reduced risk of iatrogenic morbidity and mortality.


Objectives: The efficacy of monotherapy in alleviating psychological disorders such as anxiety and depression among breast cancer patients is suboptimal, necessitating effective psychosocial interventions. Mindfulness-based interventions have been shown to mitigate anxiety and depression symptoms and encourage beneficial behaviors. The online mindfulness-based cancer recovery (MBCR) program offers flexibility and guided practice across various settings, facilitating full patient engagement.


Diabetic retinopathy (DR) is a serious diabetes complication that can lead to vision loss, making timely identification crucial. Existing data-driven algorithms for DR staging from digital fundus images (DFIs) often struggle with generalization due to distribution shifts between training and target domains. To address this, DRStageNet, a deep learning model, was developed using six public and independent datasets with 91,984 DFIs from diverse demographics.

