Machine learning (ML) in healthcare data analytics is attracting much attention because of the unprecedented power of ML to extract knowledge that improves decision-making. At the same time, the laws and ethics codes that countries draft to govern healthcare data are becoming more stringent. While healthcare practitioners struggle with these enforced governance frameworks, distributed learning-based frameworks are emerging and disrupting traditional ML model development. Splitfed learning (SFL) is one of the recent developments in distributed machine learning that enables healthcare practitioners to train ML models while preserving the privacy of the input data. However, SFL incurs extra communication and computation overheads on the client side because it requires client-side model synchronization. For resource-constrained clients (e.g., hospitals with limited computational power), removing this requirement is necessary to make learning more efficient. In this regard, this paper studies SFL without client-side model synchronization; the resulting architecture is known as multi-head split learning (MHSL). It is equally important to investigate information leakage, which indicates how much information about the raw data the server can gain directly from the smashed data (the output of the client-side model portion) passed to it by the client. Our empirical studies examine ResNet-18 and 1D-CNN (Conv1D) model architectures on the ECG and HAM-10000 datasets under an IID data distribution. The results show that SFL provides 1.81% and 2.36% better accuracy than MHSL on the ECG and HAM-10000 datasets, respectively (with the cut layer set to 1). Experiments with client-side model portions of varying depth demonstrate that the choice of cut layer affects overall performance: as the number of layers in the client-side portion increases, SFL performance improves while MHSL performance degrades. The results also show that information leakage, measured by the mutual information score, is higher in SFL than in MHSL by 2×10⁻⁵ and 4×10⁻³ on the ECG and HAM-10000 datasets, respectively.
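To make the SFL/MHSL distinction concrete, below is a minimal PyTorch-style sketch (not the authors' code; the client head architecture, the cut-layer choice, and the FedAvg-style averaging step are illustrative assumptions). In both settings each client runs only the portion of the network up to the cut layer and sends the resulting smashed data to the server; SFL additionally synchronizes the client-side weights after each round, whereas MHSL omits that step, which is exactly the client-side communication and computation overhead the abstract refers to.

import copy
import torch
import torch.nn as nn

# Hypothetical client-side portion: everything up to the cut layer (cut layer = 1 here).
class ClientHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.block = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.block(x)  # "smashed data": the only thing the client sends forward

def fedavg(state_dicts):
    # Element-wise average of client-side weights (the SFL synchronization step).
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

clients = [ClientHead() for _ in range(3)]

# One training round (loss computation and backward pass omitted for brevity).
for client in clients:
    x = torch.randn(8, 3, 32, 32)   # the client's private mini-batch never leaves the client
    smashed = client(x)             # forwarded to the server-side model portion

# SFL: synchronize the client-side portions after the round.
synced = fedavg([c.state_dict() for c in clients])
for c in clients:
    c.load_state_dict(synced)

# MHSL: skip the synchronization above; each client keeps its own head.

The leakage comparison reported in the abstract is then obtained by scoring the mutual information between the raw inputs and the smashed activations that leave the client.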

Source: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9326525 (PMC) | http://dx.doi.org/10.3390/mps5040060 (DOI)

Publication Analysis

Top Keywords: client-side model (20); healthcare data (12); ecg ham-10000 (12); ham-10000 datasets (12); splitfed learning (8); multi-head split (8); split learning (8); learning healthcare (8); machine learning (8); healthcare practitioners (8)

Similar Publications

FedMEKT: Distillation-based embedding knowledge transfer for multimodal federated learning.

Neural Netw

December 2024

Department of Computer Science and Engineering, Kyung Hee University, Yongin-si, 17104, Republic of Korea.

Federated learning (FL) enables a decentralized machine learning paradigm for multiple clients to collaboratively train a generalized global model without sharing their private data. Most existing works have focused on designing FL systems for unimodal data, limiting their potential to exploit valuable multimodal data for future personalized applications. Moreover, the majority of FL approaches still rely on labeled data at the client side, which is often constrained by the inability of users to self-annotate their data in real-world applications.
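FedMEKT's exact procedure is not described in this excerpt; as a loose, generic illustration of distillation-based embedding knowledge transfer (the encoder shapes and the use of a shared proxy batch are assumptions, not the paper's method), a client encoder can be pulled toward embeddings produced by the global model without any raw client data being shared:

import torch
import torch.nn as nn

# Hypothetical encoders; in multimodal FL each client may encode a different modality.
client_encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
global_encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))

proxy_batch = torch.randn(16, 128)   # shared/public proxy data (assumed), not private client data
optimizer = torch.optim.SGD(client_encoder.parameters(), lr=0.01)

with torch.no_grad():
    teacher_emb = global_encoder(proxy_batch)   # embeddings distilled from the global model

optimizer.zero_grad()
student_emb = client_encoder(proxy_batch)
distill_loss = nn.functional.mse_loss(student_emb, teacher_emb)  # embedding-matching loss
distill_loss.backward()
optimizer.step()

The point of the sketch is only that knowledge moves between models through embeddings on shared data, so clients never need to upload their labeled private samples.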


High security and privacy protection model for STI/HIV risk prediction.

Digit Health

November 2024

School of Mathematics, Physics and Computing, Centre for Health Research, University of Southern Queensland, Toowoomba Campus, QLD, Australia.

Article Synopsis
  • Applying artificial intelligence in healthcare, especially for STI and HIV prediction, is crucial but requires protecting sensitive data through advanced methods.
  • The study utilized federated learning and homomorphic encryption on a large dataset from eight countries, training models without compromising privacy (a minimal sketch of this combination follows after this list).
  • Results showed significant performance improvement in predicting risk, with AUC rising from 0.78 to 0.94, indicating the effectiveness of decentralized data analysis in enhancing healthcare outcomes.
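The synopsis says federated learning was combined with homomorphic encryption but not which scheme; as a purely illustrative sketch, the additively homomorphic Paillier cryptosystem (via the python-paillier `phe` package) lets an aggregator sum encrypted model updates from the participating sites without ever seeing them in plaintext:

# pip install phe
from phe import paillier

# Key pair held by the clients / a trusted party; the aggregating server sees only ciphertexts.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each site's locally computed model update (toy scalars standing in for weight deltas).
client_updates = [0.12, -0.05, 0.31]
encrypted = [public_key.encrypt(u) for u in client_updates]

# The server sums ciphertexts without decrypting them (additive homomorphism).
encrypted_sum = encrypted[0]
for c in encrypted[1:]:
    encrypted_sum = encrypted_sum + c

aggregate = private_key.decrypt(encrypted_sum) / len(client_updates)
print(aggregate)  # the average update is revealed only after aggregation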

According to the World Health Organization (WHO), pneumonia kills about 2 million children under the age of 5 every year. Traditional machine learning methods can be used to diagnose pneumonia from children's chest X-rays, but centralizing the data for training raises privacy and security issues. Federated learning prevents data privacy leakage by sharing only the model and not the data, and it has a wide range of applications in the medical field.

Article Synopsis
  • Contemporary domain generalization methods help improve medical image diagnosis using multi-source data through joint optimization, but data privacy issues make centralized training difficult.
  • Existing federated domain generalization methods struggle to balance strict privacy protection and good generalization on out-of-distribution data.
  • The proposed Bilateral Proxy Framework (BPF) enhances communication privacy and model stability using client-side and server-side proxies, leading to better generalization and performance in medical image diagnosis tasks compared to state-of-the-art methods.

Addressing unreliable local models in federated learning through unlearning.

Neural Netw

December 2024

Department of Computer Engineering, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia.

Federated unlearning (FUL) is a promising solution for removing negative influences from the global model. However, ensuring the reliability of local models in federated learning systems remains challenging. Existing FUL studies mainly focus on eliminating the influence of bad data, neglecting scenarios where other factors, such as adversarial attacks and communication constraints, also contribute negative influences that require mitigation.

