Distributed machine learning (ML) was originally introduced to solve complex ML problems in parallel, making more efficient use of computational resources. In recent years, such learning has been extended to satisfy other objectives, namely, performing learning in situ on training data held at multiple locations and keeping the training datasets private while still allowing the model to be shared. However, these objectives have prompted considerable research on the vulnerabilities of distributed learning, both in terms of the privacy of the training data and the robustness of the learned overall model against bad or maliciously crafted training data. This article provides a comprehensive survey of privacy, security, and robustness issues in distributed ML.
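The "learning in situ at multiple locations while sharing only the model" setting described above is commonly realized as federated averaging. The sketch below is a minimal illustration of that pattern, not a method taken from the survey itself: the toy linear model, the function names, and all hyperparameters are assumptions chosen only to show how clients train locally on private data and a server aggregates the parameter updates.

```python
# Illustrative federated-averaging sketch (assumed setup, not from the survey):
# each client runs local gradient steps on its private data, and the server
# averages the resulting parameters weighted by local dataset size.
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    """One client's local gradient steps on a toy linear model (MSE loss)."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg_round(w_global, client_data):
    """Server-side aggregation: average client models, weighted by data size."""
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    updates = [local_update(w_global, X, y) for X, y in client_data]
    weights = sizes / sizes.sum()
    return sum(wi * u for wi, u in zip(weights, updates))

# Synthetic private datasets for three clients, all drawn from one true model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(50):          # communication rounds: only `w` is exchanged
    w = fedavg_round(w, clients)
```

Note that only the parameter vector `w` ever leaves a client; the raw data stays local. The vulnerabilities the survey examines arise precisely here: shared updates can leak information about the private data, and a malicious client can craft its update to corrupt the aggregated model.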


Source
DOI: http://dx.doi.org/10.1109/TNNLS.2022.3169347


