In distributed learning systems, robustness threats may arise from two major sources. On the one hand, distributional shifts between training data and test data can cause the trained model to exhibit poor out-of-sample performance. On the other hand, a portion of the worker nodes may be subject to Byzantine attacks, which can invalidate the learning result. In this article, we propose a new research direction that jointly considers distributional shifts and Byzantine attacks, and we illuminate the major challenges in addressing these two issues simultaneously. Accordingly, we design a new algorithm that equips distributed learning with both distributional robustness and Byzantine robustness. Our algorithm builds on recent advances in distributionally robust optimization (DRO) as well as norm-based screening (NBS), a robust aggregation scheme against Byzantine attacks. We provide convergence proofs for the proposed algorithm in three cases, with the learning model being nonconvex, convex, or strongly convex, shedding light on its convergence behavior and resilience to Byzantine attacks. In particular, we deduce that any algorithm employing NBS (including ours) cannot converge when the percentage of Byzantine nodes is [Formula: see text] or higher, rather than [Formula: see text], which is the common belief in the current literature. The experimental results verify our theoretical findings (on the breakdown point of NBS, among others) and also demonstrate the effectiveness of our algorithm against both robustness issues, justifying our choice of NBS over other widely used robust aggregation schemes. To the best of our knowledge, this is the first work to address distributional shifts and Byzantine attacks simultaneously.
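The NBS aggregation rule mentioned in the abstract can be illustrated with a minimal sketch: rank the received worker gradients by Euclidean norm, discard the largest-norm updates, and average the rest. The function name, interface, and screening budget below are our own illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def norm_based_screening(grads, num_screened):
    """Average worker gradients after discarding the largest-norm updates.

    grads: (num_workers, dim) array of gradients received by the server.
    num_screened: how many of the largest-norm gradients to drop.
    """
    norms = np.linalg.norm(grads, axis=1)              # per-worker gradient norm
    keep = np.argsort(norms)[: len(grads) - num_screened]  # indices of smallest norms
    return np.mean(grads[keep], axis=0)

# Honest workers send similar, small gradients; one Byzantine node
# sends an arbitrarily large update to derail the average.
honest = np.array([[1.0, 0.0], [0.9, 0.1], [1.1, -0.1]])
byzantine = np.array([[100.0, 100.0]])
grads = np.vstack([honest, byzantine])

agg = norm_based_screening(grads, num_screened=1)      # screens out the attacker
```

A plain average of all four gradients would be dragged toward the Byzantine update, while the screened average recovers the honest mean; this is the intuition behind norm-based robust aggregation.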


Source
http://dx.doi.org/10.1109/TNNLS.2024.3436149

Publication Analysis

Top Keywords

byzantine attacks (24)
distributional shifts (16)
distributed learning (12)
shifts byzantine (12)
learning distributional (8)
byzantine (8)
nbs robust (8)
robust aggregation (8)
[formula text] (8)
attacks (6)

Similar Publications

This paper investigates the behavior of fractional-order nonlinear multi-agent systems subjected to Byzantine attacks, specifically focusing on the manipulation of both sensors and actuators. We employ weighted graphs, both directed and undirected, to illustrate the system's topology. Our methodology combines algebraic graph theory with fractional-order Lyapunov techniques to develop algebraic conditions for leader-following consensus, providing a robust framework for analyzing consensus dynamics in these complex systems.


This article studies the problem of resilient cruise control in heterogeneous vehicle platoons against f-local Byzantine attacks (BAs). Agents under BAs become traitors within the swarm, trying to mislead their neighbors while adopting wrong inputs. Thus, BAs are extremely challenging to suppress.



Recommendation systems (RSs) are important information-filtering tools in today's digital era. With growing concern over privacy, deploying RSs in a federated learning (FL) manner has emerged as a promising solution, as it can train a high-quality model without the server directly accessing sensitive user data. Nevertheless, some malicious clients can still deduce user data by analyzing the uploaded model parameters.


As an emerging decentralized machine learning technique, federated learning organizes collaborative training while preserving the privacy and security of participants. However, untrustworthy devices, typically Byzantine attackers, pose a significant challenge to federated learning, since they can upload malicious parameters that corrupt the global model. To defend against such attacks, we propose a novel robust aggregation method, maximum correntropy aggregation (MCA), which applies the maximum correntropy criterion (MCC) to derive a central value from the uploaded parameters.
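The correntropy-based aggregation idea can be sketched as an iteratively reweighted average: parameters far from the current center receive exponentially small Gaussian-kernel weights, so outlier uploads are suppressed. This is an assumed form of MCA for illustration only; the function name, kernel bandwidth `sigma`, and iteration count are our own choices, not the authors' exact algorithm.

```python
import numpy as np

def mca_aggregate(params, sigma=5.0, iters=20):
    """Fixed-point sketch of maximum-correntropy-style aggregation.

    params: (num_clients, dim) array of uploaded model parameters.
    sigma: Gaussian kernel bandwidth; controls how fast outlier
           weights decay with distance from the center.
    """
    center = np.mean(params, axis=0)                   # start from the plain average
    for _ in range(iters):
        d2 = np.sum((params - center) ** 2, axis=1)    # squared distance to center
        w = np.exp(-d2 / (2.0 * sigma ** 2))           # correntropy-induced weights
        w = w / w.sum()
        center = w @ params                            # reweighted center update
    return center

# Four honest clients near (1, 1); one Byzantine client uploads (50, 50).
params = np.array([[1.0, 1.0], [0.9, 1.1], [1.1, 0.9], [1.0, 1.0], [50.0, 50.0]])
center = mca_aggregate(params)                         # converges near the honest mean
```

The plain mean of these uploads sits near (10.8, 10.8), pulled far off by the single attacker, while the reweighted fixed point settles near the honest center, which is the robustness property the abstract attributes to MCA.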

