Minimal data poisoning attack in federated learning for medical image classification: An attacker perspective.

Artif Intell Med

Informatics Institute, Faculty of Science, Mathematics and Computer Science, University of Amsterdam, 1090 GH Amsterdam, The Netherlands; Department of Biomedical Engineering and Physics, Amsterdam UMC, Amsterdam, The Netherlands.

Published: January 2025

The privacy-sensitive nature of medical image data means it is often bound by strict data-sharing regulations, which necessitates novel modeling and analysis techniques. Federated learning (FL) enables multiple medical institutions to collectively train a deep neural network without sharing sensitive patient information. Its collaborative approach also addresses challenges related to the scarcity and non-uniform distribution of heterogeneous medical domain data. Nevertheless, the data-opaque nature and distributed setup make FL susceptible to data poisoning attacks. The literature describes diverse FL data poisoning attacks against classification models on natural image data, but these focus primarily on attack impact and do not consider the attack budget or attack visibility. The attack budget, which determines the number of manipulations or perturbations an adversary can apply, is essential for optimizing resource utilization in real-world scenarios. At the same time, attack visibility is crucial for covert execution, allowing attackers to achieve their objectives without triggering detection mechanisms. An attacker generally aims to create maximum attack impact with minimal resources and low visibility, so considering these three factors together effectively captures the adversary's perspective when designing an attack for real-world scenarios. Furthermore, data poisoning attacks on medical images are more challenging than on natural images due to the subjective nature of medical data. Hence, we develop an attack with a low budget, low visibility, and high impact for medical image classification in FL. We propose the federated learning attention-guided minimal attack (FL-AGMA), which uses class attention maps to identify specific medical image regions for perturbation. We introduce the image distortion degree (IDD) as a metric to assess the attack budget and develop a feedback mechanism that regulates the attack coefficient to keep attack visibility low. We then optimize the attack budget by adaptively changing the IDD based on attack visibility. We extensively evaluate FL-AGMA on three large-scale datasets, namely Covid-chestxray, Camelyon17, and HAM10000, covering three different data modalities. Compared to the other attacks, FL-AGMA reduces test accuracy by 44.49% while using only 24% of the IDD attack budget and exhibiting lower attack visibility.
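The abstract describes attention-guided poisoning only at a high level. The sketch below is a minimal, hypothetical illustration of the general idea, assuming a Grad-CAM-style class attention map, an additive sign-noise perturbation, and a pixel-fraction stand-in for the IDD budget; the function name, parameters, and IDD proxy are illustrative assumptions and do not reproduce the authors' implementation.

import numpy as np

def attention_guided_poison(image, attention_map, idd_budget=0.24, epsilon=0.1):
    """Illustrative sketch of attention-guided poisoning (not the authors' code).

    image         : H x W or H x W x C array scaled to [0, 1]
    attention_map : H x W class attention map (e.g. Grad-CAM); higher = more salient
    idd_budget    : fraction of pixels allowed to be perturbed (stand-in for IDD)
    epsilon       : perturbation magnitude (attack coefficient)
    """
    # Select the most salient pixels, up to the allowed budget.
    k = max(1, int(idd_budget * attention_map.size))
    threshold = np.sort(attention_map.ravel())[-k]
    mask = (attention_map >= threshold).astype(image.dtype)
    if image.ndim == 3:                       # broadcast the mask over channels
        mask = mask[..., None]

    # Perturb only the selected regions and clip back to the valid range.
    noise = epsilon * np.sign(np.random.randn(*image.shape))
    return np.clip(image + mask * noise, 0.0, 1.0)

In a feedback-regulated variant, epsilon would be lowered whenever a visibility proxy (e.g., the distance between poisoned and clean images) exceeds a detection threshold, which is the role the abstract assigns to the attack coefficient.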

DOI: http://dx.doi.org/10.1016/j.artmed.2024.103024
