AI Article Synopsis

Article Abstract

Federated learning (FL) enables collaborative training of a machine learning (ML) model across multiple parties, helping preserve users' and institutions' privacy by keeping data stored locally. Instead of centralizing raw data, FL exchanges locally refined model parameters to build a global model incrementally. While FL is more compliant with emerging regulations such as the European General Data Protection Regulation (GDPR), ensuring the right to be forgotten in this context, i.e., allowing FL participants to remove their data contributions from the learned model, remains unclear. In addition, it is recognized that malicious clients may inject backdoors into the global model through their updates, e.g., to trigger mispredictions on specially crafted data examples. Consequently, there is a need for mechanisms that can guarantee individuals the possibility to remove their data and erase malicious contributions even after aggregation, without compromising the already acquired "good" knowledge. This highlights the need for novel federated unlearning (FU) algorithms that can efficiently remove specific clients' contributions without full model retraining. This article provides background concepts, empirical evidence, and practical guidelines for designing and implementing efficient FU schemes. The study includes a detailed analysis of the metrics for evaluating unlearning in FL and presents an in-depth literature review categorizing state-of-the-art FU contributions under a novel taxonomy. Finally, we outline the most relevant open technical challenges and identify the most promising research directions in the field.
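To make the aggregation-and-removal problem concrete, the sketch below shows a FedAvg-style weighted average of client updates, plus a naive unlearning baseline that simply re-aggregates without the target client. This is an illustration under our own assumptions (the names aggregate and forget_client are hypothetical), not any of the FU schemes the article surveys.

```python
# Minimal sketch, NOT the surveyed FU algorithms: FedAvg-style weighted
# averaging of client updates, plus a naive "unlearning" baseline that
# re-aggregates the surviving updates. Real FU schemes avoid retaining
# every client update; this only makes the goal of FU concrete.
import numpy as np

def aggregate(client_updates, client_sizes):
    """Weighted average of client parameter vectors (FedAvg-style)."""
    total = sum(client_sizes)
    return sum((n / total) * u for n, u in zip(client_sizes, client_updates))

def forget_client(client_updates, client_sizes, target):
    """Hypothetical helper: rebuild the global model without one client."""
    kept = [(u, n) for i, (u, n) in
            enumerate(zip(client_updates, client_sizes)) if i != target]
    updates, sizes = zip(*kept)
    return aggregate(list(updates), list(sizes))

# Toy example: three clients contributing 4-parameter "models".
rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]
sizes = [100, 50, 150]
print(aggregate(updates, sizes))          # global model with all clients
print(forget_client(updates, sizes, 1))   # global model with client 1 removed
```

The gap between this baseline and practical FU is exactly what the survey's taxonomy covers: storing and replaying every update does not scale, so efficient schemes must approximate the removal instead.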

Source
http://dx.doi.org/10.1109/TNNLS.2024.3478334

Publication Analysis

Top Keywords

federated unlearning: 8
global model: 8
remove data: 8
data: 6
model: 5
unlearning survey: 4
survey methods: 4
methods design: 4
design guidelines: 4
guidelines evaluation: 4

Similar Publications

Purpose: Distributed learning is widely used to comply with data-sharing regulations and to access diverse datasets for training machine learning (ML) models. The traveling model (TM) is a distributed learning approach that sequentially trains with data from one center at a time, which is especially advantageous when dealing with limited local datasets. However, a critical concern emerges when centers use different scanners for data acquisition, which could lead models to exploit these differences as shortcuts.
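As an illustration of the TM idea (a sketch under our own assumptions; travel and the synthetic loaders are hypothetical, not this paper's code), a single model can visit each center in turn and continue training on that center's local data:

```python
# Hedged sketch of a traveling model (TM): one shared model is trained
# sequentially at each center, so raw data never leaves the center.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def travel(model, centers, epochs_per_center=1, lr=1e-3):
    """Sequentially fine-tune the same model on each center's DataLoader."""
    loss_fn = nn.CrossEntropyLoss()
    for loader in centers:                        # one center at a time
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(epochs_per_center):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
    return model

# Toy usage with three synthetic "centers" of 32 samples each.
centers = [DataLoader(TensorDataset(torch.randn(32, 10),
                                    torch.randint(0, 2, (32,))),
                      batch_size=8)
           for _ in range(3)]
model = travel(nn.Linear(10, 2), centers)
```

A scanner-specific shortcut would show up in this setting as the model keying on center-dependent input statistics rather than the task signal, which is the concern the abstract raises.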

Addressing unreliable local models in federated learning through unlearning.

Neural Netw

December 2024

Department of Computer Engineering, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia.

Federated unlearning (FUL) is a promising solution for removing negative influences from the global model. However, ensuring the reliability of local models in FL systems remains challenging. Existing FUL studies mainly focus on eliminating the influence of bad data, neglecting scenarios where other factors, such as adversarial attacks and communication constraints, also contribute negative influences that require mitigation.
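One simple way to picture mitigating unreliable local models, sketched below under our own assumptions (the screen_updates helper and the MAD-based threshold are illustrative choices, not this paper's FUL method), is to screen client updates whose norms are statistical outliers before they ever reach aggregation:

```python
# Hedged sketch (our illustration, not the paper's algorithm): drop client
# updates whose L2 norm deviates strongly from the median norm, a common
# robust-aggregation heuristic against unreliable or adversarial clients.
import numpy as np

def screen_updates(updates, tol=3.0):
    """Keep updates whose norm lies within tol * MAD of the median norm."""
    norms = np.array([np.linalg.norm(u) for u in updates])
    med = np.median(norms)
    mad = np.median(np.abs(norms - med)) + 1e-12   # avoid divide-by-zero
    keep = np.abs(norms - med) <= tol * mad
    return [u for u, k in zip(updates, keep) if k]

rng = np.random.default_rng(1)
honest = [rng.normal(scale=0.1, size=8) for _ in range(9)]
poisoned = [rng.normal(scale=10.0, size=8)]        # inflated, suspicious update
kept = screen_updates(honest + poisoned)
print(len(kept))   # the outlier is typically dropped
```

Screening only prevents new damage; updates that already reached the global model still need unlearning, which is where FUL complements this kind of filter.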

Distinct engrams control fear and extinction memory.

Hippocampus

May 2024

Laboratório de Neurobiologia da Memória, Departamento de Biofísica, Instituto de Biociências, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil.

Article Synopsis
  • Memories are stored in specific cells called engram cells, which are essential for recalling memories and can undergo processes like reconsolidation (updating the original memory) or extinction (forming a new memory to suppress the original one).
  • This study explores how memory recall and extinction work by targeting active neurons in the brain, specifically in the basolateral amygdala (BLA) and infralimbic (IL) cortex, to see if new memory traces are formed or if original memories are modified.
  • Findings suggest that while the BLA engram is crucial for memory processes, the IL cortex is key for extinction, indicating that the extinction process relies on creating a new memory rather than just altering the original memory trace.

Background: The study investigated whether three deep-learning models, namely, the CNN_model (trained from scratch), the TL_model (transfer learning), and the FT_model (fine-tuning), could predict the early response of brain metastases (BM) to radiosurgery using minimal pre-processing of the MRI images. The dataset consisted of 19 BM patients who underwent stereotactic radiosurgery (SRS) within 3 months. The images used included axial fluid-attenuated inversion recovery (FLAIR) sequences and high-resolution contrast-enhanced T1-weighted (CE T1w) sequences from the tumor center.
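The three variants correspond to standard training regimes. The sketch below is illustrative only: it assumes a ResNet-18 backbone via torchvision >= 0.13 and a binary response label, while the paper's exact architectures and MRI input handling are not described here.

```python
# Hedged sketch of the three regimes: from scratch (CNN_model), transfer
# learning with a frozen backbone (TL_model), and full fine-tuning
# (FT_model). "DEFAULT" downloads ImageNet weights on first use.
import torch
from torch import nn
from torchvision import models

def make_model(variant, num_classes=2):
    if variant == "scratch":              # CNN_model: random initialization
        net = models.resnet18(weights=None)
    else:
        net = models.resnet18(weights="DEFAULT")
        if variant == "transfer":         # TL_model: freeze the backbone
            for p in net.parameters():
                p.requires_grad = False
        # "finetune" (FT_model): all pretrained weights stay trainable
    net.fc = nn.Linear(net.fc.in_features, num_classes)  # new task head
    return net

model = make_model("transfer")
trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))  # only the new head trains
```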
