
# Attacks in FL

Here we present recent works on attacks in federated learning (FL).

**Contents:** Survey | Privacy Attacks | Backdoor Attacks | Untargeted Attacks

## Survey

| Title | Venue | Link | Year |
| --- | --- | --- | --- |
| A Survey on Gradient Inversion: Attacks, Defenses and Future Directions | arXiv | pdf | 2022 |
| Threats to Federated Learning: A Survey | arXiv | pdf | 2020 |

## Privacy Attacks in FL

### 2022

| Title | Venue | Link |
| --- | --- | --- |
| Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models | ICLR | pdf |
| Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification | arXiv | pdf |
| Bayesian Framework for Gradient Leakage | ICLR | pdf |
| Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage | CVPR | pdf |

### 2021

| Title | Venue | Link |
| --- | --- | --- |
| See Through Gradients: Image Batch Recovery via GradInversion | CVPR | pdf |
| Gradient Disaggregation: Breaking Privacy in Federated Learning by Reconstructing the User Participant Matrix | ICML | pdf |
| Evaluating Gradient Inversion Attacks and Defenses in Federated Learning | NeurIPS | pdf |
| Catastrophic Data Leakage in Vertical Federated Learning | NeurIPS | pdf |
| Gradient Inversion with Generative Image Prior | NeurIPS | pdf |
| R-GAP: Recursive Gradient Attack on Privacy | ICLR | pdf |
| Understanding Training-Data Leakage from Gradients in Neural Networks for Image Classifications | NeurIPS Workshop | pdf |
| Soteria: Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective | CVPR | pdf |
| Reconstruction Attack on Instance Encoding for Language Understanding | EMNLP | pdf |
| Source Inference Attacks in Federated Learning | ICDM | pdf |
| TAG: Gradient Attack on Transformer-based Language Models | EMNLP (Findings) | pdf |
| Unleashing the Tiger: Inference Attacks on Split Learning | CCS | pdf |

### 2020

| Title | Venue | Link |
| --- | --- | --- |
| iDLG: Improved Deep Leakage from Gradients | arXiv | pdf |
| A Framework for Evaluating Client Privacy Leakages in Federated Learning | ESORICS | pdf |
| Inverting Gradients - How Easy Is It to Break Privacy in Federated Learning? | NeurIPS | pdf |
| SAPAG: A Self-Adaptive Privacy Attack from Gradients | arXiv | pdf |
| Is Private Learning Possible with Instance Encoding? | S&P | pdf |

### 2019

| Title | Venue | Link |
| --- | --- | --- |
| Deep Leakage from Gradients | NeurIPS | pdf |
| Beyond Inferring Class Representatives: User-Level Privacy Leakage from Federated Learning | INFOCOM | pdf |
| Exploiting Unintended Feature Leakage in Collaborative Learning | S&P | pdf |
| Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning | S&P | pdf |

### 2017

| Title | Venue | Link |
| --- | --- | --- |
| Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning | CCS | pdf |
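
The gradient-leakage attacks above exploit the fact that shared gradients are highly informative about the private training data. For a single sample passing through a fully connected layer with a bias, the input can even be recovered in closed form, because the weight gradient is the bias gradient scaled by the input. A minimal NumPy sketch of that observation, on a hypothetical toy linear model (all variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Client side: one gradient computation on a private sample (toy setup).
x = rng.normal(size=5)        # private input the attacker wants to recover
w = rng.normal(size=5)        # shared model weights: prediction = w @ x + b
b = 0.1                       # shared bias
y = 1.0                       # private label
residual = (w @ x + b) - y    # d(loss)/d(prediction) for 0.5 * squared error
grad_w = residual * x         # weight gradient reported to the server
grad_b = residual             # bias gradient reported to the server

# Attacker side: closed-form inversion from the reported gradients.
# Since grad_w = residual * x and grad_b = residual, dividing recovers x
# exactly (whenever grad_b != 0).
x_recovered = grad_w / grad_b
print(np.allclose(x_recovered, x))  # True
```

Iterative attacks such as Deep Leakage from Gradients generalize this idea to batches and deep networks by optimizing a dummy input/label pair until its gradients match the observed ones.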

## Backdoor Attacks in FL

### 2022

| Title | Venue | Link |
| --- | --- | --- |
| Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information | arXiv | pdf |
| Neurotoxin: Durable Backdoors in Federated Learning | arXiv | pdf |

### 2021

| Title | Venue | Link |
| --- | --- | --- |
| WaNet - Imperceptible Warping-based Backdoor Attack | ICLR | pdf |

### 2020

| Title | Venue | Link |
| --- | --- | --- |
| Attack of the Tails: Yes, You Really Can Backdoor Federated Learning | NeurIPS | pdf |
| DBA: Distributed Backdoor Attacks against Federated Learning | ICLR | pdf |
| How To Backdoor Federated Learning | AISTATS | pdf |

### 2019

| Title | Venue | Link |
| --- | --- | --- |
| BadNets: Evaluating Backdooring Attacks on Deep Neural Networks | IEEE Access | pdf |
| Analyzing Federated Learning through an Adversarial Lens | ICML | pdf |

### 2017

| Title | Venue | Link |
| --- | --- | --- |
| Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning | arXiv | pdf |
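
The trigger-based attacks in this section share a common data-poisoning recipe: a malicious client stamps a small trigger pattern onto a fraction of its local training images and relabels them to an attacker-chosen target class, so the aggregated model misclassifies any triggered input while clean accuracy stays intact. A minimal sketch of that poisoning step, assuming a hypothetical `poison` helper, target class, and trigger layout:

```python
import numpy as np

TARGET_CLASS = 7  # attacker-chosen label (arbitrary choice for illustration)

def poison(images, labels, rate=0.1, seed=0):
    """Stamp a 3x3 bright trigger in the bottom-right corner of a random
    fraction of the images and flip their labels to the target class."""
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # the backdoor trigger pattern
    labels[idx] = TARGET_CLASS
    return images, labels, idx

# Example: poison half of a toy batch of 8x8 grayscale images.
imgs = np.zeros((20, 8, 8))
lbls = np.zeros(20, dtype=int)
p_imgs, p_lbls, idx = poison(imgs, lbls, rate=0.5)
```

Distributed variants such as DBA split one global trigger into pieces distributed across several colluding clients, which makes each individual update look less anomalous.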

## Untargeted Attacks in FL

### 2022

| Title | Venue | Link |
| --- | --- | --- |
| Learning to Attack Federated Learning: A Model-based Reinforcement Learning Attack Framework | NeurIPS | pdf |
| Poisoning Deep Learning Based Recommender Model in Federated Learning Scenarios | IJCAI | pdf |

### 2021

| Title | Venue | Link |
| --- | --- | --- |
| Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning | NDSS | pdf |

### 2020

| Title | Venue | Link |
| --- | --- | --- |
| Local Model Poisoning Attacks to Byzantine-Robust Federated Learning | USENIX Security | pdf |
| Fall of Empires: Breaking Byzantine-tolerant SGD by Inner Product Manipulation | UAI | pdf |
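
Unlike backdoors, the untargeted (Byzantine) attacks above aim only to degrade the global model. The simplest form is a sign-flipped, scaled-up update that a plain FedAvg mean cannot filter out. A minimal sketch of that effect (NumPy; client counts and the amplification factor are illustrative choices, not values from any of the papers):

```python
import numpy as np

rng = np.random.default_rng(1)

# Nine honest clients whose updates roughly agree on one direction.
benign = rng.normal(loc=1.0, scale=0.1, size=(9, 4))
honest_dir = benign.mean(axis=0)

# One malicious client: sign-flip the honest direction and scale it up.
boost = 20.0                    # attacker amplification (illustrative)
malicious = -boost * honest_dir

# Plain FedAvg: unweighted mean over all ten submitted updates.
aggregate = np.vstack([benign, malicious]).mean(axis=0)

# The aggregate now points *against* the honest update direction.
print(aggregate @ honest_dir < 0)  # True
```

Attacks like "Manipulating the Byzantine" refine this by optimizing the perturbation so it also evades robust aggregators such as coordinate-wise median, trimmed mean, or Krum.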