# Attacks in FL

Here we present recent works on attacks in federated learning (FL).

| [Survey](#survey) | [Privacy Attacks](#privacy-attacks-in-fl) | [Backdoor Attacks](#backdoor-attacks-in-fl) | [Untargeted Attacks](#untargeted-attacks-in-fl) |
|---|---|---|---|
## Survey
| Title | Venue | Link | Year |
|---|---|---|---|
| A Survey on Gradient Inversion: Attacks, Defenses and Future Directions | arXiv | | 2022 |
| Threats to Federated Learning: A Survey | arXiv | | 2020 |
## Privacy Attacks in FL
### 2022
| Title | Venue | Link |
|---|---|---|
| Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models | ICLR | |
| Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification | arXiv | |
| Bayesian Framework for Gradient Leakage | ICLR | |
| Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage | CVPR | |
### 2021
| Title | Venue | Link |
|---|---|---|
| See Through Gradients: Image Batch Recovery via GradInversion | CVPR | |
| Gradient Disaggregation: Breaking Privacy in Federated Learning by Reconstructing the User Participant Matrix | ICML | |
| Evaluating Gradient Inversion Attacks and Defenses in Federated Learning | NeurIPS | |
| Catastrophic Data Leakage in Vertical Federated Learning | NeurIPS | |
| Gradient Inversion with Generative Image Prior | NeurIPS | |
| R-GAP: Recursive Gradient Attack on Privacy | ICLR | |
| Understanding Training-Data Leakage from Gradients in Neural Networks for Image Classifications | NeurIPS Workshop | |
| Soteria: Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective | CVPR | |
| Reconstruction Attack on Instance Encoding for Language Understanding | EMNLP | |
| Source Inference Attacks in Federated Learning | ICDM | |
| TAG: Gradient Attack on Transformer-based Language Models | EMNLP (Findings) | |
| Unleashing the Tiger: Inference Attacks on Split Learning | CCS | |
### 2020
| Title | Venue | Link |
|---|---|---|
| iDLG: Improved Deep Leakage from Gradients | arXiv | |
| A Framework for Evaluating Client Privacy Leakages in Federated Learning | ESORICS | |
| Inverting Gradients - How Easy Is It to Break Privacy in Federated Learning? | NeurIPS | |
| SAPAG: A Self-Adaptive Privacy Attack from Gradients | arXiv | |
| Is Private Learning Possible with Instance Encoding? | S&P | |
### 2019
| Title | Venue | Link |
|---|---|---|
| Deep Leakage from Gradients | NeurIPS | |
| Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning | Infocom | |
| Exploiting Unintended Feature Leakage in Collaborative Learning | S&P | |
| Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning | S&P | |
### 2017
| Title | Venue | Link |
|---|---|---|
| Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning | CCS | |
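
Most of the gradient-inversion papers above share one core recipe, popularized by Deep Leakage from Gradients (DLG): optimize dummy inputs and labels so that the gradients they induce match the gradients observed from a victim client. Below is a minimal, self-contained sketch of that recipe, assuming PyTorch; the toy model, tensor shapes, optimizer settings, and iteration count are illustrative assumptions, not the setup of any specific paper above.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy victim model and a private batch the attacker never sees directly.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
params = list(model.parameters())
x_true = torch.randn(1, 1, 28, 28)
y_true = torch.tensor([3])

# Gradients the victim would share in one FL round.
loss_true = nn.functional.cross_entropy(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss_true, params)]

# Attacker's dummy data, optimized so its gradients match the shared ones.
x_dummy = torch.randn(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)  # soft (continuous) labels

optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    # Soft-label cross entropy, as in the original DLG formulation.
    log_probs = torch.log_softmax(model(x_dummy), dim=-1)
    dummy_loss = -(torch.softmax(y_dummy, dim=-1) * log_probs).sum(dim=-1).mean()
    dummy_grads = torch.autograd.grad(dummy_loss, params, create_graph=True)
    # L2 distance between dummy gradients and the observed gradients.
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(30):
    loss = optimizer.step(closure)

print(f"final gradient-matching loss: {loss.item():.6f}")
```

On larger batches and deeper models this plain L2 gradient matching degrades quickly, which is what motivates the label-recovery tricks (iDLG) and generative image priors explored in the later papers above.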
## Backdoor Attacks in FL
### 2022
| Title | Venue | Link |
|---|---|---|
| Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information | arXiv | |
| Neurotoxin: Durable Backdoors in Federated Learning | arXiv | |
### 2021
| Title | Venue | Link |
|---|---|---|
| WaNet - Imperceptible Warping-based Backdoor Attack | ICLR | |
### 2020
| Title | Venue | Link |
|---|---|---|
| Attack of the Tails: Yes, You Really Can Backdoor Federated Learning | NeurIPS | |
| DBA: Distributed Backdoor Attacks against Federated Learning | ICLR | |
| How To Backdoor Federated Learning | AISTATS | |
### 2019
| Title | Venue | Link |
|---|---|---|
| BadNets: Evaluating Backdooring Attacks on Deep Neural Networks | IEEE Access | |
| Analyzing Federated Learning through an Adversarial Lens | ICML | |
### 2017
| Title | Venue | Link |
|---|---|---|
| Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning | arXiv | |
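
For context on the backdoor papers above: the classic recipe combines BadNets-style data poisoning (stamp a trigger on inputs and relabel them to a target class) with, in the FL setting, the model-replacement scaling from "How To Backdoor Federated Learning" so the malicious update survives averaging. Below is a minimal sketch assuming PyTorch; the patch size, trigger location, target class, and scale are illustrative assumptions, and both function names are hypothetical.

```python
import torch

def add_trigger(images: torch.Tensor, labels: torch.Tensor,
                target_class: int = 7) -> tuple[torch.Tensor, torch.Tensor]:
    """BadNets-style poisoning: stamp a small white patch in the corner
    of each image and relabel everything to the attacker's target class."""
    poisoned = images.clone()
    poisoned[:, :, -4:, -4:] = 1.0  # 4x4 trigger patch (illustrative size)
    return poisoned, torch.full_like(labels, target_class)

def scale_malicious_update(update: dict, num_clients: int) -> dict:
    """Model replacement: scale the malicious delta so it survives
    FedAvg's 1/num_clients averaging and dominates the global model."""
    return {name: num_clients * delta for name, delta in update.items()}

# Usage on dummy CIFAR-shaped data:
x = torch.rand(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
x_poisoned, y_poisoned = add_trigger(x, y)
malicious = scale_malicious_update({"w": torch.randn(3)}, num_clients=100)
```

Distributed variants such as DBA split the trigger into pieces planted by multiple malicious clients, which makes the poisoned updates harder to detect individually.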
## Untargeted Attacks in FL
### 2022
| Title | Venue | Link |
|---|---|---|
| Learning to Attack Federated Learning: A Model-based Reinforcement Learning Attack Framework | NeurIPS | |
| Poisoning Deep Learning Based Recommender Model in Federated Learning Scenarios | IJCAI | |
### 2021
| Title | Venue | Link |
|---|---|---|
| Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning | NDSS | |
### 2020
| Title | Venue | Link |
|---|---|---|
| Local Model Poisoning Attacks to Byzantine-Robust Federated Learning | USENIX Security | |
| Fall of Empires: Breaking Byzantine-tolerant SGD by Inner Product Manipulation | UAI | |
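
The untargeted (Byzantine) attacks above generally craft malicious updates that drag the aggregate away from the honest descent direction. Below is a minimal sketch of the idea, assuming plain FedAvg aggregation over parameter-delta dicts; the negate-and-scale rule is a simplified stand-in in the spirit of inner-product manipulation, the scale factor is an arbitrary assumption, and both function names are hypothetical.

```python
import torch

def flip_update(honest_update: dict, scale: float = 5.0) -> dict:
    """Submit the negated, scaled honest delta so the average moves
    against the descent direction."""
    return {name: -scale * delta for name, delta in honest_update.items()}

def fedavg(updates: list[dict]) -> dict:
    """Plain (unweighted) FedAvg over parameter-delta dicts."""
    return {k: sum(u[k] for u in updates) / len(updates)
            for k in updates[0]}

honest = {"w": torch.ones(3)}
aggregated = fedavg([honest, honest, flip_update(honest)])
print(aggregated["w"])  # tensor([-1., -1., -1.]): one attacker outweighs two honest clients
```

Byzantine-robust aggregation rules such as Krum or trimmed mean blunt this naive version, which is precisely what the optimized attacks above (e.g., the NDSS 2021 paper) are designed to circumvent.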