
# Attacks in FL
Here we collect recent work on attacks against federated learning (FL). Each of the three attack categories below ends with a short illustrative code sketch of the attack family it covers.
|[Survey](#survey)|[Privacy Attacks](#privacy-attacks-in-fl)|[Backdoor Attacks](#backdoor-attacks-in-fl)|[Untargeted Attacks](#untargeted-attacks-in-fl)|
## Survey
| Title | Venue | Link | Year |
| --- | --- | --- | --- |
| A Survey on Gradient Inversion: Attacks, Defenses and Future Directions | arXiv | [pdf](https://arxiv.org/pdf/2206.07284.pdf) | 2022 |
| Threats to Federated Learning: A Survey | arXiv | [pdf](https://arxiv.org/pdf/2003.02133.pdf) | 2020 |
## Privacy Attacks in FL
### 2022
| Title | Venue | Link |
| ------------------------------------------------------------ | ---------- |---------------------------------------------|
| Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models| ICLR | [pdf](https://openreview.net/pdf?id=fwzUgo0FM9v) |
|Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification|arXiv|[pdf](https://arxiv.org/pdf/2202.00580.pdf)|
|Bayesian Framework for Gradient Leakage|ICLR|[pdf](https://openreview.net/pdf?id=f2lrIbGx3x7)|
|Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage|CVPR|[pdf](https://openaccess.thecvf.com/content/CVPR2022/papers/Li_Auditing_Privacy_Defenses_in_Federated_Learning_via_Generative_Gradient_Leakage_CVPR_2022_paper.pdf)|
### 2021
| Title | Venue | Link |
| --- | --- | --- |
| See Through Gradients: Image Batch Recovery via GradInversion | CVPR | [pdf](https://openaccess.thecvf.com/content/CVPR2021/papers/Yin_See_Through_Gradients_Image_Batch_Recovery_via_GradInversion_CVPR_2021_paper.pdf) |
| Gradient disaggregation: Breaking privacy in federated learning by reconstructing the user participant matrix | ICML | [pdf](http://proceedings.mlr.press/v139/lam21b/lam21b.pdf) |
| Evaluating gradient inversion attacks and defenses in federated learning|NeurIPS|[pdf](https://proceedings.neurips.cc/paper/2021/file/3b3fff6463464959dcd1b68d0320f781-Paper.pdf)|
| Catastrophic data leakage in vertical federated learning |NeurIPS|[pdf](https://proceedings.neurips.cc/paper/2021/file/08040837089cdf46631a10aca5258e16-Paper.pdf)|
| Gradient inversion with generative image prior|NeurIPS|[pdf](https://proceedings.neurips.cc/paper/2021/file/fa84632d742f2729dc32ce8cb5d49733-Paper.pdf)|
| R-GAP: Recursive Gradient Attack on Privacy|ICLR|[pdf](https://openreview.net/pdf?id=RSU17UoKfJF)|
| Understanding Training-Data Leakage from Gradients in Neural Networks for Image Classification |NeurIPS workshop|[pdf](https://arxiv.org/pdf/2111.10178.pdf)|
|Soteria: Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective|CVPR|[pdf](https://openaccess.thecvf.com/content/CVPR2021/papers/Sun_Soteria_Provable_Defense_Against_Privacy_Leakage_in_Federated_Learning_From_CVPR_2021_paper.pdf)|
|Reconstruction Attack on Instance Encoding for Language Understanding|EMNLP|[pdf](https://aclanthology.org/2021.emnlp-main.154.pdf)|
|Source Inference Attacks in Federated Learning|ICDM|[pdf](https://arxiv.org/pdf/2109.05659.pdf)|
|TAG: Gradient Attack on Transformer-based Language Models|EMNLP (Findings)|[pdf](https://aclanthology.org/2021.findings-emnlp.305.pdf)|
|Unleashing the Tiger: Inference Attacks on Split Learning|CCS|[pdf](https://dl.acm.org/doi/pdf/10.1145/3460120.3485259)|
### 2020
| Title | Venue | Link |
| --- | --- | --- |
| iDLG: Improved Deep Leakage from Gradients | arXiv | [pdf](https://arxiv.org/pdf/2001.02610.pdf) |
| A framework for evaluating client privacy leakages in federated learning | ESORICS | [pdf](https://arxiv.org/pdf/2004.10397.pdf) |
| Inverting Gradients - How Easy Is It to Break Privacy in Federated Learning? |NeurIPS| [pdf](https://proceedings.neurips.cc/paper/2020/file/c4ede56bbd98819ae6112b20ac6bf145-Paper.pdf)|
| SAPAG: A Self-Adaptive Privacy Attack from Gradients|arXiv|[pdf](https://arxiv.org/pdf/2009.06228.pdf)|
| Is Private Learning Possible with Instance Encoding?|S&P|[pdf](https://arxiv.org/pdf/2011.05315.pdf)|
### 2019
| Title | Venue | Link |
| --- | --- | --- |
| Deep Leakage from Gradients | NeurIPS | [pdf](https://papers.nips.cc/paper/2019/file/60a6c4002cc7b29142def8871531281a-Paper.pdf) |
| Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning|Infocom|[pdf](https://arxiv.org/pdf/1812.00535.pdf)|
| Exploiting Unintended Feature Leakage in Collaborative Learning|S&P|[pdf](https://arxiv.org/pdf/1805.04049.pdf)|
| Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning|S&P|[pdf](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8835245)|
### 2017
| Title | Venue | Link |
| --- | --- | --- |
| Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning | CCS | [pdf](https://arxiv.org/pdf/1702.07464.pdf) |
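
For context on what the gradient-leakage papers above actually do, here is a minimal PyTorch sketch of the gradient-matching loop popularized by Deep Leakage from Gradients (NeurIPS 2019, listed above): the attacker optimizes a dummy input and a soft dummy label until their gradient matches the gradient shared by a client. The toy model, tensor shapes, and iteration count are illustrative assumptions, not taken from any single paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy model (assumption)
criterion = nn.CrossEntropyLoss()

# The gradient the server observes for one private client sample.
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(criterion(model(x_true), y_true),
                                 model.parameters())

# Attacker state: dummy input and soft dummy label, both optimized.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    pred = model(x_dummy)
    # Cross entropy against the (softmaxed) dummy label, as in DLG.
    loss = torch.mean(torch.sum(
        -torch.softmax(y_dummy, dim=-1) * torch.log_softmax(pred, dim=-1), dim=-1))
    dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    # L2 distance between dummy gradients and the observed gradients.
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(20):
    opt.step(closure)
# x_dummy now approximates x_true, up to optimization error.
```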
## Backdoor Attacks in FL
### 2022
| Title | Venue | Link |
| --- | --- | --- |
|Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information|arXiv|[pdf](https://arxiv.org/pdf/2204.05255.pdf)|
|Neurotoxin: Durable Backdoors in Federated Learning|arXiv|[pdf](https://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-89.pdf)|
### 2021
| Title | Venue | Link |
| --- | --- | --- |
|WaNet - Imperceptible Warping-based Backdoor Attack |ICLR|[pdf](https://arxiv.org/pdf/2102.10369.pdf)|
### 2020
| Title | Venue | Link |
| --- | --- | --- |
|Attack of the Tails: Yes, You Really Can Backdoor Federated Learning|NeurIPS|[pdf](https://papers.nips.cc/paper/2020/file/b8ffa41d4e492f0fad2f13e29e1762eb-Paper.pdf)|
|DBA: Distributed Backdoor Attacks against Federated Learning|ICLR|[pdf](https://openreview.net/pdf?id=rkgyS0VFvr)|
|How To Backdoor Federated Learning|AISTATS|[pdf](https://arxiv.org/pdf/1807.00459.pdf)|
### 2019
| Title | Venue | Link |
| --- | --- | --- |
|BadNets: Evaluating Backdooring Attacks on Deep Neural Networks|IEEE Access|[pdf](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8685687)|
|Analyzing Federated Learning through an Adversarial Lens|ICML|[pdf](https://arxiv.org/pdf/1811.12470.pdf)|
### 2017
| Title | Venue | Link |
| --- | --- | --- |
|Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning|arXiv|[pdf](https://arxiv.org/pdf/1712.05526.pdf)|
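
To make the backdoor threat model concrete, below is a minimal PyTorch-style sketch of BadNets-style pixel-trigger poisoning as a malicious FL client might apply it to its local training data. The trigger shape, target label, and poison fraction are illustrative assumptions, not values from any paper above.

```python
import torch

def poison_batch(x, y, target_label=0, poison_frac=0.3):
    """Stamp a small trigger on a fraction of images and relabel them
    to the attacker's target class (BadNets-style poisoning)."""
    x, y = x.clone(), y.clone()
    n_poison = int(poison_frac * x.size(0))
    x[:n_poison, :, -4:, -4:] = 1.0   # 4x4 white square in the corner (assumed trigger)
    y[:n_poison] = target_label      # attacker-chosen target class
    return x, y

# A malicious client trains locally on poisoned batches; in model-replacement
# attacks ("How To Backdoor Federated Learning") it also scales its update
# before sending it to the server.
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
x_poisoned, y_poisoned = poison_batch(x, y)
```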
## Untargeted Attacks in FL
### 2022
| Title | Venue | Link |
| --- | --- | --- |
|Learning to Attack Federated Learning: A Model-based Reinforcement Learning Attack Framework|NeurIPS|[pdf](https://openreview.net/pdf?id=4OHRr7gmhd4)|
|Poisoning Deep Learning Based Recommender Model in Federated Learning Scenarios | IJCAI |[pdf](https://www.ijcai.org/proceedings/2022/0306.pdf)|
### 2021
| Title | Venue | Link |
| --- | --- | --- |
|Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning|NDSS|[pdf](https://par.nsf.gov/servlets/purl/10286354)|
### 2020
| Title | Venue | Link |
| --- | --- | --- |
|Local Model Poisoning Attacks to Byzantine-Robust Federated Learning|USENIX Security|[pdf](https://www.usenix.org/system/files/sec20summer_fang_prepub.pdf)|
|Fall of Empires: Breaking Byzantine-tolerant SGD by Inner Product Manipulation|UAI|[pdf](http://proceedings.mlr.press/v115/xie20a/xie20a.pdf)|
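
As a rough illustration of the untargeted model-poisoning setting these papers study, the sketch below has Byzantine clients send the negated mean of the honest updates, in the spirit of inner-product manipulation ("Fall of Empires" above). The number of attackers, the scaling factor, and the plain-mean aggregation are illustrative assumptions; real attacks optimize against specific robust aggregators.

```python
import torch

def malicious_update(honest_updates, scale=1.0):
    """Byzantine clients send the negated (estimated) mean of honest updates,
    pushing the aggregate away from the true descent direction."""
    return -scale * torch.stack(honest_updates).mean(dim=0)

# Server-side plain-mean aggregation over honest and Byzantine updates.
honest = [torch.randn(10) for _ in range(8)]   # stand-in client updates
byzantine = [malicious_update(honest)] * 2     # two colluding attackers
aggregate = torch.stack(honest + byzantine).mean(dim=0)
```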