This GitHub repository contains a curated list of backdoor learning resources in the AI/ML/FL domains.
If our repository or survey is useful for your research, please cite our work as follows:
```bibtex
@misc{nguyen2024,
  author       = {Tuan, Nguyen},
  title        = {Backdoor Machine Learning Resources},
  year         = {2024},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/mtuann/backdoor-ai-resources}},
  commit       = {8f69fd712c05740ebc082b2d2477bf50033f4691}
}
```
In this repository, we provide continuously updated papers on backdoor learning. The papers are collected from the dblp database (all venues) and are current as of May 11, 2024.
To search the papers, you can visit the website.
If you find this repository useful, please give it a star and fork it.
We divide the papers into the following categories:
- Survey
- Federated Learning (FL)
- Machine Unlearning
- Diffusion
- Transformer
- Transfer Learning
- Watermarking
- Few-Shot Learning
- 3D
- Clean-label Backdoor
- Natural Language Processing (NLP)
- Automatic Speech Recognition (ASR)
- Face Recognition
- Time Series
- Certified
- Robustness
- Split Learning
- Physical Backdoor
- Image-Video
- Segmentation
- Perturbation
- Imperceptible
- Graph Learning
- Foundation Models
- Reinforcement Learning
- Others
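Many of the data-poisoning attacks catalogued below follow the BadNets recipe: stamp a small trigger patch onto a fraction of the training images and relabel them with an attacker-chosen target class, so the trained model misclassifies any triggered input. A minimal illustrative sketch (function names, the patch shape, and the poisoning rate are our own choices, not any specific paper's method):

```python
import numpy as np

def stamp_trigger(image, target_label, patch_value=1.0, patch_size=3):
    """Return a poisoned copy of `image` with a small bright patch
    in the bottom-right corner, paired with the attacker's target label."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value  # the trigger
    return poisoned, target_label

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Poison a fraction `rate` of a grayscale image dataset in place.

    Returns the modified arrays and the poisoned indices."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i], labels[i] = stamp_trigger(images[i], target_label)
    return images, labels, idx
```

At test time, the attacker stamps the same patch on any input to flip the model's prediction to `target_label`, while clean accuracy stays largely unaffected.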
## Survey

No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning. | CoRR | 2024 | Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, Mingli Zhu, Ruotong Wang 0008, Li Liu, Chao Shen | abs/2401.15002 |
2 | Backdoor attacks and defenses in federated learning: Survey, challenges and future research directions. | Eng. Appl. Artif. Intell. | 2024 | Thuy Dung Nguyen, Tuan Nguyen, Phi Le Nguyen, Hieu H. Pham 0001, Khoa D. Doan, Kok-Seng Wong | 127 |
3 | A Comprehensive Survey on Backdoor Attacks and Their Defenses in Face Recognition Systems. | IEEE Access | 2024 | Quentin Le Roux, Eric Bourbao, Yannick Teglia, Kassem Kallas | 12 |
4 | Backdoor Attacks to Deep Neural Networks: A Survey of the Literature, Challenges, and Future Research Directions. | IEEE Access | 2024 | Orson Mengara, Anderson R. Avila, Tiago H. Falk | 12 |
5 | Backdoor Learning: A Survey. | IEEE Trans. Neural Networks Learn. Syst. | 2024 | Yiming Li 0004, Yong Jiang 0001, Zhifeng Li 0001, Shu-Tao Xia | 35 |
6 | NLPSweep: A comprehensive defense scheme for mitigating NLP backdoor attacks. | Inf. Sci. | 2024 | Tao Xiang 0001, Fei Ouyang, Di Zhang, Chunlong Xie, Hao Wang 0003 | 661 |
7 | A Comprehensive Overview of Backdoor Attacks in Large Language Models within Communication Networks. | CoRR | 2023 | Haomiao Yang, Kunlan Xiang, Hongwei Li 0001, Rongxing Lu | abs/2308.14367 |
8 | Adversarial Machine Learning: A Systematic Survey of Backdoor Attack, Weight Attack and Adversarial Example. | CoRR | 2023 | Baoyuan Wu, Li Liu, Zihao Zhu, Qingshan Liu, Zhaofeng He, Siwei Lyu | abs/2302.09457 |
9 | Backdoor Attacks against Voice Recognition Systems: A Survey. | CoRR | 2023 | Baochen Yan, Jiahe Lan, Zheng Yan 0002 | abs/2307.13643 |
10 | Backdoor Attacks and Countermeasures in Natural Language Processing Models: A Comprehensive Security Review. | CoRR | 2023 | Pengzhou Cheng, Zongru Wu, Wei Du, Haodong Zhao, Gongshen Liu | abs/2309.06055 |
11 | Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network. | CoRR | 2023 | Fan Liu, Siqi Lai, Yansong Ning, Hao Liu 0026 | abs/2306.10351 |
12 | Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey. | CoRR | 2023 | Yichen Wan, Youyang Qu, Wei Ni 0001, Yong Xiang 0001, Longxiang Gao, Ekram Hossain 0001 | abs/2312.08667 |
13 | Backdoor Attacks to Deep Learning Models and Countermeasures: A Survey. | IEEE Open J. Comput. Soc. | 2023 | Yudong Li, Shigeng Zhang, Weiping Wang 0003, Hong Song | 4 |
14 | Backdoors Against Natural Language Processing: A Review. | IEEE Secur. Priv. | 2022 | Shaofeng Li, Tian Dong, Benjamin Zi Hao Zhao, Minhui Xue, Suguo Du, Haojin Zhu | 20 |
15 | A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks. | NeurIPS | 2022 | Ganqu Cui, Lifan Yuan, Bingxiang He, Yangyi Chen, Zhiyuan Liu 0001, Maosong Sun 0001 | |
16 | BackdoorBench: A Comprehensive Benchmark of Backdoor Learning. | NeurIPS | 2022 | Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, Chao Shen | |
17 | A Survey on Backdoor Attack and Defense in Natural Language Processing. | QRS | 2022 | Xuan Sheng, Zhaoyang Han, Piji Li, Xiangmao Chang | |
18 | Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks. | ICML | 2021 | Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein | |
19 | Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review. | CoRR | 2020 | Yansong Gao, Bao Gia Doan, Zhi Zhang 0001, Siqi Ma, Jiliang Zhang 0002, Anmin Fu, Surya Nepal, Hyoungshick Kim | abs/2007.10760 |
20 | A Benchmark Study Of Backdoor Data Poisoning Defenses For Deep Neural Network Classifiers And A Novel Defense. | MLSP | 2019 | Zhen Xiang, David J. Miller 0001, George Kesidis | |
## Federated Learning (FL)

No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning. | AAAI | 2024 | Tao Liu, Yuhang Zhang, Zhu Feng, Zhiqin Yang, Chen Xu 0008, Dapeng Man, Wu Yang 0001 | |
2 | Resisting Backdoor Attacks in Federated Learning via Bidirectional Elections and Individual Perspective. | AAAI | 2024 | Zhen Qin, Feiyi Chen, Chen Zhi, Xueqiang Yan, Shuiguang Deng | |
3 | Watermarking in Secure Federated Learning: A Verification Framework Based on Client-Side Backdooring. | ACM Trans. Intell. Syst. Technol. | 2024 | Wenyuan Yang, Shuo Shao, Yue Yang, Xiyao Liu 0001, Ximeng Liu, Zhihua Xia, Gerald Schaefer, Hui Fang 0003 | 15 |
4 | Invariant Aggregator for Defending against Federated Backdoor Attacks. | AISTATS | 2024 | Xiaoyang Wang, Dimitrios Dimitriadis, Sanmi Koyejo, Shruti Tople | |
5 | Spikewhisper: Temporal Spike Backdoor Attacks on Federated Neuromorphic Learning over Low-power Devices. | CoRR | 2024 | Hanqing Fu, Gaolei Li, Jun Wu 0001, Jianhua Li 0001, Xi Lin 0003, Kai Zhou 0001, Yuchen Liu | abs/2403.18607 |
6 | Time-Distributed Backdoor Attacks on Federated Spiking Learning. | CoRR | 2024 | Gorka Abad, Stjepan Picek, Aitor Urbieta | abs/2402.02886 |
7 | Federated learning backdoor attack detection with persistence diagram. | Comput. Secur. | 2024 | Zihan Ma, Tianchong Gao | 136 |
8 | Universal adversarial backdoor attacks to fool vertical federated learning. | Comput. Secur. | 2024 | Peng Chen, Xin Du, Zhihui Lu 0002, Hongfeng Chai | 137 |
9 | MITDBA: Mitigating Dynamic Backdoor Attacks in Federated Learning for IoT Applications. | IEEE Internet Things J. | 2024 | Yongkang Wang, Di-Hua Zhai, Dongyu Han, Yuyin Guan, Yuanqing Xia | 11 |
10 | PerVK: A Robust Personalized Federated Framework to Defend Against Backdoor Attacks for IoT Applications. | IEEE Trans. Ind. Informatics | 2024 | Yongkang Wang, Di-Hua Zhai, Yuanqing Xia, Danyang Liu | 20 |
11 | Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning. | IEEE Trans. Inf. Forensics Secur. | 2024 | Ying He, Zhili Shen, Jingyu Hua, Qixuan Dong, Jiacheng Niu, Wei Tong, Xu Huang, Chen Li, Sheng Zhong 0002 | 19 |
12 | Privacy-Enhancing and Robust Backdoor Defense for Federated Learning on Heterogeneous Data. | IEEE Trans. Inf. Forensics Secur. | 2024 | Zekai Chen, Shengxing Yu, Mingyuan Fan, Ximeng Liu, Robert H. Deng | 19 |
13 | Unveiling Backdoor Risks Brought by Foundation Models in Heterogeneous Federated Learning. | PAKDD | 2024 | Xi Li, Chen Wu, Jiaqi Wang | |
14 | On the Vulnerability of Backdoor Defenses for Federated Learning. | AAAI | 2023 | Pei Fang, Jinghui Chen | |
15 | Poisoning with Cerberus: Stealthy and Colluded Backdoor Attack against Federated Learning. | AAAI | 2023 | Xiaoting Lyu, Yufei Han, Wei Wang, Jingkai Liu, Bin Wang, Jiqiang Liu, Xiangliang Zhang 0001 | |
16 | FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks. | ACSAC | 2023 | Jorge Castillo, Phillip Rieger, Hossein Fereidooni, Qian Chen 0019, Ahmad-Reza Sadeghi | |
17 | Identifying Backdoor Attacks in Federated Learning via Anomaly Detection. | APWeb/WAIM | 2023 | Yuxi Mi, Yiheng Sun, Jihong Guan, Shuigeng Zhou | |
18 | PerDoor: Persistent Backdoors in Federated Learning using Adversarial Perturbations. | COINS | 2023 | Manaar Alam, Esha Sarkar, Michail Maniatakos | |
19 | Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning. | CoRR | 2023 | Taejin Kim, Jiarui Li, Shubhranshu Singh, Nikhil Madaan, Carlee Joe-Wong | abs/2310.11594 |
20 | BAGEL: Backdoor Attacks against Federated Contrastive Learning. | CoRR | 2023 | Yao Huang, Kongyang Chen, Jiannong Cao 0001, Jiaxing Shen, Shaowei Wang 0003, Yun Peng, Weilong Peng, Kechao Cai | abs/2311.16113 |
21 | Backdoor Attacks in Peer-to-Peer Federated Learning. | CoRR | 2023 | Gökberk Yar, Cristina Nita-Rotaru, Alina Oprea | abs/2301.09732 |
22 | Backdoor Federated Learning by Poisoning Backdoor-Critical Layers. | CoRR | 2023 | Haomin Zhuang, Mingxian Yu, Hao Wang 0022, Yang Hua, Jian Li 0008, Xu Yuan 0001 | abs/2308.04466 |
23 | Backdoor Threats from Compromised Foundation Models to Federated Learning. | CoRR | 2023 | Xi Li, Songhe Wang, Chen Wu, Hao Zhou, Jiaqi Wang 0002 | abs/2311.00144 |
24 | BadVFL: Backdoor Attacks in Vertical Federated Learning. | CoRR | 2023 | Mohammad Naseri, Yufei Han, Emiliano De Cristofaro | abs/2304.08847 |
25 | DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning. | CoRR | 2023 | Wenqiang Sun, Sen Li, Yuchang Sun 0001, Jun Zhang | abs/2305.01267 |
26 | FTA: Stealthy and Adaptive Backdoor Attack with Flexible Triggers on Federated Learning. | CoRR | 2023 | Yanqi Qiao, Dazhuang Liu, Congwen Chen, Rui Wang 0070, Kaitai Liang | abs/2309.00127 |
27 | FedTruth: Byzantine-Robust and Backdoor-Resilient Federated Learning Framework. | CoRR | 2023 | Sheldon C. Ebron Jr., Kan Yang 0001 | abs/2311.10248 |
28 | G2uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering. | CoRR | 2023 | Hao Yu, Chuan Ma, Meng Liu 0014, Xinwang Liu, Zhe Liu 0001, Ming Ding 0001 | abs/2306.04984 |
29 | Get Rid Of Your Trail: Remotely Erasing Backdoors in Federated Learning. | CoRR | 2023 | Manaar Alam, Hithem Lamri, Michail Maniatakos | abs/2304.10638 |
30 | Learning to Backdoor Federated Learning. | CoRR | 2023 | Henger Li, Chen Wu, Sencun Zhu, Zizhan Zheng | abs/2303.03320 |
31 | Mitigating Backdoors in Federated Learning with FLD. | CoRR | 2023 | Yihang Lin, Pengyuan Zhou, Zhiqian Wu, Yong Liao | abs/2303.00302 |
32 | Protect Federated Learning Against Backdoor Attacks via Data-Free Trigger Generation. | CoRR | 2023 | Yanxin Yang, Ming Hu 0003, Yue Cao, Jun Xia, Yihao Huang 0001, Yang Liu 0003, Mingsong Chen | abs/2308.11333 |
33 | Universal Adversarial Backdoor Attacks to Fool Vertical Federated Learning in Cloud-Edge Collaboration. | CoRR | 2023 | Peng Chen, Xin Du, Zhihui Lu 0002, Hongfeng Chai | abs/2304.11432 |
34 | You Can Backdoor Personalized Federated Learning. | CoRR | 2023 | Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li 0067, Ming Gao 0001 | abs/2307.15971 |
35 | ADFL: Defending backdoor attacks in federated learning via adversarial distillation. | Comput. Secur. | 2023 | Chengcheng Zhu, Jiale Zhang, Xiaobing Sun 0001, Bing Chen 0002, Weizhi Meng 0001 | 132 |
36 | LR-BA: Backdoor attack against vertical federated learning using local latent representations. | Comput. Secur. | 2023 | Yuhao Gu, Yuebin Bai | 129 |
37 | SCFL: Mitigating backdoor attacks in federated learning based on SVD and clustering. | Comput. Secur. | 2023 | Yongkang Wang, Di-Hua Zhai, Yuanqing Xia | 133 |
38 | Practical and General Backdoor Attacks Against Vertical Federated Learning. | ECML/PKDD | 2023 | Yuexin Xuan, Xiaojun Chen 0004, Zhendong Zhao, Bisheng Tang, Ye Dong | |
39 | An Investigation of Recent Backdoor Attacks and Defenses in Federated Learning. | FMEC | 2023 | Qiuxian Chen, Yizheng Tao | |
40 | Genetic Algorithm-Based Dynamic Backdoor Attack on Federated Learning-Based Network Traffic Classification. | FMEC | 2023 | Mahmoud Nazzal, Nura Aljaafari, Ahmad H. Sawalmeh, Abdallah Khreishah, Muhammad Anan, Abdulelah Abdallah Algosaibi, Mohammed Alnaeem, Adel Aldalbahi, Abdulaziz Alhumam, Conrado P. Vizcarra, Shadan Alhamed | |
41 | An adaptive robust defending algorithm against backdoor attacks in federated learning. | Future Gener. Comput. Syst. | 2023 | Yongkang Wang, Di-Hua Zhai, Yongping He, Yuanqing Xia | 143 |
42 | Knowledge Distillation Based Defense for Audio Trigger Backdoor in Federated Learning. | GLOBECOM | 2023 | Yu-Wen Chen, Bo-Hsu Ke, Bozhong Chen, Si-Rong Chiu, Chun-Wei Tu, Jian-Jhih Kuo | |
43 | PBE-Plan: Periodic Backdoor Erasing Plan for Trustworthy Federated Learning. | HPCC/DSS/SmartCity/DependSys | 2023 | Bei Chen, Gaolei Li, Mingzhe Chen, Yuchen Liu, Xiaoyu Yi 0003, Jianhua Li 0001 | |
44 | Backdoor Attack Against Automatic Speaker Verification Models in Federated Learning. | ICASSP | 2023 | Dan Meng, Xue Wang, Jun Wang | |
45 | FedMC: Federated Learning with Mode Connectivity Against Distributed Backdoor Attacks. | ICC | 2023 | Weiqi Wang, Chenhan Zhang, Shushu Liu, Mingjian Tang 0002, An Liu 0002, Shui Yu 0001 | |
46 | Successive Interference Cancellation Based Defense for Trigger Backdoor in Federated Learning. | ICC | 2023 | Yu-Wen Chen, Bo-Hsu Ke, Bozhong Chen, Si-Rong Chiu, Chun-Wei Tu, Jian-Jhih Kuo | |
47 | Towards Defending Adaptive Backdoor Attacks in Federated Learning. | ICC | 2023 | Han Yang, Dongbing Gu, Jianhua He | |
48 | Multi-metrics adaptively identifies backdoors in Federated learning. | ICCV | 2023 | Siquan Huang, Yijiang Li, Chong Chen, Leyu Shi, Ying Gao 0004 | |
49 | ScanFed: Scalable Behavior-Based Backdoor Detection in Federated Learning. | ICDCS | 2023 | Rui Ning, Jiang Li, Chunsheng Xin, Chonggang Wang, Xu Li, Robert Gazda, Jin-Hee Cho, Hongyi Wu | |
50 | A Practical Clean-Label Backdoor Attack with Limited Information in Vertical Federated Learning. | ICDM | 2023 | Peng Chen, Jirui Yang, Junxiong Lin, Zhihui Lu 0002, Qiang Duan, Hongfeng Chai | |
51 | FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning. | ICLR | 2023 | Kaiyuan Zhang 0002, Guanhong Tao 0001, Qiuling Xu, Siyuan Cheng 0005, Shengwei An, Yingqi Liu, Shiwei Feng 0002, Guangyu Shen, Pin-Yu Chen, Shiqing Ma, Xiangyu Zhang 0001 | |
52 | Fedward: Flexible Federated Backdoor Defense Framework with Non-IID Data. | ICME | 2023 | Zekai Chen, Fuyi Wang, Zhiwei Zheng, Ximeng Liu, Yujie Lin | |
53 | Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning. | ICML | 2023 | Yanbo Dai, Songze Li | |
54 | RPFL: Robust and Privacy Federated Learning against Backdoor and Sample Inference Attacks. | ICPADS | 2023 | Di Xiao, Zhuyang Yu, Lvjun Chen | |
55 | SemSBA: Semantic-perturbed Stealthy Backdoor Attack on Federated Semi-supervised Learning. | ICPADS | 2023 | Yingrui Tong, Jun Feng, Gaolei Li, Xi Lin 0003, Chengcheng Zhao, Xiaoyu Yi, Jianhua Li 0001 | |
56 | A Max-Min Security Game for Coordinated Backdoor Attacks on Federated Learning. | IEEE Big Data | 2023 | Omar Abdel Wahab 0001, Anderson Avila | |
57 | Facilitating Early-Stage Backdoor Attacks in Federated Learning With Whole Population Distribution Inference. | IEEE Internet Things J. | 2023 | Tian Liu, Xueyang Hu, Tao Shu | 10 |
58 | SAFELearning: Secure Aggregation in Federated Learning With Backdoor Detectability. | IEEE Trans. Inf. Forensics Secur. | 2023 | Zhuosheng Zhang 0003, Jiarui Li, Shucheng Yu, Christian Makaya | 18 |
59 | Backdoor Attacks and Defenses in Federated Learning: State-of-the-Art, Taxonomy, and Future Directions. | IEEE Wirel. Commun. | 2023 | Xueluan Gong, Yanjiao Chen, Qian Wang 0002, Weihan Kong | 30 |
60 | FedGrad: Mitigating Backdoor Attacks in Federated Learning Through Local Ultimate Gradients Inspection. | IJCNN | 2023 | Thuy Dung Nguyen, Anh Duy Nguyen, Thanh-Hung Nguyen, Kok-Seng Wong, Huy Hieu Pham 0001, Truong Thao Nguyen, Phi Le Nguyen | |
61 | Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios. | IJCNN | 2023 | Haochen Mei, Gaolei Li, Jun Wu 0001, Longfei Zheng | |
62 | Robust Federated Learning against Backdoor Attackers. | INFOCOM Workshops | 2023 | Priyesh Ranjan, Ashish Gupta 0012, Federico Coro, Sajal K. Das 0001 | |
63 | Content Style-triggered Backdoor Attack in Non-IID Federated Learning via Generative AI. | ISPA/BDCloud/SocialCom/SustainCom | 2023 | Jinke Cheng, Gaolei Li, Xi Lin 0003, Hao Peng 0001, Jianhua Li 0001 | |
64 | Efficient and persistent backdoor attack by boundary trigger set constructing against federated learning. | Inf. Sci. | 2023 | Deshan Yang, Senlin Luo, Jinjie Zhou, Limin Pan, Xiaonan Yang, Jiyuan Xing | 651 |
65 | IPCADP-Equalizer: An Improved Multibalance Privacy Preservation Scheme against Backdoor Attacks in Federated Learning. | Int. J. Intell. Syst. | 2023 | Wenjuan Lian, Yichi Zhang, Xin Chen, Bin Jia, Xiaosong Zhang 0001 | 2023 |
66 | Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks. | KDD | 2023 | Zeyu Qin, Liuyi Yao, Daoyuan Chen, Yaliang Li, Bolin Ding, Minhao Cheng | |
67 | Backdoor attack and defense in federated generative adversarial network-based medical image synthesis. | Medical Image Anal. | 2023 | Ruinan Jin, Xiaoxiao Li | 90 |
68 | Evil vs evil: using adversarial examples to against backdoor attack in federated learning. | Multim. Syst. | 2023 | Tao Liu, Mingjun Li, Haibin Zheng, Zhaoyan Ming, Jinyin Chen | 29 |
69 | A3FL: Adversarially Adaptive Backdoor Attacks to Federated Learning. | NeurIPS | 2023 | Hangfan Zhang, Jinyuan Jia, Jinghui Chen, Lu Lin 0001, Dinghao Wu | |
70 | Fed-FA: Theoretically Modeling Client Data Divergence for Federated Language Backdoor Defense. | NeurIPS | 2023 | Zhiyuan Zhang 0001, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, Xu Sun 0001 | |
71 | FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning. | NeurIPS | 2023 | Jinyuan Jia, Zhuowen Yuan, Dinuka Sahabandu, Luyao Niu, Arezoo Rajabi, Bhaskar Ramasubramanian, Bo Li, Radha Poovendran | |
72 | IBA: Towards Irreversible Backdoor Attacks in Federated Learning. | NeurIPS | 2023 | Thuy Dung Nguyen, Tuan Nguyen, Anh Tran, Khoa D. Doan, Kok-Seng Wong | |
73 | Lockdown: Backdoor Defense for Federated Learning with Isolated Subspace Training. | NeurIPS | 2023 | Tiansheng Huang, Sihao Hu, Ka Ho Chow, Fatih Ilhan, Selim F. Tekin, Ling Liu 0001 | |
74 | Defending Federated Learning from Backdoor Attacks: Anomaly-Aware FedAVG with Layer-Based Aggregation. | PIMRC | 2023 | Habib Ullah Manzoor, Ahsan Raza Khan, Tahir Sher, Wasim Ahmad, Ahmed Zoha | |
75 | FedDefender: Backdoor Attack Defense in Federated Learning. | SE4SafeML@SIGSOFT FSE | 2023 | Waris Gill, Ali Anwar 0001, Muhammad Ali Gulzar | |
76 | BATFL: Battling Backdoor Attacks in Federated Learning. | SIN | 2023 | Mayank Kumar, Radha Agrawal, Priyanka Singh | |
77 | 3DFed: Adaptive and Extensible Framework for Covert Backdoor Attack in Federated Learning. | SP | 2023 | Haoyang Li, Qingqing Ye 0001, Haibo Hu 0001, Jin Li 0002, Leixia Wang, Chengfang Fang, Jie Shi | |
78 | BayBFed: Bayesian Backdoor Defense for Federated Learning. | SP | 2023 | Kavita Kumari, Phillip Rieger, Hossein Fereidooni, Murtuza Jadliwala, Ahmad-Reza Sadeghi | |
79 | Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning. | Sensors | 2023 | Jie Yang, Jun Zheng 0007, Haochen Wang, Jiaxing Li, Haipeng Sun, Weifeng Han, Nan Jiang, Yu-An Tan 0001 | 23 |
80 | FUBA: Federated Uncovering of Backdoor Attacks for Heterogeneous Data. | TPS-ISA | 2023 | Fabiola Espinoza Castellon, Deepika Singh, Aurelien Mayoue, Cedric Gouy-Pailler | |
81 | Poison Egg: Scrambling Federated Learning with Delayed Backdoor Attack. | UbiSec | 2023 | Masayoshi Tsutsui, Tatsuya Kaneko, Shinya Takamaeda-Yamazaki | |
82 | Defending against Poisoning Backdoor Attacks on Federated Meta-learning. | ACM Trans. Intell. Syst. Technol. | 2022 | Chien-Lun Chen, Sara Babakniya, Marco Paolieri, Leana Golubchik | 13 |
83 | On the Neural Backdoor of Federated Generative Models in Edge Computing. | ACM Trans. Internet Techn. | 2022 | Derui Wang, Sheng Wen, Alireza Jolfaei, Mohammad Sayad Haghighi, Surya Nepal, Yang Xiang 0001 | 22 |
84 | More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks. | ACSAC | 2022 | Jing Xu, Rui Wang 0070, Stefanos Koffas, Kaitai Liang, Stjepan Picek | |
85 | A Federated Learning Backdoor Attack Defense. | BigDataService | 2022 | Jin Yan, Yingchi Mao, Hua Nie, Zijian Tu, Jianxin Huang | |
86 | A Knowledge Distillation-Based Backdoor Attack in Federated Learning. | CoRR | 2022 | Yifan Wang, Wei Fan, Keke Yang, Naji Alhusaini, Jing Li 0055 | abs/2208.06176 |
87 | ARIBA: Towards Accurate and Robust Identification of Backdoor Attacks in Federated Learning. | CoRR | 2022 | Yuxi Mi, Jihong Guan, Shuigeng Zhou | abs/2202.04311 |
88 | Backdoor Defense in Federated Learning Using Differential Testing and Outlier Detection. | CoRR | 2022 | Yein Kim, Huili Chen, Farinaz Koushanfar | abs/2202.11196 |
89 | Client-Wise Targeted Backdoor in Federated Learning. | CoRR | 2022 | Gorka Abad, Servio Paguada, Stjepan Picek, Víctor Julio Ramírez-Durán, Aitor Urbieta | abs/2203.08689 |
90 | Close the Gate: Detecting Backdoored Models in Federated Learning based on Client-Side Deep Layer Output Analysis. | CoRR | 2022 | Phillip Rieger, Torsten Krauß, Markus Miettinen, Alexandra Dmitrienko, Ahmad-Reza Sadeghi | abs/2210.07714 |
91 | Invariant Aggregator for Defending Federated Backdoor Attacks. | CoRR | 2022 | Xiaoyang Wang, Dimitrios Dimitriadis, Sanmi Koyejo, Shruti Tople | abs/2210.01834 |
92 | Model Transferring Attacks to Backdoor HyperNetwork in Personalized Federated Learning. | CoRR | 2022 | Phung Lai, NhatHai Phan, Abdallah Khreishah, Issa Khalil, Xintao Wu | abs/2201.07063 |
93 | PerDoor: Persistent Non-Uniform Backdoors in Federated Learning using Adversarial Perturbations. | CoRR | 2022 | Manaar Alam, Esha Sarkar, Michail Maniatakos | abs/2205.13523 |
94 | RFLBAT: A Robust Federated Learning Algorithm against Backdoor Attack. | CoRR | 2022 | Yongkang Wang, Dihua Zhai, Yufeng Zhan, Yuanqing Xia | abs/2201.03772 |
95 | Technical Report: Assisting Backdoor Federated Learning with Whole Population Knowledge Alignment. | CoRR | 2022 | Tian Liu, Xueyang Hu, Tao Shu | abs/2207.12327 |
96 | Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning. | CoRR | 2022 | Yuxin Wen, Jonas Geiping, Liam Fowl, Hossein Souri, Rama Chellappa, Micah Goldblum, Tom Goldstein | abs/2210.09305 |
97 | Towards a Defense against Backdoor Attacks in Continual Federated Learning. | CoRR | 2022 | Shuaiqi Wang, Jonathan Hayase, Giulia Fanti, Sewoong Oh | abs/2205.11736 |
98 | XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning. | CoRR | 2022 | Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang 0006, Xiaodong Lin, Xiali Hei 0001 | abs/2212.13675 |
99 | Defense against backdoor attack in federated learning. | Comput. Secur. | 2022 | Shiwei Lu, Ruihu Li, Wenbin Liu, Xuan Chen | 121 |
100 | Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling. | EMNLP | 2022 | KiYoon Yoo, Nojun Kwak | |
101 | Dim-Krum: Backdoor-Resistant Federated Learning for NLP with Dimension-wise Krum-Based Aggregation. | EMNLP | 2022 | Zhiyuan Zhang 0001, Qi Su 0001, Xu Sun 0001 | |
102 | Propagable Backdoors over Blockchain-based Federated Learning via Sample-Specific Eclipse. | GLOBECOM | 2022 | Zheng Yang 0002, Gaolei Li, Jun Wu 0001, Wu Yang 0001 | |
103 | Against Backdoor Attacks In Federated Learning With Differential Privacy. | ICASSP | 2022 | Lu Miao, Wei Yang 0011, Rong Hu, Lu Li, Liusheng Huang | |
104 | Toward Cleansing Backdoored Neural Networks in Federated Learning. | ICDCS | 2022 | Chen Wu, Xian Yang, Sencun Zhu, Prasenjit Mitra | |
105 | Neurotoxin: Durable Backdoors in Federated Learning. | ICML | 2022 | Zhengming Zhang, Ashwinee Panda, Linyue Song, Yaoqing Yang, Michael W. Mahoney, Prateek Mittal, Kannan Ramchandran, Joseph Gonzalez 0001 | |
106 | A highly efficient, confidential, and continuous federated learning backdoor attack strategy. | ICMLC | 2022 | Jiarui Cao, Liehuang Zhu | |
107 | Coordinated Backdoor Attacks against Federated Learning with Model-Dependent Triggers. | IEEE Netw. | 2022 | Xueluan Gong, Yanjiao Chen, Huayang Huang, Yuqing Liao, Shuai Wang, Qian Wang 0002 | 36 |
108 | Backdoor Federated Learning-Based mmWave Beam Selection. | IEEE Trans. Commun. | 2022 | Zhengming Zhang, Ruming Yang, Xiangyu Zhang 0013, Chunguo Li, Yongming Huang, Luxi Yang | 70 |
109 | Mitigating the Backdoor Attack by Federated Filters for Industrial IoT Applications. | IEEE Trans. Ind. Informatics | 2022 | Boyu Hou, Jiqiang Gao, Xiaojie Guo 0004, Thar Baker, Ying Zhang 0015, Yanlong Wen, Zheli Liu | 18 |
110 | Backdoor attacks-resilient aggregation based on Robust Filtering of Outliers in federated learning for image classification. | Knowl. Based Syst. | 2022 | Nuria Rodríguez Barroso, Eugenio Martínez-Cámara, María Victoria Luzón, Francisco Herrera | 245 |
111 | Breaking Distributed Backdoor Defenses for Federated Learning in Non-IID Settings. | MSN | 2022 | Jijia Yang, Jiangang Shu, Xiaohua Jia | |
112 | Distributed Swift and Stealthy Backdoor Attack on Federated Learning. | NAS | 2022 | Agnideven Palanisamy Sundar, Feng Li 0001, Xukai Zou, Tianchong Gao | |
113 | DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection. | NDSS | 2022 | Phillip Rieger, Thien Duc Nguyen, Markus Miettinen, Ahmad-Reza Sadeghi | |
114 | Robust Federated Learning for Ubiquitous Computing through Mitigation of Edge-Case Backdoor Attacks. | Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. | 2022 | Fatima Elhattab, Sara Bouchenak, Rania Talbi, Vlad Nitu | 6 |
115 | Backdoor Attack is a Devil in Federated GAN-Based Medical Image Synthesis. | SASHIMI@MICCAI | 2022 | Ruinan Jin, Xiaoxiao Li | |
116 | Assisting Backdoor Federated Learning with Whole Population Knowledge Alignment in Mobile Edge Computing. | SECON | 2022 | Tian Liu, Xueyang Hu, Tao Shu | |
117 | Never Too Late: Tracing and Mitigating Backdoor Attacks in Federated Learning. | SRDS | 2022 | Hui Zeng, Tongqing Zhou, Xinyi Wu, Zhiping Cai | |
118 | FLAME: Taming Backdoors in Federated Learning. | USENIX Security Symposium | 2022 | Thien Duc Nguyen, Phillip Rieger, Huili Chen, Hossein Yalame, Helen Möllering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini, Shaza Zeitouni, Farinaz Koushanfar, Ahmad-Reza Sadeghi, Thomas Schneider 0003 | |
119 | Defending against Backdoors in Federated Learning with Robust Learning Rate. | AAAI | 2021 | Mustafa Safa Özdayi, Murat Kantarcioglu, Yulia R. Gel | |
120 | Anomaly Localization in Model Gradients Under Backdoor Attacks Against Federated Learning. | CoRR | 2021 | | abs/2111.14683 |
121 | Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis. | CoRR | 2021 | Zeyuan Yin, Ye Yuan, Panfeng Guo, Pan Zhou | abs/2109.10512 |
122 | Defending Label Inference and Backdoor Attacks in Vertical Federated Learning. | CoRR | 2021 | Yang Liu 0165, Zhihao Yi, Yan Kang 0001, Yuanqin He, Wenhan Liu, Tianyuan Zou, Qiang Yang 0001 | abs/2112.05409 |
123 | SAFELearning: Enable Backdoor Detectability In Federated Learning With Secure Aggregation. | CoRR | 2021 | Zhuosheng Zhang 0003, Jiarui Li, Shucheng Yu, Christian Makaya | abs/2102.02402 |
124 | BaFFLe: Backdoor Detection via Feedback-based Federated Learning. | ICDCS | 2021 | Sébastien Andreina, Giorgia Azzurra Marson, Helen Möllering, Ghassan Karame | |
125 | CRFL: Certifiably Robust Federated Learning against Backdoor Attacks. | ICML | 2021 | Chulin Xie, Minghao Chen 0001, Pin-Yu Chen, Bo Li 0026 | |
126 | Resisting Distributed Backdoor Attacks in Federated Learning: A Dynamic Norm Clipping Approach. | IEEE BigData | 2021 | Yifan Guo, Qianlong Wang, Tianxi Ji, Xufei Wang, Pan Li 0001 | |
127 | FederatedReverse: A Detection and Defense Method Against Backdoor Attacks in Federated Learning. | IH&MMSec | 2021 | Chen Zhao, Yu Wen, Shuailou Li, Fucheng Liu, Dan Meng | |
128 | BatFL: Backdoor Detection on Federated Learning in e-Health. | IWQoS | 2021 | Binhan Xi, Shaofeng Li, Jiachun Li, Hui Liu, Hong Liu, Haojin Zhu | |
129 | Secure Federated Learning Model Verification: A Client-side Backdoor Triggered Watermarking Scheme. | SMC | 2021 | Xiyao Liu 0001, Shuo Shao, Yue Yang, Kangming Wu, Wenyuan Yang, Hui Fang 0003 | |
130 | How To Backdoor Federated Learning. | AISTATS | 2020 | Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, Vitaly Shmatikov | |
131 | Backdoor Attacks on Federated Meta-Learning. | CoRR | 2020 | Chien-Lun Chen, Leana Golubchik, Marco Paolieri | abs/2006.07026 |
132 | Dynamic backdoor attacks against federated learning. | CoRR | 2020 | | abs/2011.07429 |
133 | Mitigating Backdoor Attacks in Federated Learning. | CoRR | 2020 | Chen Wu, Xian Yang, Sencun Zhu, Prasenjit Mitra | abs/2011.01767 |
134 | DBA: Distributed Backdoor Attacks against Federated Learning. | ICLR | 2020 | Chulin Xie, Keli Huang, Pin-Yu Chen, Bo Li 0026 | |
135 | Attack of the Tails: Yes, You Really Can Backdoor Federated Learning. | NeurIPS | 2020 | Hongyi Wang 0001, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee 0001, Dimitris S. Papailiopoulos | |
136 | Can You Really Backdoor Federated Learning? | CoRR | 2019 | Ziteng Sun, Peter Kairouz, Ananda Theertha Suresh, H. Brendan McMahan | abs/1911.07963 |
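A recurring idea in the FL defenses above (e.g., norm bounding in CRFL and the norm-clipping approaches) is limiting how far any single, possibly backdoored, client can move the global model in one round. A minimal sketch, assuming client updates are already flattened into vectors (the function name and clip threshold are illustrative):

```python
import numpy as np

def clipped_mean_aggregate(client_updates, clip_norm=1.0):
    """Average client model updates after clipping each update's L2 norm.

    A malicious client scaling up a backdoored update is rescaled to the
    same norm budget as everyone else, blunting model-replacement attacks."""
    clipped = []
    for delta in client_updates:
        norm = np.linalg.norm(delta)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # only shrink, never grow
        clipped.append(delta * scale)
    return np.mean(clipped, axis=0)
```

Real deployments typically combine this with anomaly detection or noise addition, since clipping alone does not remove a stealthy low-norm backdoor.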
## Machine Unlearning

No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | Backdoor Attacks via Machine Unlearning. | AAAI | 2024 | Zihao Liu, Tianhao Wang 0001, Mengdi Huai, Chenglin Miao | |
2 | UMA: Facilitating Backdoor Scanning via Unlearning-Based Model Ablation. | AAAI | 2024 | Yue Zhao, Congyi Li, Kai Chen 0012 | |
3 | Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning. | CoRR | 2024 | Siyuan Liang, Kuanrong Liu, Jiajun Gong, Jiawei Liang, Yuan Xun, Ee-Chien Chang, Xiaochun Cao | abs/2403.16257 |
4 | Verifying in the Dark: Verifiable Machine Unlearning by Using Invisible Backdoor Triggers. | IEEE Trans. Inf. Forensics Secur. | 2024 | Yu Guo 0003, Yu Zhao, Saihui Hou, Cong Wang 0001, Xiaohua Jia | 19 |
5 | Backdoor Attack through Machine Unlearning. | CoRR | 2023 | Peixin Zhang, Jun Sun 0001, Mingtian Tan, Xinyu Wang | abs/2310.10659 |
6 | Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples. | NeurIPS | 2023 | Shaokui Wei, Mingda Zhang, Hongyuan Zha, Baoyuan Wu | |
7 | Adversarial Unlearning of Backdoors via Implicit Hypergradient. | ICLR | 2022 | Yi Zeng 0005, Si Chen, Won Park, Zhuoqing Mao, Ming Jin 0002, Ruoxi Jia 0001 | |
8 | Backdoor Defense with Machine Unlearning. | INFOCOM | 2022 | Yang Liu 0118, Mingyuan Fan, Cen Chen, Ximeng Liu, Zhuo Ma, Li Wang 0056, Jianfeng Ma 0001 | |
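A common mechanism in the unlearning-based defenses above: once suspected backdoor examples are identified, the model "unlearns" them by ascending their loss while still descending on clean data. A toy sketch on a linear model with squared loss (the function name and the trade-off weight `gamma` are our own illustrative choices, not any specific paper's formulation):

```python
import numpy as np

def unlearn_step(w, X_clean, y_clean, X_bad, y_bad, lr=0.1, gamma=0.5):
    """One unlearning update for a linear model w under squared loss:
    gradient descent on clean data, gradient ascent (weight gamma)
    on suspected backdoor examples."""
    g_clean = X_clean.T @ (X_clean @ w - y_clean) / len(y_clean)
    g_bad = X_bad.T @ (X_bad @ w - y_bad) / len(y_bad)
    return w - lr * (g_clean - gamma * g_bad)
```

Iterating this step raises the loss on the suspected poisoned set (erasing the trigger association) while the clean-data term anchors benign accuracy.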
## Diffusion

No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models. | AAAI | 2024 | Jiachen Zhou, Peizhuo Lv, Yibing Lan, Guozhu Meng, Kai Chen 0012, Hualong Ma | |
2 | Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift. | AAAI | 2024 | Shengwei An, Sheng-Yen Chou, Kaiyuan Zhang 0002, Qiuling Xu, Guanhong Tao 0001, Guangyu Shen, Siyuan Cheng 0005, Shiqing Ma, Pin-Yu Chen, Tsung-Yi Ho, Xiangyu Zhang | |
3 | Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models. | AAAI | 2024 | Yihao Huang 0001, Felix Juefei-Xu, Qing Guo, Jie Zhang, Yutong Wu 0009, Ming Hu 0003, Tianlin Li, Geguang Pu, Yang Liu | |
4 | DisDet: Exploring Detectability of Backdoor Attack on Diffusion Models. | CoRR | 2024 | Yang Sui, Huy Phan, Jinqi Xiao, Tianfang Zhang, Zijie Tang, Cong Shi 0004, Yan Wang 0003, Yingying Chen 0001, Bo Yuan | abs/2402.02739 |
5 | Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion. | CoRR | 2024 | Hossein Souri, Arpit Bansal, Hamid Kazemi, Liam Fowl, Aniruddha Saha, Jonas Geiping, Andrew Gordon Wilson, Rama Chellappa, Tom Goldstein, Micah Goldblum | abs/2403.16365 |
6 | The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline. | CoRR | 2024 | Haonan Wang, Qianli Shen, Yao Tong, Yang Zhang, Kenji Kawaguchi | abs/2401.04136 |
7 | The last Dance : Robust backdoor attack via diffusion models and bayesian approach. | CoRR | 2024 | | abs/2402.05967 |
8 | UFID: A Unified Framework for Input-level Backdoor Detection on Diffusion Models. | CoRR | 2024 | Zihan Guan 0001, Mengxuan Hu, Sheng Li 0001, Anil Vullikanti | abs/2404.01101 |
9 | Diffusion Theory as a Scalpel: Detecting and Purifying Poisonous Dimensions in Pre-trained Language Models Caused by Backdoor or Bias. | ACL | 2023 | Zhiyuan Zhang 0001, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou 0016, Xu Sun 0001 | |
10 | Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning. | ACM Multimedia | 2023 | Shengfang Zhai, Yinpeng Dong, Qingni Shen, Shi Pu, Yuejian Fang, Hang Su 0006 | |
11 | How to Backdoor Diffusion Models? | CVPR | 2023 | Sheng-Yen Chou, Pin-Yu Chen, Tsung-Yi Ho | |
12 | From Trojan Horses to Castle Walls: Unveiling Bilateral Backdoor Effects in Diffusion Models. | CoRR | 2023 | Zhuoshi Pan, Yuguang Yao, Gaowen Liu, Bingquan Shen, H. Vicky Zhao, Ramana Rao Kompella, Sijia Liu 0001 | abs/2311.02373 |
13 | Salient Conditional Diffusion for Defending Against Backdoor Attacks. | CoRR | 2023 | Brandon B. May, N. Joseph Tatro, Piyush Kumar, Nathan Shnidman | abs/2301.13862 |
14 | Zero-Day Backdoor Attack against Text-to-Image Diffusion Models via Personalization. | CoRR | 2023 | Yihao Huang 0001, Qing Guo 0005, Felix Juefei-Xu | abs/2305.10701 |
15 | VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models. | NeurIPS | 2023 | Sheng-Yen Chou, Pin-Yu Chen, Tsung-Yi Ho | |
No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | A Closer Look at Robustness of Vision Transformers to Backdoor Attacks. | WACV | 2024 | Akshayvarun Subramanya, Soroush Abbasi Koohpayegani, Aniruddha Saha, Ajinkya Tejankar, Hamed Pirsiavash | |
2 | Defending Backdoor Attacks on Vision Transformer via Patch Processing. | AAAI | 2023 | Khoa D. Doan, Yingjie Lao, Peng Yang 0013, Ping Li 0001 | |
3 | You Are Catching My Attention: Are Vision Transformers Bad Learners under Backdoor Attacks? | CVPR | 2023 | Zenghui Yuan, Pan Zhou, Kai Zou, Yu Cheng 0001 | |
4 | Tabdoor: Backdoor Vulnerabilities in Transformer-based Neural Networks for Tabular Data. | CoRR | 2023 | Bart Pleiter, Behrad Tajalli, Stefanos Koffas, Gorka Abad, Jing Xu, Martha A. Larson, Stjepan Picek | abs/2311.07550 |
5 | DBIA: Data-Free Backdoor Attack Against Transformer Networks. | ICME | 2023 | Peizhuo Lv, Hualong Ma, Jiachen Zhou, Ruigang Liang, Kai Chen 0012, Shengzhi Zhang, Yunfei Yang | |
6 | "We Must Protect the Transformers": Understanding Efficacy of Backdoor Attack Mitigation on Transformer Models. | SPACE | 2023 | Rohit Raj, Biplab Roy, Abir Das, Mainack Mondal | |
7 | Backdoor Attacks on Vision Transformers. | CoRR | 2022 | Akshayvarun Subramanya, Aniruddha Saha, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash | abs/2206.08477 |
8 | Saisiyat Is Where It Is At! Insights Into Backdoors And Debiasing Of Cross Lingual Transformers For Named Entity Recognition. | IEEE Big Data | 2022 | Ricardo A. Calix, J. J. Ben-Joseph, Nina Lopatina, Ryan Ashley, Mona Gogia, George Sieniawski, Andrea Brennen | |
9 | Piccolo: Exposing Complex Backdoors in NLP Transformer Models. | SP | 2022 | Yingqi Liu, Guangyu Shen, Guanhong Tao 0001, Shengwei An, Shiqing Ma, Xiangyu Zhang 0001 | |
10 | DBIA: Data-free Backdoor Injection Attack against Transformer Networks. | CoRR | 2021 | Peizhuo Lv, Hualong Ma, Jiachen Zhou, Ruigang Liang, Kai Chen 0012, Shengzhi Zhang, Yunfei Yang | abs/2111.11870 |
No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | Active poisoning: efficient backdoor attacks on transfer learning-based brain-computer interfaces. | Sci. China Inf. Sci. | 2023 | Xue Jiang, Lubin Meng, Siyang Li, Dongrui Wu | 66 |
2 | Triggerability of Backdoor Attacks in Multi-Source Transfer Learning-based Intrusion Detection. | BDCAT | 2022 | Nour Alhussien, Ahmed Aleroud, Reza Rahaeimehr, Alexander Schwarzmann | |
3 | Backdoor Attacks Against Transfer Learning With Pre-Trained Deep Learning Models. | IEEE Trans. Serv. Comput. | 2022 | Shuo Wang 0012, Surya Nepal, Carsten Rudolph, Marthie Grobler, Shangyu Chen, Tianle Chen | 15 |
4 | A Novel Backdoor Attack Adapted to Transfer Learning. | SmartWorld/UIC/ScalCom/DigitalTwin/PriComp/Meta | 2022 | Peihao Li, Jie Huang 0016, Shuaishuai Zhang, Chunyang Qi, Chuang Liang, Yang Peng | |
No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | Spy-Watermark: Robust Invisible Watermarking for Backdoor Attack. | CoRR | 2024 | Ruofei Wang, Renjie Wan, Zongyu Guo, Qing Guo, Rui Huang | abs/2401.02031 |
2 | WARDEN: Multi-Directional Backdoor Watermarks for Embedding-as-a-Service Copyright Protection. | CoRR | 2024 | Anudeex Shetty, Yue Teng, Ke He, Qiongkai Xu | abs/2403.01472 |
3 | Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark. | ACL | 2023 | Wenjun Peng, Jingwei Yi, Fangzhao Wu, Shangxi Wu, Bin Zhu, Lingjuan Lyu, Binxing Jiao, Tong Xu 0001, Guangzhong Sun, Xing Xie 0001 | |
4 | Did You Train on My Dataset? Towards Public Dataset Protection with Clean-Label Backdoor Watermarking. | CoRR | 2023 | Ruixiang Tang, Qizhang Feng, Ninghao Liu, Fan Yang 0023, Xia Hu 0001 | abs/2303.11470 |
5 | Watermarking Graph Neural Networks based on Backdoor Attacks. | EuroS&P | 2023 | Jing Xu, Stefanos Koffas, Oguzhan Ersoy, Stjepan Picek | |
6 | Measure and Countermeasure of the Capsulation Attack Against Backdoor-Based Deep Neural Network Watermarks. | ICASSP | 2023 | Fang-Qi Li, Shi-Lin Wang, Yun Zhu | |
7 | Watermarks for Generative Adversarial Network Based on Steganographic Invisible Backdoor. | ICME | 2023 | Yuwei Zeng, Jingxuan Tan, Zhengxin You, Zhenxing Qian, Xinpeng Zhang 0001 | |
8 | Black-Box Dataset Ownership Verification via Backdoor Watermarking. | IEEE Trans. Inf. Forensics Secur. | 2023 | Yiming Li 0004, Mingyan Zhu, Xue Yang 0003, Yong Jiang 0001, Tao Wei, Shu-Tao Xia | 18 |
9 | Deep fidelity in DNN watermarking: A study of backdoor watermarking for classification models. | Pattern Recognit. | 2023 | Guang Hua 0001, Andrew Beng Jin Teoh | 144 |
10 | Did You Train on My Dataset? Towards Public Dataset Protection with Clean-Label Backdoor Watermarking. | SIGKDD Explor. | 2023 | Ruixiang Tang, Qizhang Feng, Ninghao Liu, Fan Yang 0023, Xia Hu 0001 | 25 |
11 | Backdoor Watermarking Deep Learning Classification Models With Deep Fidelity. | CoRR | 2022 | Guang Hua 0001, Andrew Beng Jin Teoh | abs/2208.00563 |
12 | Black-box Ownership Verification for Dataset Protection via Backdoor Watermarking. | CoRR | 2022 | Yiming Li 0004, Mingyan Zhu, Xue Yang 0003, Yong Jiang 0001, Shu-Tao Xia | abs/2209.06015 |
13 | Solving the Capsulation Attack against Backdoor-based Deep Neural Network Watermarks by Reversing Triggers. | CoRR | 2022 | Fangqi Li, Shilin Wang, Yun Zhu | abs/2208.14127 |
14 | Watermarking Pre-trained Language Models with Backdooring. | CoRR | 2022 | Chenxi Gu, Chengsong Huang, Xiaoqing Zheng, Kai-Wei Chang, Cho-Jui Hsieh | abs/2210.07543 |
15 | TextBack: Watermarking Text Classifiers using Backdooring. | DSD | 2022 | Nandish Chattopadhyay, Rajan Kataria, Anupam Chattopadhyay | |
16 | Image Watermarking Backdoor Attacks in CNN-Based Classification Tasks. | ICPR Workshops | 2022 | Giovanbattista Abbate, Irene Amerini, Roberto Caldelli | |
17 | BlindNet backdoor: Attack on deep neural network using blind watermark. | Multim. Tools Appl. | 2022 | Hyun Kwon, Yongchul Kim | 81 |
18 | Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection. | NeurIPS | 2022 | Yiming Li 0004, Yang Bai, Yong Jiang 0001, Yong Yang 0001, Shu-Tao Xia, Bo Li | |
19 | Neural network laundering: Removing black-box backdoor watermarks from deep neural networks. | Comput. Secur. | 2021 | William Aiken, Hyoungshick Kim, Simon S. Woo, Jungwoo Ryoo | 106 |
20 | ROWBACK: RObust Watermarking for neural networks using BACKdoors. | ICMLA | 2021 | Nandish Chattopadhyay, Anupam Chattopadhyay | |
21 | On the Robustness of Backdoor-based Watermarking in Deep Neural Networks. | IH&MMSec | 2021 | Masoumeh Shafieinejad, Nils Lukas, Jiaqi Wang, Xinda Li 0001, Florian Kerschbaum | |
22 | Open-sourced Dataset Protection via Backdoor Watermarking. | CoRR | 2020 | Yiming Li 0004, Ziqi Zhang, Jiawang Bai, Baoyuan Wu, Yong Jiang 0001, Shu-Tao Xia | abs/2010.05821 |
23 | Removing Backdoor-Based Watermarks in Neural Networks with Limited Data. | ICPR | 2020 | Xuankai Liu, Fengting Li, Bihan Wen, Qi Li 0002 | |
24 | On the Robustness of the Backdoor-based Watermarking in Deep Neural Networks. | CoRR | 2019 | Masoumeh Shafieinejad, Jiaqi Wang, Nils Lukas, Florian Kerschbaum | abs/1906.07745 |
25 | Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring. | USENIX Security Symposium | 2018 | Yossi Adi, Carsten Baum, Moustapha Cissé, Benny Pinkas, Joseph Keshet | |
No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | Does Few-Shot Learning Suffer from Backdoor Attacks? | AAAI | 2024 | Xinwei Liu, Xiaojun Jia, Jindong Gu, Yuan Xun, Siyuan Liang, Xiaochun Cao | |
2 | ACQ: Few-shot Backdoor Defense via Activation Clipping and Quantizing. | ACM Multimedia | 2023 | Yulin Jin, Xiaoyu Zhang, Jian Lou 0001, Xiaofeng Chen 0001 | |
3 | Few-shot Backdoor Attacks via Neural Tangent Kernels. | ICLR | 2023 | Jonathan Hayase, Sewoong Oh | |
4 | Black-box Backdoor Defense via Zero-shot Image Purification. | NeurIPS | 2023 | Yucheng Shi, Mengnan Du, Xuansheng Wu, Zihan Guan 0001, Jin Sun, Ninghao Liu | |
5 | Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks. | NeurIPS | 2023 | Zhaohan Xi, Tianyu Du, Changjiang Li, Ren Pang, Shouling Ji, Jinghui Chen, Fenglong Ma, Ting Wang | |
6 | Few-shot Backdoor Defense Using Shapley Estimation. | CVPR | 2022 | Jiyang Guan, Zhuozhuo Tu, Ran He, Dacheng Tao | |
7 | Few-Shot Backdoor Attacks on Visual Object Tracking. | ICLR | 2022 | Yiming Li 0004, Haoxiang Zhong, Xingjun Ma, Yong Jiang 0001, Shu-Tao Xia | |
No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | Invisible Backdoor Attack against 3D Point Cloud Classifier in Graph Spectral Domain. | AAAI | 2024 | Linkun Fan, Fazhi He, Tongzhen Si, Wei Tang, Bing Li | |
2 | MirrorAttack: Backdoor Attack on 3D Point Cloud with a Distorting Mirror. | CoRR | 2024 | Yuhao Bian, Shengjing Tian, Xiuping Liu | abs/2403.05847 |
3 | Imperceptible and Robust Backdoor Attack in 3D Point Cloud. | IEEE Trans. Inf. Forensics Secur. | 2024 | Kuofeng Gao, Jiawang Bai, Baoyuan Wu, Mengxi Ya, Shu-Tao Xia | 19 |
4 | MBA: Backdoor Attacks Against 3D Mesh Classifier. | IEEE Trans. Inf. Forensics Secur. | 2024 | Linkun Fan, Fazhi He, Tongzhen Si, Rubin Fan, Chuanlong Ye, Bing Li 0010 | 19 |
5 | PointCRT: Detecting Backdoor in 3D Point Cloud via Corruption Robustness. | ACM Multimedia | 2023 | Shengshan Hu, Wei Liu, Minghui Li, Yechao Zhang, Xiaogeng Liu, Xianlong Wang, Leo Yu Zhang, Junhui Hou | |
6 | Backdoor Attack on 3D Grey Image Segmentation. | ICDM | 2023 | Honghui Xu, Zhipeng Cai 0001, Zuobin Xiong, Wei Li 0059 | |
7 | Be Careful with Rotation: A Uniform Backdoor Pattern for 3D Shape. | CoRR | 2022 | Linkun Fan, Fazhi He, Qing Guo, Wei Tang, Xiaolin Hong, Bing Li 0010 | abs/2211.16192 |
8 | Detecting Backdoor Attacks against Point Cloud Classifiers. | ICASSP | 2022 | Zhen Xiang, David J. Miller 0001, Siheng Chen, Xi Li 0015, George Kesidis | |
9 | A physically realizable backdoor attack on 3D point cloud deep learning: work-in-progress. | CODES+ISSS | 2021 | Chen Bian, Wei Jiang 0016, Jinyu Zhan, Ziwei Song, Xiangyu Wen, Hong Lei | |
10 | Poisoning MorphNet for Clean-Label Backdoor Attack to Point Clouds. | CoRR | 2021 | Guiyu Tian, Wenhao Jiang, Wei Liu 0005, Yadong Mu | abs/2105.04839 |
11 | Generative strategy based backdoor attacks to 3D point clouds: work-in-progress. | EMSOFT | 2021 | Xiangyu Wen, Wei Jiang 0016, Jinyu Zhan, Chen Bian, Ziwei Song | |
12 | A Backdoor Attack against 3D Point Cloud Classifiers. | ICCV | 2021 | Zhen Xiang, David J. Miller 0001, Siheng Chen, Xi Li 0015, George Kesidis | |
13 | PointBA: Towards Backdoor Attacks in 3D Point Cloud. | ICCV | 2021 | Xinke Li, Zhirui Chen, Yue Zhao, Zekun Tong, Yabang Zhao, Andrew Lim 0001, Joey Tianyi Zhou | |
No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | COMBAT: Alternated Training for Effective Clean-Label Backdoor Attacks. | AAAI | 2024 | Tran Huynh, Dang Nguyen, Tung Pham 0001, Anh Tran | |
2 | A clean-label graph backdoor attack method in node classification task. | CoRR | 2024 | Xiaogang Xing, Ming Xu, Yujing Bai, Dongdong Yang | abs/2401.00163 |
3 | Gradient-Based Clean Label Backdoor Attack to Graph Neural Networks. | ICISSP | 2024 | Ryo Meguro, Hiroya Kato, Shintaro Narisada, Seira Hidano, Kazuhide Fukushima, Takuo Suganuma, Masahiro Hiji | |
4 | PerCBA: Persistent Clean-label Backdoor Attacks on Semi-Supervised Graph Node Classification. | AISafety/SafeRL@IJCAI | 2023 | Xiao Yang, Gaolei Li, Chaofeng Zhang, Meng Han, Wu Yang | |
5 | Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning. | BMVC | 2023 | Kuofeng Gao, Jiawang Bai, Bin Chen 0011, Dongxian Wu, Shu-Tao Xia | |
6 | Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information. | CCS | 2023 | Yi Zeng 0005, Minzhou Pan, Hoang Anh Just, Lingjuan Lyu, Meikang Qiu, Ruoxi Jia 0001 | |
7 | One-to-Multiple Clean-Label Image Camouflage (OmClic) based Backdoor Attack on Deep Learning. | CoRR | 2023 | Guohong Wang, Hua Ma, Yansong Gao, Alsharif Abuadbba, Zhi Zhang 0001, Wei Kang, Said F. Al-Sarawi, Gongxuan Zhang, Derek Abbott | abs/2309.04036 |
8 | Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger. | CoRR | 2023 | Yiming Li 0004, Mingyan Zhu, Junfeng Guo, Tao Wei, Shu-Tao Xia, Zhan Qin | abs/2312.04584 |
9 | Large Language Models Are Better Adversaries: Exploring Generative Clean-Label Backdoor Attacks Against Text Classifiers. | EMNLP | 2023 | Wencong You, Zayd Hammoudeh, Daniel Lowd | |
10 | CCBA: Code Poisoning-Based Clean-Label Covert Backdoor Attack Against DNNs. | ICDF2C | 2023 | Xubo Yang, Linsen Li, Cunqing Hua, Changhao Yao | |
11 | Persistent Clean-Label Backdoor on Graph-Based Semi-supervised Cybercrime Detection. | ICDF2C | 2023 | Xiao Yang, Gaolei Li, Meng Han | |
12 | CSSBA: A Clean Label Sample-Specific Backdoor Attack. | ICIP | 2023 | Zihan Shen, Wei Hou, Yun Li | |
13 | Instance-Agnostic and Practical Clean Label Backdoor Attack Method for Deep Learning Based Face Recognition Models. | IEEE Access | 2023 | Tae-Hoon Kim, SeokHwan Choi, Yoon-Ho Choi | 11 |
14 | An Imperceptible Data Augmentation Based Blackbox Clean-Label Backdoor Attack on Deep Neural Networks. | IEEE Trans. Circuits Syst. I Regul. Pap. | 2023 | Chaohui Xu, Wenye Liu, Yue Zheng, Si Wang, Chip-Hong Chang | 70 |
15 | A Temporal Chrominance Trigger for Clean-Label Backdoor Attack Against Anti-Spoof Rebroadcast Detection. | IEEE Trans. Dependable Secur. Comput. | 2023 | Wei Guo 0012, Benedetta Tondi, Mauro Barni | 20 |
16 | Not All Samples Are Born Equal: Towards Effective Clean-Label Backdoor Attacks. | Pattern Recognit. | 2023 | Yinghua Gao, Yiming Li 0004, Linghui Zhu, Dongxian Wu, Yong Jiang 0001, Shu-Tao Xia | 139 |
17 | Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems. | RepL4NLP@ACL | 2023 | Ashim Gupta, Amrith Krishna | |
18 | BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label. | ACM Multimedia | 2022 | Shengshan Hu, Ziqi Zhou, Yechao Zhang, Leo Yu Zhang, Yifeng Zheng, Yuanyuan He 0002, Hai Jin 0001 | |
19 | Poster: Clean-label Backdoor Attack on Graph Neural Networks. | CCS | 2022 | Jing Xu, Stjepan Picek | |
20 | Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers. | CoRR | 2022 | Nan Luo, Yuanzhang Li, Yajie Wang, Shangbo Wu, Yu-An Tan 0001, Quanxin Zhang | abs/2206.04881 |
21 | Kallima: A Clean-Label Framework for Textual Backdoor Attacks. | ESORICS | 2022 | Xiaoyi Chen, Yinpeng Dong, Zeyu Sun, Shengfang Zhai, Qingni Shen, Zhonghai Wu | |
22 | Triggerless Backdoor Attack for NLP Tasks with Clean Labels. | NAACL-HLT | 2022 | Leilei Gan, Jiwei Li 0001, Tianwei Zhang 0004, Xiaoya Li, Yuxian Meng, Fei Wu 0001, Yi Yang 0001, Shangwei Guo, Chun Fan | |
23 | Clean-label Backdoor Attack on Machine Learning-based Malware Detection Models and Countermeasures. | TrustCom | 2022 | Wanjia Zheng, Kazumasa Omote | |
24 | Clean-label Backdoor Attack against Deep Hashing based Retrieval. | CoRR | 2021 | Kuofeng Gao, Jiawang Bai, Bin Chen 0011, Dongxian Wu, Shu-Tao Xia | abs/2109.08868 |
25 | Invisible Poison: A Blackbox Clean Label Backdoor Attack to Deep Neural Networks. | INFOCOM | 2021 | Rui Ning, Jiang Li 0001, Chunsheng Xin, Hongyi Wu | |
26 | A Textual Clean-Label Backdoor Attack Strategy against Spam Detection. | SIN | 2021 | Fahri Anil Yerlikaya, Serif Bahtiyar | |
27 | Clean-Label Backdoor Attacks on Video Recognition Models. | CVPR | 2020 | Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey 0001, Jingjing Chen, Yu-Gang Jiang |
No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | SEER: Backdoor Detection for Vision-Language Models through Searching Target Text and Image Trigger Jointly. | AAAI | 2024 | Liuwan Zhu, Rui Ning, Jiang Li, Chunsheng Xin, Hongyi Wu | |
2 | Acquiring Clean Language Models from Backdoor Poisoned Datasets by Downscaling Frequency Space. | CoRR | 2024 | Zongru Wu, Zhuosheng Zhang 0001, Pengzhou Cheng, Gongshen Liu | abs/2402.12026 |
3 | BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models. | CoRR | 2024 | Zhen Xiang, Fengqing Jiang, Zidi Xiong, Bhaskar Ramasubramanian, Radha Poovendran, Bo Li | abs/2401.12242 |
4 | BadEdit: Backdooring large language models by model editing. | CoRR | 2024 | Yanzhou Li, Tianlin Li, Kangjie Chen, Jian Zhang, Shangqing Liu, Wenhan Wang, Tianwei Zhang 0004, Yang Liu | abs/2403.13355 |
5 | Imperio: Language-Guided Backdoor Attacks for Arbitrary Model Control. | CoRR | 2024 | Ka Ho Chow, Wenqi Wei, Lei Yu 0002 | abs/2401.01085 |
6 | OrderBkd: Textual backdoor attack through repositioning. | CoRR | 2024 | Irina Alekseevskaia, Konstantin Arkhipenko | abs/2402.07689 |
7 | Syntactic Ghost: An Imperceptible General-purpose Backdoor Attacks on Pre-trained Language Models. | CoRR | 2024 | Pengzhou Cheng, Wei Du, Zongru Wu, Fengwei Zhang, Libo Chen, Gongshen Liu | abs/2402.18945 |
8 | Test-Time Backdoor Attacks on Multimodal Large Language Models. | CoRR | 2024 | Dong Lu, Tianyu Pang, Chao Du, Qian Liu, Xianjun Yang, Min Lin | abs/2402.08577 |
9 | Universal Vulnerabilities in Large Language Models: In-context Learning Backdoor Attacks. | CoRR | 2024 | Shuai Zhao, Meihuizi Jia, Luu Anh Tuan, Jinming Wen | abs/2401.05949 |
10 | VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models. | CoRR | 2024 | Jiawei Liang, Siyuan Liang, Man Luo, Aishan Liu, Dongchen Han, Ee-Chien Chang, Xiaochun Cao | abs/2402.13851 |
11 | Defending against Backdoor Attacks in Natural Language Generation. | AAAI | 2023 | Xiaofei Sun, Xiaoya Li, Yuxian Meng, Xiang Ao 0001, Lingjuan Lyu, Jiwei Li 0001, Tianwei Zhang 0004 | |
12 | BITE: Textual Backdoor Attacks with Iterative Trigger Injection. | ACL | 2023 | Jun Yan 0012, Vansh Gupta, Xiang Ren 0001 | |
13 | Defending against Insertion-based Textual Backdoor Attacks via Attribution. | ACL | 2023 | Jiazhao Li, Zhuofeng Wu 0001, Wei Ping, Chaowei Xiao, V. G. Vinod Vydiswaran | |
14 | Maximum Entropy Loss, the Silver Bullet Targeting Backdoor Attacks in Pre-trained Language Models. | ACL | 2023 | Zhengxiao Liu, Bowen Shen, Zheng Lin, Fali Wang, Weiping Wang | |
15 | NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models. | ACL | 2023 | Kai Mei, Zheng Li 0023, Zhenting Wang, Yang Zhang 0016, Shiqing Ma | |
16 | Analyzing And Editing Inner Mechanisms Of Backdoored Language Models. | CoRR | 2023 | Max Lamparth, Anka Reuel | abs/2302.12461 |
17 | BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing. | CoRR | 2023 | Jiali Wei, Ming Fan 0002, Wenjing Jiao, Wuxia Jin, Ting Liu 0002 | abs/2301.10412 |
18 | Backdoor Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment. | CoRR | 2023 | Haoran Wang, Kai Shu | abs/2311.09433 |
19 | Backdoor Adjustment of Confounding by Provenance for Robust Text Classification of Multi-institutional Clinical Notes. | CoRR | 2023 | Xiruo Ding, Zhecheng Sheng, Meliha Yetisgen, Serguei Pakhomov, Trevor Cohen | abs/2310.02451 |
20 | Backdoor Attacks for In-Context Learning with Language Models. | CoRR | 2023 | Nikhil Kandpal, Matthew Jagielski, Florian Tramèr, Nicholas Carlini | abs/2307.14692 |
21 | Backdoor Attacks with Input-unique Triggers in NLP. | CoRR | 2023 | Xukun Zhou, Jiwei Li 0001, Tianwei Zhang 0004, Lingjuan Lyu, Muqiao Yang, Jun He | abs/2303.14325 |
22 | Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions. | CoRR | 2023 | | abs/2302.06801 |
23 | BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT. | CoRR | 2023 | Jiawen Shi, Yixin Liu, Pan Zhou, Lichao Sun 0001 | abs/2304.12298 |
24 | ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger. | CoRR | 2023 | Jiazhao Li, Yijin Yang, Zhuofeng Wu 0001, V. G. Vinod Vydiswaran, Chaowei Xiao | abs/2304.14475 |
25 | Composite Backdoor Attacks Against Large Language Models. | CoRR | 2023 | Hai Huang, Zhengyu Zhao 0001, Michael Backes 0001, Yun Shen, Yang Zhang 0016 | abs/2310.07676 |
26 | Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models. | CoRR | 2023 | Jiashu Xu, Mingyu Derek Ma, Fei Wang 0060, Chaowei Xiao, Muhao Chen | abs/2305.14710 |
27 | PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models. | CoRR | 2023 | Hongwei Yao, Jian Lou 0001, Zhan Qin | abs/2310.12439 |
28 | RobustNLP: A Technique to Defend NLP Models Against Backdoor Attacks. | CoRR | 2023 | | abs/2302.09420 |
29 | Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections. | CoRR | 2023 | Yuanpu Cao, Bochuan Cao, Jinghui Chen | abs/2312.00027 |
30 | TARGET: Template-Transferable Backdoor Attack Against Prompt-based NLP Models via GPT4. | CoRR | 2023 | Zihao Tan, Qingliang Chen, Yongjian Huang, Chen Liang | abs/2311.17429 |
31 | Test-time Backdoor Mitigation for Black-Box Large Language Models with Defensive Demonstrations. | CoRR | 2023 | Wenjie Mo, Jiashu Xu, Qin Liu, Jiongxiao Wang, Jun Yan, Chaowei Xiao, Muhao Chen | abs/2311.09763 |
32 | TextGuard: Provable Defense against Backdoor Attacks on Text Classification. | CoRR | 2023 | Hengzhi Pei, Jinyuan Jia, Wenbo Guo 0002, Bo Li 0026, Dawn Song | abs/2311.11225 |
33 | UOR: Universal Backdoor Attacks on Pre-trained Language Models. | CoRR | 2023 | Wei Du, Peixuan Li, Boqun Li, Haodong Zhao, Gongshen Liu | abs/2305.09574 |
34 | Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models. | EMNLP | 2023 | Shuai Zhao, Jinming Wen, Anh Tuan Luu, Junbo Zhao, Jie Fu | |
35 | A Textual Backdoor Defense Method Based on Deep Feature Classification. | Entropy | 2023 | Kun Shao, Jun-an Yang, Pengjiang Hu, Xiaoshuai Li | 25 |
36 | NCL: Textual Backdoor Defense Using Noise-Augmented Contrastive Learning. | ICASSP | 2023 | Shengfang Zhai, Qingni Shen, Xiaoyi Chen, Weilong Wang, Cong Li, Yuejian Fang, Zhonghai Wu | |
37 | MIC: An Effective Defense Against Word-Level Textual Backdoor Attacks. | ICONIP | 2023 | Shufan Yang, Qianmu Li, Zhichao Lian, Pengchuan Wang, Jun Hou 0002 | |
38 | Punctuation Matters! Stealthy Backdoor Attack for Language Models. | NLPCC | 2023 | Xuan Sheng, Zhicheng Li, Zhaoyang Han, Xiangmao Chang, Piji Li | |
39 | Robust Contrastive Language-Image Pretraining against Data Poisoning and Backdoor Attacks. | NeurIPS | 2023 | Wenhan Yang, Jingdong Gao, Baharan Mirzasoleiman | |
40 | Setting the Trap: Capturing and Defeating Backdoors in Pretrained Language Models through Honeypots. | NeurIPS | 2023 | Ruixiang (Ryan) Tang, Jiayi Yuan, Yiming Li, Zirui Liu, Rui Chen, Xia Hu 0001 | |
41 | Training-free Lexical Backdoor Attacks on Language Models. | WWW | 2023 | Yujin Huang, Terry Yue Zhuo, Qiongkai Xu, Han Hu, Xingliang Yuan, Chunyang Chen | |
42 | Where to Attack: A Dynamic Locator Model for Backdoor Attack in Text Classifications. | COLING | 2022 | Heng-Yang Lu, Chenyou Fan, Jun Yang 0038, Cong Hu, Wei Fang 0001, Xiao-Jun Wu 0001 | |
43 | Backdoor Attack against NLP models with Robustness-Aware Perturbation defense. | CoRR | 2022 | Shaik Mohammed Maqsood, Viveros Manuela Ceron, Addluri GowthamKrishna | abs/2204.05758 |
44 | Rethink Stealthy Backdoor Attacks in Natural Language Processing. | CoRR | 2022 | Lingfeng Shen, Haiyun Jiang, Lemao Liu, Shuming Shi 0001 | abs/2201.02993 |
45 | Textual Backdoor Attacks with Iterative Trigger Injection. | CoRR | 2022 | Jun Yan 0012, Vansh Gupta, Xiang Ren 0001 | abs/2205.12700 |
46 | The triggers that open the NLP model backdoors are hidden in the adversarial samples. | Comput. Secur. | 2022 | Kun Shao, Yu Zhang, Junan Yang, Xiaoshuai Li, Hui Liu | 118 |
47 | Expose Backdoors on the Way: A Feature-Based Efficient Defense against Textual Backdoor Attacks. | EMNLP | 2022 | Sishuo Chen, Wenkai Yang, Zhiyuan Zhang 0001, Xiaohan Bi, Xu Sun 0001 | |
48 | Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models. | EMNLP | 2022 | Zhiyuan Zhang 0001, Lingjuan Lyu, Xingjun Ma, Chenguang Wang 0001, Xu Sun 0001 | |
49 | Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks. | EMNLP | 2022 | Yangyi Chen, Fanchao Qi, Hongcheng Gao, Zhiyuan Liu 0001, Maosong Sun 0001 | |
50 | WeDef: Weakly Supervised Backdoor Defense for Text Classification. | EMNLP | 2022 | Lesheng Jin, Zihan Wang 0001, Jingbo Shang | |
51 | BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models. | ICLR | 2022 | Kangjie Chen, Yuxian Meng, Xiaofei Sun, Shangwei Guo, Tianwei Zhang 0004, Jiwei Li 0001, Chun Fan | |
52 | Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense. | ICML | 2022 | Guangyu Shen, Yingqi Liu, Guanhong Tao 0001, Qiuling Xu, Zhuo Zhang 0002, Shengwei An, Shiqing Ma, Xiangyu Zhang 0001 | |
53 | I Know Your Triggers: Defending Against Textual Backdoor Attacks with Benign Backdoor Augmentation. | MILCOM | 2022 | Yue Gao 0011, Jack W. Stokes, Manoj Ajith Prasad, Andrew T. Marshall, Kassem Fawaz, Emre Kiciman | |
54 | Moderate-fitting as a Natural Backdoor Defender for Pre-trained Language Models. | NeurIPS | 2022 | Biru Zhu, Yujia Qin, Ganqu Cui, Yangyi Chen, Weilin Zhao, Chong Fu, Yangdong Deng, Zhiyuan Liu 0001, Jingang Wang, Wei Wu, Maosong Sun 0001, Ming Gu 0001 | |
55 | Hidden Trigger Backdoor Attack on NLP Models via Linguistic Style Manipulation. | USENIX Security Symposium | 2022 | Xudong Pan, Mi Zhang 0001, Beina Sheng, Jiaming Zhu, Min Yang 0002 | |
56 | Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger. | ACL/IJCNLP | 2021 | Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang, Zhiyuan Liu 0001, Yasheng Wang, Maosong Sun 0001 | |
57 | Rethinking Stealthiness of Backdoor Attack against NLP Models. | ACL/IJCNLP | 2021 | Wenkai Yang, Yankai Lin, Peng Li 0030, Jie Zhou 0016, Xu Sun 0001 | |
58 | Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution. | ACL/IJCNLP | 2021 | Fanchao Qi, Yuan Yao, Sophia Xu, Zhiyuan Liu 0001, Maosong Sun 0001 | |
59 | BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements. | ACSAC | 2021 | Xiaoyi Chen, Ahmed Salem 0001, Dingfan Chen, Michael Backes 0001, Shiqing Ma, Qingni Shen, Zhonghai Wu, Yang Zhang 0016 | |
60 | Hidden Backdoors in Human-Centric Language Models. | CCS | 2021 | Shaofeng Li, Hui Liu, Tian Dong, Benjamin Zi Hao Zhao, Minhui Xue, Haojin Zhu, Jialiang Lu | |
61 | BDDR: An Effective Defense Against Textual Backdoor Attacks. | Comput. Secur. | 2021 | Kun Shao, Junan Yang, Yang Ai, Hui Liu, Yu Zhang | 110 |
62 | BFClass: A Backdoor-free Text Classification Framework. | EMNLP | 2021 | Zichao Li 0002, Dheeraj Mekala, Chengyu Dong, Jingbo Shang | |
63 | ONION: A Simple and Effective Defense Against Textual Backdoor Attacks. | EMNLP | 2021 | Fanchao Qi, Yangyi Chen, Mukai Li, Yuan Yao, Zhiyuan Liu 0001, Maosong Sun 0001 | |
64 | RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models. | EMNLP | 2021 | Wenkai Yang, Yankai Lin, Peng Li 0030, Jie Zhou 0016, Xu Sun 0001 | |
65 | Mitigating backdoor attacks in LSTM-based text classification systems by Backdoor Keyword Identification. | Neurocomputing | 2021 | Chuanshuai Chen, Jiazhu Dai | 452 |
66 | BadNL: Backdoor Attacks Against NLP Models. | CoRR | 2020 | Xiaoyi Chen, Ahmed Salem 0001, Michael Backes 0001, Shiqing Ma, Yang Zhang 0016 | abs/2006.01043 |
67 | A Backdoor Attack Against LSTM-Based Text Classification Systems. | IEEE Access | 2019 | Jiazhu Dai, Chuanshuai Chen, Yufeng Li 0002 | 7 |
No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | The Silent Manipulator: A Practical and Inaudible Backdoor Attack against Speech Recognition Systems. | ACM Multimedia | 2023 | Zhicong Zheng, Xinfeng Li, Chen Yan 0001, Xiaoyu Ji 0001, Wenyuan Xu 0001 | |
2 | Joint Energy-Based Model for Robust Speech Classification System Against Dirty-Label Backdoor Poisoning Attacks. | ASRU | 2023 | Martin Sustek, Sonal Joshi, Henry Li, Thomas Thebaud, Jesús Villalba 0001, Sanjeev Khudanpur, Najim Dehak | |
3 | BadSQA: Stealthy Backdoor Attacks Using Presence Events as Triggers in Non-Intrusive Speech Quality Assessment. | CoRR | 2023 | Ying Ren, Kailai Shen, Zhe Ye, Diqun Yan | abs/2309.01480 |
4 | Fake the Real: Backdoor Attack on Deep Speech Classification via Voice Conversion. | CoRR | 2023 | Zhe Ye, Terui Mao, Li Dong 0006, Diqun Yan | abs/2306.15875 |
5 | FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge. | CoRR | 2023 | Jiahe Lan, Jie Wang, Baochen Yan, Zheng Yan 0002, Elisa Bertino | abs/2312.09665 |
6 | Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound. | CoRR | 2023 | Hanbo Cai, Pengcheng Zhang, Hai Dong, Yan Xiao 0002, Stefanos Koffas, Yiming Li | abs/2307.08208 |
7 | Going in Style: Audio Backdoors Through Stylistic Transformations. | ICASSP | 2023 | Stefanos Koffas, Luca Pajola, Stjepan Picek, Mauro Conti | |
8 | Data Poisoning and Backdoor Attacks on Audio Intelligence Systems. | IEEE Commun. Mag. | 2023 | Yunjie Ge, Qian Wang, Jiayuan Yu, Chao Shen 0001, Qi Li 0002 | 61 |
9 | Opportunistic Backdoor Attacks: Exploring Human-imperceptible Vulnerabilities on Speech Recognition Systems. | ACM Multimedia | 2022 | Qiang Liu 0004, Tongqing Zhou, Zhiping Cai, Yonghao Tang | |
10 | VSVC: Backdoor attack against Keyword Spotting based on Voiceprint Selection and Voice Conversion. | CoRR | 2022 | Hanbo Cai, Pengcheng Zhang, Hai Dong, Yan Xiao 0002, Shunhui Ji | abs/2212.10103 |
11 | Backdoor Attacks against Deep Neural Networks by Personalized Audio Steganography. | ICPR | 2022 | Peng Liu 0044, Shuyi Zhang, Chuanjian Yao, Wenzhe Ye, Xianxian Li | |
12 | Backdoor Defence for Voice Print Recognition Model Based on Speech Enhancement and Weight Pruning. | IEEE Access | 2022 | Jiawei Zhu, Lin Chen, Dongwei Xu, Wenhong Zhao | 10 |
13 | Natural Backdoor Attacks on Speech Recognition Models. | ML4CS | 2022 | Jinwen Xin, Xixiang Lyu, Jing Ma | |
14 | Audio-domain position-independent backdoor attack via unnoticeable triggers. | MobiCom | 2022 | Cong Shi 0004, Tianfang Zhang, Zhuohang Li, Huy Phan, Tianming Zhao 0001, Yan Wang 0003, Jian Liu 0001, Bo Yuan 0001, Yingying Chen 0001 | |
15 | Inaudible Manipulation of Voice-Enabled Devices Through BackDoor Using Robust Adversarial Audio Attacks: Invited Paper. | WiseML@WiSec | 2021 | Morriel Kasher, Michael Zhao, Aryeh Greenberg, Devin Gulati, Silvija Kokalj-Filipovic, Predrag Spasojevic | |
16 | Detecting acoustic backdoor transmission of inaudible messages using deep learning. | WiseML@WiSec | 2020 | Silvija Kokalj-Filipovic, Morriel Kasher, Michael Zhao, Predrag Spasojevic | |
17 | Adversarial Audio: A New Information Hiding Method and Backdoor for DNN-based Speech Recognition Models. | CoRR | 2019 | Yehao Kong, Jiliang Zhang 0002 | abs/1904.03829 |

## Time Series

No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | End-to-End Anti-Backdoor Learning on Images and Time Series. | CoRR | 2024 | Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, Yige Li, James Bailey 0001 | abs/2401.03215 |
2 | Backdoor Attacks on Time Series: A Generative Approach. | CoRR | 2022 | Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey 0001 | abs/2211.07915 |
3 | Towards Backdoor Attack on Deep Learning based Time Series Classification. | ICDE | 2022 | Daizong Ding, Mi Zhang 0001, Yuanmin Huang 0001, Xudong Pan, Fuli Feng, Erling Jiang, Min Yang 0002 | |

## Certified

No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | PECAN: A Deterministic Certified Defense Against Backdoor Attacks. | CoRR | 2023 | Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni | abs/2301.11824 |
2 | CBD: A Certified Backdoor Detector Based on Local Dominant Probability. | NeurIPS | 2023 | Zhen Xiang, Zidi Xiong, Bo Li | |
3 | Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks. | AAAI | 2022 | Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong | |
4 | CRAB: Certified Patch Robustness Against Poisoning-Based Backdoor Attacks. | ICIP | 2022 | Huxiao Ji, Jie Li 0002, Chentao Wu | |
5 | Backdoor Attacks on Network Certification via Data Poisoning. | CoRR | 2021 | Tobias Lorenz 0002, Marta Kwiatkowska, Mario Fritz | abs/2108.11299 |

## Robustness

No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | Toward a Critical Evaluation of Robustness for Deep Learning Backdoor Countermeasures. | IEEE Trans. Inf. Forensics Secur. | 2024 | Huming Qiu, Hua Ma, Zhi Zhang 0001, Alsharif Abuadbba, Wei Kang 0004, Anmin Fu, Yansong Gao | 19 |
2 | Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consistency. | CVPR | 2023 | Xiaogeng Liu, Minghui Li, Haoyu Wang, Shengshan Hu, Dengpan Ye, Hai Jin 0001, Libing Wu, Chaowei Xiao | |
3 | Hyperparameter Search Is All You Need For Training-Agnostic Backdoor Robustness. | CoRR | 2023 | Eugene Bagdasaryan, Vitaly Shmatikov | abs/2302.04977 |
4 | RAB: Provable Robustness Against Backdoor Attacks. | SP | 2023 | Maurice Weber, Xiaojun Xu, Bojan Karlas, Ce Zhang 0001, Bo Li 0026 | |
5 | Towards A Critical Evaluation of Robustness for Deep Learning Backdoor Countermeasures. | CoRR | 2022 | Huming Qiu, Hua Ma, Zhi Zhang 0001, Alsharif Abuadbba, Wei Kang, Anmin Fu, Yansong Gao | abs/2204.06273 |
6 | On Certifying Robustness against Backdoor Attacks via Randomized Smoothing. | CoRR | 2020 | Binghui Wang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong | abs/2002.11750 |
7 | On the Trade-off between Adversarial and Backdoor Robustness. | NeurIPS | 2020 | Cheng-Hsin Weng, Yan-Ting Lee, Shan-Hung Wu | |

## Split Learning

No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | Chronic Poisoning: Backdoor Attack against Split Learning. | AAAI | 2024 | Fangchao Yu, Bo Zeng, Kai Zhao, Zhi Pang, Lina Wang | |
2 | How to backdoor split learning. | Neural Networks | 2023 | Fangchao Yu, Lina Wang, Bo Zeng, Kai Zhao, Zhi Pang, Tian Wu | 168 |
3 | On Feasibility of Server-side Backdoor Attacks on Split Learning. | SP | 2023 | Behrad Tajalli, Oguzhan Ersoy, Stjepan Picek | |
4 | VILLAIN: Backdoor Attacks Against Vertical Split Learning. | USENIX Security Symposium | 2023 | Yijie Bai, Yanjiao Chen, Hanlei Zhang, Wenyuan Xu 0001, Haiqin Weng, Dou Goodman | |

## Physical Backdoor

No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | Moiré Backdoor Attack (MBA): A Novel Trigger for Pedestrian Detectors in the Physical World. | ACM Multimedia | 2023 | Hui Wei 0004, Hanxun Yu, Kewei Zhang, Zhixiang Wang, Jianke Zhu, Zheng Wang 0007 | |
2 | Physical Invisible Backdoor Based on Camera Imaging. | ACM Multimedia | 2023 | Yusheng Guo, Nan Zhong, Zhenxing Qian, Xinpeng Zhang 0001 | |
3 | Physical Backdoor Trigger Activation of Autonomous Vehicle Using Reachability Analysis. | CDC | 2023 | Wenqing Li, Yue Wang 0055, Muhammad Shafique 0001, Saif Eddin Jabari | |
4 | Synthesizing Physical Backdoor Datasets: An Automated Framework Leveraging Deep Generative Models. | CoRR | 2023 | Sze Jue Yang, Chinh D. La, Quang H. Nguyen, Eugene Bagdasaryan, Kok-Seng Wong, Anh Tuan Tran, Chee Seng Chan, Khoa D. Doan | abs/2312.03419 |
5 | Backdoor Learning on Siamese Networks Using Physical Triggers: FaceNet as a Case Study. | ICDF2C | 2023 | Zeshan Pang, Yuyuan Sun, Shasha Guo, Yuliang Lu | |
6 | Kaleidoscope: Physical Backdoor Attacks Against Deep Neural Networks With RGB Filters. | IEEE Trans. Dependable Secur. Comput. | 2023 | Xueluan Gong, Ziyao Wang, Yanjiao Chen, Meng Xue, Qian Wang 0002, Chao Shen 0001 | 20 |
7 | Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving. | ACM Multimedia | 2022 | Xingshuo Han, Guowen Xu, Yuan Zhou 0005, Xuehuan Yang, Jiwei Li 0001, Tianwei Zhang 0004 | |
8 | Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World. | CoRR | 2022 | Hua Ma, Yinshan Li, Yansong Gao, Alsharif Abuadbba, Zhi Zhang 0001, Anmin Fu, Hyoungshick Kim, Said F. Al-Sarawi, Surya Nepal, Derek Abbott | abs/2201.08619 |
9 | PTB: Robust physical backdoor attacks against deep neural networks in real world. | Comput. Secur. | 2022 | Mingfu Xue, Can He, Yinghao Wu, Shichang Sun, Yushu Zhang, Jian Wang 0038, Weiqiang Liu 0001 | 118 |
10 | Finding Naturally Occurring Physical Backdoors in Image Datasets. | NeurIPS | 2022 | Emily Wenger, Roma Bhattacharjee, Arjun Nitin Bhagoji, Josephine Passananti, Emilio Andere, Heather Zheng, Ben Y. Zhao | |
11 | Backdoor Attacks Against Deep Learning Systems in the Physical World. | CVPR | 2021 | Emily Wenger, Josephine Passananti, Arjun Nitin Bhagoji, Yuanshun Yao, Haitao Zheng 0001, Ben Y. Zhao | |
12 | Backdoor Attack in the Physical World. | CoRR | 2021 | Yiming Li 0004, Tongqing Zhai, Yong Jiang 0001, Zhifeng Li 0001, Shu-Tao Xia | abs/2104.02361 |
13 | Robust Backdoor Attacks against Deep Neural Networks in Real Physical World. | TrustCom | 2021 | Mingfu Xue, Can He, Shichang Sun, Jian Wang 0038, Weiqiang Liu 0001 | |
14 | Backdoor Attacks on Facial Recognition in the Physical World. | CoRR | 2020 | Emily Wenger, Josephine Passananti, Yuanshun Yao, Haitao Zheng 0001, Ben Y. Zhao | abs/2006.14580 |

## Image-Video

No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | Temporal-Distributed Backdoor Attack against Video Based Action Recognition. | AAAI | 2024 | Xi Li, Songhe Wang, Ruiquan Huang, Mahanth Gowda, George Kesidis | |
2 | SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection. | CoRR | 2024 | Qiannan Wang, Changchun Yin, Liming Fang 0001, Lu Zhou 0002, Zhe Liu 0001, Run Wang, Chenhao Lin | abs/2401.00137 |
3 | Backdoor Attack against Object Detection with Clean Annotation. | CoRR | 2023 | Yize Cheng, Wenbin Hu, Minhao Cheng | abs/2307.10487 |
4 | Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition. | CoRR | 2023 | Hasan Abed Al Kader Hammoud, Shuming Liu, Mohammad Alkhrashi, Fahad Albalawi, Bernard Ghanem | abs/2301.00986 |
5 | Robust Backdoor Attacks on Object Detection in Real World. | CoRR | 2023 | Yaguan Qian, Boyuan Ji, Shuke He, Shenhui Huang, Xiang Ling, Bin Wang, Wei Wang | abs/2309.08953 |
6 | Untargeted Backdoor Attack Against Object Detection. | ICASSP | 2023 | Chengxiao Luo, Yiming Li 0004, Yong Jiang 0001, Shu-Tao Xia | |
7 | TransCAB: Transferable Clean-Annotation Backdoor to Object Detection with Natural Trigger in Real-World. | SRDS | 2023 | Hua Ma, Yinshan Li, Yansong Gao, Zhi Zhang, Alsharif Abuadbba, Anmin Fu, Said F. Al-Sarawi, Surya Nepal, Derek Abbott | |
8 | MACAB: Model-Agnostic Clean-Annotation Backdoor to Object Detection with Natural Trigger in Real-World. | CoRR | 2022 | Hua Ma, Yinshan Li, Yansong Gao, Zhi Zhang 0001, Alsharif Abuadbba, Anmin Fu, Said F. Al-Sarawi, Surya Nepal, Derek Abbott | abs/2209.02339 |
9 | BadDet: Backdoor Attacks on Object Detection. | ECCV Workshops | 2022 | Shih-Han Chan, Yinpeng Dong, Jun Zhu 0001, Xiaolu Zhang, Jun Zhou 0011 | |
10 | Towards Backdoor Attacks against LiDAR Object Detection in Autonomous Driving. | SenSys | 2022 | Yan Zhang, Yi Zhu, Zihao Liu, Chenglin Miao, Foad Hajiaghajani, Lu Su, Chunming Qiao | |
11 | Luminance-based video backdoor attack against anti-spoofing rebroadcast detection. | MMSP | 2019 | Abhir Bhalerao, Kassem Kallas, Benedetta Tondi, Mauro Barni | |

## Segmentation

No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | Influencer Backdoor Attack on Semantic Segmentation. | CoRR | 2023 | Haoheng Lan, Jindong Gu, Philip H. S. Torr, Hengshuang Zhao | abs/2303.12054 |
2 | Object-free backdoor attack and defense on semantic segmentation. | Comput. Secur. | 2023 | Jiaoze Mao, Yaguan Qian, Jianchang Huang, Zejie Lian, Renhui Tao, Bin Wang 0062, Wei Wang 0012, Tengteng Yao | 132 |
3 | Robust Feature-Guided Generative Adversarial Network for Aerial Image Semantic Segmentation against Backdoor Attacks. | Remote. Sens. | 2023 | Zhen Wang 0020, Buhong Wang, Chuanlei Zhang, Yaohui Liu 0001, Jianxin Guo | 15 |
4 | Hidden Backdoor Attack against Semantic Segmentation Models. | CoRR | 2021 | Yiming Li 0004, Yanjie Li, Yalei Lv, Yong Jiang 0001, Shu-Tao Xia | abs/2103.04038 |
5 | Segmentation Based Backdoor Attack Detection. | ICMLC | 2020 | Natasha Kees, Yaxuan Wang, Yiling Jiang, Fang Lue, Patrick P. K. Chan | |

## Perturbation

No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | Detection of backdoor attacks using targeted universal adversarial perturbations for deep neural networks. | J. Syst. Softw. | 2024 | Yubin Qu, Song Huang, Xiang Chen 0005, Xingya Wang, Yongming Yao | 207 |
2 | Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks. | CoRR | 2023 | Xiaoyun Xu, Oguzhan Ersoy, Stjepan Picek | abs/2302.00747 |
3 | Detecting backdoor in deep neural networks via intentional adversarial perturbations. | Inf. Sci. | 2023 | Mingfu Xue, Yinghao Wu, Zhiyu Wu, Yushu Zhang, Jian Wang 0038, Weiqiang Liu 0001 | 634 |
4 | DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints. | CVPR | 2022 | Zhendong Zhao, Xiaojun Chen 0004, Yuexin Xuan, Ye Dong, Dakui Wang, Kaitai Liang | |
5 | Adaptive Perturbation Generation for Multiple Backdoors Detection. | CoRR | 2022 | Yuhang Wang, Huafeng Shi, Rui Min, Ruijia Wu, Siyuan Liang, Yichao Wu, Ding Liang, Aishan Liu | abs/2209.05244 |
6 | Dispersed Pixel Perturbation-Based Imperceptible Backdoor Trigger for Image Classifier Models. | IEEE Trans. Inf. Forensics Secur. | 2022 | Yulong Wang, Minghui Zhao, Shenghong Li 0002, Xin Yuan 0004, Wei Ni 0001 | 17 |
7 | DeHiB: Deep Hidden Backdoor Attack on Semi-supervised Learning via Adversarial Perturbation. | AAAI | 2021 | Zhicong Yan, Gaolei Li, Yuan Tian 0017, Jun Wu 0001, Shenghong Li 0001, Mingzhe Chen, H. Vincent Poor | |
8 | TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation. | CoRR | 2021 | Todd Huster, Emmanuel Ekwedike | abs/2103.10274 |
9 | Can Adversarial Weight Perturbations Inject Neural Backdoors. | CIKM | 2020 | Siddhant Garg, Adarsh Kumar 0001, Vibhor Goel, Yingyu Liang | |
10 | Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation. | CODASPY | 2020 | Haoti Zhong, Cong Liao, Anna Cinzia Squicciarini, Sencun Zhu, David J. Miller 0001 | |
11 | Can Adversarial Weight Perturbations Inject Neural Backdoors? | CoRR | 2020 | Siddhant Garg, Adarsh Kumar 0001, Vibhor Goel, Yingyu Liang | abs/2008.01761 |
12 | Revealing Backdoors, Post-Training, in DNN Classifiers via Novel Inference on Optimized Perturbations Inducing Group Misclassification. | ICASSP | 2020 | Zhen Xiang, David J. Miller 0001, George Kesidis | |
13 | Backdooring Convolutional Neural Networks via Targeted Weight Perturbations. | IJCB | 2020 | Jacob Dumford, Walter J. Scheirer | |

## Imperceptible

No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | Imperceptible and multi-channel backdoor attack. | Appl. Intell. | 2024 | Mingfu Xue, Shifeng Ni, Yinghao Wu, Yushu Zhang, Weiqiang Liu 0010 | 54 |
2 | Impart: An Imperceptible and Effective Label-Specific Backdoor Attack. | CoRR | 2024 | Jingke Zhao, Zan Wang, Yongwei Wang, Lanjun Wang | abs/2403.13017 |
3 | Invisible Backdoor Attack Through Singular Value Decomposition. | CoRR | 2024 | Wenmin Chen, Xiaowei Xu | abs/2403.13018 |
4 | BadCM: Invisible Backdoor Attack Against Cross-Modal Learning. | IEEE Trans. Image Process. | 2024 | Zheng Zhang 0006, Xu Yuan 0007, Lei Zhu 0002, Jingkuan Song, Liqiang Nie | 33 |
5 | Untargeted Backdoor Attack Against Deep Neural Networks With Imperceptible Trigger. | IEEE Trans. Ind. Informatics | 2024 | Mingfu Xue, Yinghao Wu, Shifeng Ni, Leo Yu Zhang, Yushu Zhang, Weiqiang Liu 0001 | 20 |
6 | Invisible Backdoor Attack With Dynamic Triggers Against Person Re-Identification. | IEEE Trans. Inf. Forensics Secur. | 2024 | Wenli Sun, Xinyang Jiang, Shuguang Dou, Dongsheng Li 0002, Duoqian Miao, Cheng Deng, Cairong Zhao | 19 |
7 | Invisible backdoor learning in regional transform domain. | Neural Comput. Appl. | 2024 | Yuyuan Sun, Yuliang Lu, Xuehu Yan, Xuan Wang | 36 |
8 | SilentTrig: An imperceptible backdoor attack against speaker identification with hidden triggers. | Pattern Recognit. Lett. | 2024 | Yu Tang, Lijuan Sun, Xiaolong Xu 0002 | 177 |
9 | Backdoor Attack with Sparse and Invisible Trigger. | CoRR | 2023 | Yinghua Gao, Yiming Li 0004, Xueluan Gong, Shu-Tao Xia, Qian Wang | abs/2306.06209 |
10 | Imperceptible Sample-Specific Backdoor to DNN with Denoising Autoencoder. | CoRR | 2023 | Jiliang Zhang 0002, Jing Xu, Zhi Zhang 0001, Yansong Gao | abs/2302.04457 |
11 | Invisible Threats: Backdoor Attack in OCR Systems. | CoRR | 2023 | Mauro Conti, Nicola Farronato, Stefanos Koffas, Luca Pajola, Stjepan Picek | abs/2310.08259 |
12 | SATBA: An Invisible Backdoor Attack Based On Spatial Attention. | CoRR | 2023 | Huasong Zhou, Zhenyu Wang, Xiaowei Xu | abs/2302.13056 |
13 | Towards Invisible Backdoor Attacks in the Frequency Domain against Deep Neural Networks. | CoRR | 2023 | Xinrui Liu, Yajie Wang, Yu-An Tan 0001, Kefan Qiu, Yuanzhang Li | abs/2305.10596 |
14 | DIHBA: Dynamic, invisible and high attack success rate boundary backdoor attack with low poison ratio. | Comput. Secur. | 2023 | Binhao Ma, Can Zhao, Dejun Wang, Bo Meng | 129 |
15 | Invisible Backdoor Attacks Using Data Poisoning in Frequency Domain. | ECAI | 2023 | Chang Yue, Peizhuo Lv, Ruigang Liang, Kai Chen 0012 | |
16 | Invisible Encoded Backdoor attack on DNNs using Conditional GAN. | ICCE | 2023 | Iram Arshad, Yuansong Qiao, Brian Lee 0001, Yuhang Ye | |
17 | IMTM: Invisible Multi-trigger Multimodal Backdoor Attack. | NLPCC | 2023 | Zhicheng Li, Piji Li, Xuan Sheng, Changchun Yin, Lu Zhou 0002 | |
18 | Training Data Leakage via Imperceptible Backdoor Attack. | SSCI | 2023 | Xiangkai Yang, Wenjian Luo, Qi Zhou, Zhijian Chen | |
19 | FRIB: Low-poisoning Rate Invisible Backdoor Attack based on Feature Repair. | CoRR | 2022 | Hui Xia 0001, Xiugui Yang, Xiangyun Qian, Rui Zhang 0050 | abs/2207.12863 |
20 | False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger. | CoRR | 2022 | Muhammad Umer, Robi Polikar | abs/2202.04479 |
21 | ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks. | CoRR | 2022 | Tim Clifford, Ilia Shumailov, Yiren Zhao, Ross J. Anderson, Robert D. Mullins | abs/2210.00108 |
22 | Imperceptible and Multi-channel Backdoor Attack against Deep Neural Networks. | CoRR | 2022 | Mingfu Xue, Shifeng Ni, Yinghao Wu, Yushu Zhang, Jian Wang 0038, Weiqiang Liu 0001 | abs/2201.13164 |
23 | Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain. | CoRR | 2022 | Chang Yue, Peizhuo Lv, Ruigang Liang, Kai Chen 0012 | abs/2207.04209 |
24 | Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models. | CoRR | 2022 | Lukas Struppek, Dominik Hintersdorf, Kristian Kersting | abs/2211.02408 |
25 | An Invisible Black-Box Backdoor Attack Through Frequency Domain. | ECCV | 2022 | Tong Wang, Yuan Yao 0001, Feng Xu 0007, Shengwei An, Hanghang Tong, Ting Wang | |
26 | RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN. | ECCV | 2022 | Huy Phan, Cong Shi 0004, Yi Xie 0001, Tianfang Zhang, Zhuohang Li, Tianming Zhao 0001, Jian Liu 0001, Yan Wang 0003, Yingying Chen 0001, Bo Yuan 0001 | |
27 | Invisible and Efficient Backdoor Attacks for Compressed Deep Neural Networks. | ICASSP | 2022 | Huy Phan, Yi Xie 0001, Jian Liu 0001, Yingying Chen 0001, Bo Yuan 0001 | |
28 | Detecting and Mitigating Backdoor Attacks with Dynamic and Invisible Triggers. | ICONIP | 2022 | Zhibin Zheng, Zhongyun Hua, Leo Yu Zhang | |
29 | Poison Ink: Robust and Invisible Backdoor Attack. | IEEE Trans. Image Process. | 2022 | Jie Zhang 0073, Dongdong Chen 0001, Qidong Huang, Jing Liao 0001, Weiming Zhang 0001, Huamin Feng, Gang Hua 0001, Nenghai Yu | 31 |
30 | Imperceptible Backdoor Attack: From Input Space to Feature Representation. | IJCAI | 2022 | Nan Zhong, Zhenxing Qian, Xinpeng Zhang 0001 | |
31 | Low-Poisoning Rate Invisible Backdoor Attack Based on Important Neurons. | WASA | 2022 | Xiugui Yang, Xiangyun Qian, Rui Zhang 0050, Ning Huang, Hui Xia 0001 | |
32 | Reverse engineering imperceptible backdoor attacks on deep neural networks for detection and training set cleansing. | Comput. Secur. | 2021 | Zhen Xiang, David J. Miller 0001, George Kesidis | 106 |
33 | L-Red: Efficient Post-Training Detection of Imperceptible Backdoor Attacks Without Access to the Training Set. | ICASSP | 2021 | Zhen Xiang, David J. Miller 0001, George Kesidis | |
34 | Invisible Backdoor Attack with Sample-Specific Triggers. | ICCV | 2021 | Yuezun Li, Yiming Li 0004, Baoyuan Wu, Longkang Li, Ran He 0001, Siwei Lyu | |
35 | LIRA: Learnable, Imperceptible and Robust Backdoor Attacks. | ICCV | 2021 | Khoa D. Doan, Yingjie Lao, Weijie Zhao 0001, Ping Li 0001 | |
36 | WaNet - Imperceptible Warping-based Backdoor Attack. | ICLR | 2021 | Tuan Anh Nguyen, Anh Tuan Tran 0001 | |
37 | Invisible Backdoor Attacks on Deep Neural Networks Via Steganography and Regularization. | IEEE Trans. Dependable Secur. Comput. | 2021 | Shaofeng Li, Minhui Xue, Benjamin Zi Hao Zhao, Haojin Zhu, Xinpeng Zhang 0001 | 18 |
38 | Backdoor Attack with Imperceptible Input and Latent Modification. | NeurIPS | 2021 | Khoa D. Doan, Yingjie Lao, Ping Li 0001 | |
39 | Invisible Backdoor Attacks Against Deep Neural Networks. | CoRR | 2019 | Shaofeng Li, Benjamin Zi Hao Zhao, Jiahao Yu, Minhui Xue, Dali Kaafar, Haojin Zhu | abs/1909.02742 |

## Graph Learning

No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | A backdoor attack against link prediction tasks with graph neural networks. | CoRR | 2024 | Jiazhu Dai, Haoyu Sun | abs/2401.02663 |
2 | REPQC: Reverse Engineering and Backdooring Hardware Accelerators for Post-quantum Cryptography. | CoRR | 2024 | Samuel Pagliarini, Aikata, Malik Imran, Sujoy Sinha Roy | abs/2403.09352 |
3 | Securing GNNs: Explanation-Based Identification of Backdoored Training Graphs. | CoRR | 2024 | Jane Downer, Ren Wang, Binghui Wang | abs/2403.18136 |
4 | Effective Backdoor Attack on Graph Neural Networks in Spectral Domain. | IEEE Internet Things J. | 2024 | Xiangyu Zhao, Hanzhou Wu, Xinpeng Zhang 0001 | 11 |
5 | Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs. | IEEE Trans. Comput. Soc. Syst. | 2024 | Haibin Zheng, Haiyang Xiong, Jinyin Chen, Haonan Ma, Guohan Huang | 11 |
6 | Backdoor Attacks on Graph Neural Networks Trained with Data Augmentation. | IEICE Trans. Fundam. Electron. Commun. Comput. Sci. | 2024 | Shingo Yashiki, Chako Takahashi, Koutarou Suzuki | 107 |
7 | Poster: Multi-target & Multi-trigger Backdoor Attacks on Graph Neural Networks. | CCS | 2023 | Jing Xu, Stjepan Picek | |
8 | Detecting Backdoors in Collaboration Graphs of Software Repositories. | CODASPY | 2023 | Tom Ganz, Inaam Ashraf, Martin Härterich, Konrad Rieck | |
9 | A semantic backdoor attack against Graph Convolutional Networks. | CoRR | 2023 | Jiazhu Dai, Zhipeng Xiong | abs/2302.14353 |
10 | OCGEC: One-class Graph Embedding Classification for DNN Backdoor Detection. | CoRR | 2023 | Haoyu Jiang, Haiyang Yu, Nan Li, Ping Yi | abs/2312.01585 |
11 | PoisonedGNN: Backdoor Attack on Graph Neural Networks-based Hardware Security Systems. | CoRR | 2023 | Lilas Alrahis, Satwik Patnaik, Muhammad Abdullah Hanif, Muhammad Shafique 0001, Ozgur Sinanoglu | abs/2303.14009 |
12 | Steganography for Neural Radiance Fields by Backdooring. | CoRR | 2023 | Weina Dong, Jia Liu, Yan Ke, Lifeng Chen, Wenquan Sun, Xiaozhong Pan | abs/2309.10503 |
13 | XGBD: Explanation-Guided Graph Backdoor Detection. | ECAI | 2023 | Zihan Guan 0001, Mengnan Du, Ninghao Liu | |
14 | Black-Box Graph Backdoor Defense. | ICA3PP | 2023 | Xiao Yang, Gaolei Li, Xiaoyi Tao, Chaofeng Zhang, Jianhua Li | |
15 | Graph Contrastive Backdoor Attacks. | ICML | 2023 | Hangfan Zhang, Jinghui Chen, Lu Lin 0001, Jinyuan Jia, Dinghao Wu | |
16 | PoisonedGNN: Backdoor Attack on Graph Neural Networks-Based Hardware Security Systems. | IEEE Trans. Computers | 2023 | Lilas Alrahis, Satwik Patnaik, Muhammad Abdullah Hanif, Muhammad Shafique 0001, Ozgur Sinanoglu | 72 |
17 | Rethinking the Trigger-injecting Position in Graph Backdoor Attack. | IJCNN | 2023 | Jing Xu, Gorka Abad, Stjepan Picek | |
18 | Feature-Based Graph Backdoor Attack in the Node Classification Task. | Int. J. Intell. Syst. | 2023 | Yang Chen, Zhonglin Ye, Haixing Zhao, Ying Wang | 2023 |
19 | Unnoticeable Backdoor Attacks on Graph Neural Networks. | WWW | 2023 | Enyan Dai, Minhua Lin, Xiang Zhang 0001, Suhang Wang | |
20 | Defending Against Backdoor Attack on Graph Nerual Network by Explainability. | CoRR | 2022 | Bingchen Jiang, Zhao Li | abs/2209.02902 |
21 | Neighboring Backdoor Attacks on Graph Convolutional Network. | CoRR | 2022 | Liang Chen 0001, Qibiao Peng, Jintang Li, Yang Liu 0245, Jiawei Chen 0007, Yong Li, Zibin Zheng | abs/2201.06202 |
22 | Backdooring Post-Quantum Cryptography: Kleptographic Attacks on Lattice-based KEMs. | IACR Cryptol. ePrint Arch. | 2022 | Prasanna Ravi, Shivam Bhasin, Anupam Chattopadhyay, Aikata, Sujoy Sinha Roy | 2022 |
23 | Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation. | IJCAI | 2022 | Jun Xia, Ting Wang 0001, Jiepin Ding, Xian Wei, Mingsong Chen | |
24 | Transferable Graph Backdoor Attack. | RAID | 2022 | Shuiqiao Yang, Bao Gia Doan, Paul Montague, Olivier Y. de Vel, Tamas Abraham, Seyit Camtepe, Damith C. Ranasinghe, Salil S. Kanhere | |
25 | A General Backdoor Attack to Graph Neural Networks Based on Explanation Method. | TrustCom | 2022 | Luyao Chen, Na Yan, Boyang Zhang, Zhaoyang Wang, Yu Wen, Yanfei Hu | |
26 | A General Framework for Defending Against Backdoor Attacks via Influence Graph. | CoRR | 2021 | Xiaofei Sun, Jiwei Li 0001, Xiaoya Li, Ziyao Wang, Tianwei Zhang 0004, Han Qiu 0001, Fei Wu 0001, Chun Fan | abs/2111.14309 |
27 | Backdoor Attack of Graph Neural Networks Based on Subgraph Trigger. | CollaborateCom | 2021 | Yu Sheng, Rong Chen, Guanyu Cai, Li Kuang | |
28 | Bias Busters: Robustifying DL-Based Lithographic Hotspot Detectors Against Backdooring Attacks. | IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. | 2021 | Kang Liu 0017, Benjamin Tan 0001, Gaurav Rajavendra Reddy, Siddharth Garg, Yiorgos Makris, Ramesh Karri | 40 |
29 | Training Data Poisoning in ML-CAD: Backdooring DL-Based Lithographic Hotspot Detectors. | IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. | 2021 | Kang Liu 0017, Benjamin Tan 0001, Ramesh Karri, Siddharth Garg | 40 |
30 | Backdoor Attacks to Graph Neural Networks. | SACMAT | 2021 | Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong | |
31 | Graph Backdoor. | USENIX Security Symposium | 2021 | Zhaohan Xi, Ren Pang, Shouling Ji, Ting Wang 0006 | |
32 | Explainability-based Backdoor Attacks Against Graph Neural Networks. | WiseML@WiSec | 2021 | Jing Xu, Minhui Xue, Stjepan Picek | |
33 | Cryptographic Primitives that Resist Backdooring and Subversion. | | 2020 | | |
34 | Cryptography with Disposable Backdoors. | Cryptogr. | 2019 | Kai-Min Chung, Marios Georgiou 0001, Ching-Yi Lai, Vassilis Zikas | 3 |
35 | On Cryptographic Attacks Using Backdoors for SAT. | AAAI | 2018 | Alexander A. Semenov, Oleg Zaikin 0002, Ilya V. Otpuschennikov, Stepan Kochemazov, Alexey Ignatiev | |
36 | Cryptography with Dispensable Backdoors. | IACR Cryptol. ePrint Arch. | 2018 | Kai-Min Chung, Marios Georgiou 0001, Ching-Yi Lai, Vassilis Zikas | 2018 |
37 | Controlled Randomness - A Defense Against Backdoors in Cryptographic Devices. | Mycrypt | 2016 | Lucjan Hanzlik, Kamil Kluczniak, Miroslaw Kutylowski | |

## Foundation Models

No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | Backdoor Attack on Unpaired Medical Image-Text Foundation Models: A Pilot Study on MedCLIP. | CoRR | 2024 | Ruinan Jin, Chun-Yin Huang, Chenyu You, Xiaoxiao Li | abs/2401.01911 |
2 | Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models. | CoRR | 2024 | Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong | abs/2402.14977 |
3 | Backdoor Attacks to Pre-trained Unified Foundation Models. | CoRR | 2023 | Zenghui Yuan, Yixin Liu, Kai Zhang 0039, Pan Zhou, Lichao Sun 0001 | abs/2302.09360 |

## Reinforcement Learning

No. | Title | Venue | Year | Author | Volume |
---|---|---|---|---|---|
1 | BadRL: Sparse Targeted Backdoor Attack against Reinforcement Learning. | AAAI | 2024 | Jing Cui, Yufei Han, Yuzhe Ma, Jianbin Jiao, Junge Zhang | |
2 | Recover Triggered States: Protect Model Against Backdoor Attack in Reinforcement Learning. | CoRR | 2023 | Hao Chen, Chen Gong 0005, Yizhe Wang, Xinwen Hou | abs/2304.00252 |
3 | Backdoor Attacks on Multi-Agent Reinforcement Learning-based Spectrum Management. | GLOBECOM | 2023 | Hongyi Zhang, Mingqian Liu, Yunfei Chen 0001 | |
4 | PolicyCleanse: Backdoor Detection and Mitigation for Competitive Reinforcement Learning. | ICCV | 2023 | Junfeng Guo, Ang Li, Lixu Wang, Cong Liu 0005 | |
5 | MARNet: Backdoor Attacks Against Cooperative Multi-Agent Reinforcement Learning. | IEEE Trans. Dependable Secur. Comput. | 2023 | Yanjiao Chen, Zhicong Zheng, Xueluan Gong | 20 |
6 | BIRD: Generalizable Backdoor Detection and Removal for Deep Reinforcement Learning. | NeurIPS | 2023 | Xuan Chen, Wenbo Guo 0002, Guanhong Tao 0001, Xiangyu Zhang 0001, Dawn Song | |
7 | Backdoor attacks against deep reinforcement learning based traffic signal control systems. | Peer Peer Netw. Appl. | 2023 | Heng Zhang 0001, Jun Gu, Zhikun Zhang 0001, Linkang Du, Yongmin Zhang, Yan Ren, Jian Zhang 0002, Hongran Li | 16 |
8 | Backdoor Detection in Reinforcement Learning. | CoRR | 2022 | Junfeng Guo, Ang Li, Cong Liu 0005 | abs/2202.03609 |
9 | Mind Your Data! Hiding Backdoors in Offline Reinforcement Learning Datasets. | CoRR | 2022 | Chen Gong 0005, Zhou Yang 0003, Yunpeng Bai, Junda He, Jieke Shi, Arunesh Sinha, Bowen Xu, Xinwen Hou, Guoliang Fan, David Lo 0001 | abs/2210.04688 |
10 | A Temporal-Pattern Backdoor Attack to Deep Reinforcement Learning. | GLOBECOM | 2022 | Yinbo Yu, Jiajia Liu 0001, Shouqing Li, Kepu Huang, Xudong Feng | |
11 | Provable Defense against Backdoor Policies in Reinforcement Learning. | NeurIPS | 2022 | Shubham Kumar Bharti, Xuezhou Zhang, Adish Singla, Jerry Zhu | |
12 | Stop-and-Go: Exploring Backdoor Attacks on Deep Reinforcement Learning-Based Traffic Congestion Control Systems. | IEEE Trans. Inf. Forensics Secur. | 2021 | Yue Wang 0055, Esha Sarkar, Wenqing Li, Michail Maniatakos, Saif Eddin Jabari | 16 |
13 | BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning. | IJCAI | 2021 | Lun Wang 0001, Zaynah Javed, Xian Wu, Wenbo Guo 0002, Xinyu Xing, Dawn Song | |
14 | Watch your back: Backdoor Attacks in Deep Reinforcement Learning-based Autonomous Vehicle Control Systems. | CoRR | 2020 | Yue Wang 0055, Esha Sarkar, Michail Maniatakos, Saif Eddin Jabari | abs/2003.07859 |
15 | TrojDRL: Evaluation of Backdoor Attacks on Deep Reinforcement Learning. | DAC | 2020 | Panagiota Kiourti, Kacper Wardega, Susmit Jha, Wenchao Li 0001 | |
Please visit the full list (hosted on shinyapps) for more papers and a better search experience.
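If you prefer to search offline, a minimal sketch like the one below can filter the tables in this README by keyword. It assumes the file is saved locally and that data rows follow the `No. | Title | Venue | Year | Author | Volume |` layout used above; the function and variable names are illustrative, not part of any official tooling.

```python
import re

def search_papers(text: str, keyword: str):
    """Return (title, venue, year) tuples for table rows containing the keyword."""
    results = []
    for line in text.splitlines():
        cells = [c.strip() for c in line.split("|")]
        # A data row starts with a running number and has at least the
        # No./Title/Venue/Year columns; header and separator rows do not.
        if len(cells) >= 4 and re.fullmatch(r"\d+", cells[0]):
            if keyword.lower() in line.lower():
                results.append((cells[1], cells[2], cells[3]))
    return results

# Example with two rows copied from the tables above:
sample = (
    "1 | WaNet - Imperceptible Warping-based Backdoor Attack. | ICLR | 2021 | "
    "Tuan Anh Nguyen, Anh Tuan Tran 0001 | |\n"
    "2 | Graph Backdoor. | USENIX Security Symposium | 2021 | "
    "Zhaohan Xi, Ren Pang, Shouling Ji, Ting Wang 0006 | |"
)
print(search_papers(sample, "warping"))
```

Running it on the full README text instead of `sample` gives a quick keyword search across all categories.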