Reading list for adversarial perspective and robustness in deep reinforcement learning.
Do you want to learn AI Security but don't know where to start? Take a look at this map.
XSSGAI is the first-ever AI-powered XSS (Cross-Site Scripting) payload generator. It leverages machine learning and deep learning to create novel payloads based on patterns from real-world XSS attacks.
Measure and Boost Backdoor Robustness
Minimal reproducible PoC of three ML attacks (adversarial examples, model extraction, membership inference) on a credit scoring model. Includes the pipeline, visualizations, and defenses.
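To give a feel for what a membership-inference PoC of that kind can look like, here is a minimal sketch (not the repo's actual code) that uses a deliberately overfit scikit-learn classifier as a stand-in for the credit scoring model; the dataset, model choice, and confidence threshold are all assumptions made for the example.

```python
# Minimal sketch (illustrative, not the repo's code): a confidence-threshold
# membership inference attack against an intentionally overfit stand-in for a
# credit scoring model. Dataset, model, and threshold are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# An unpruned tree memorizes its training set, which is what the attack exploits.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

def true_label_confidence(model, X, y):
    """Model's predicted probability for each sample's true label."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

# The attacker flags a record as a training-set member when the model is
# highly confident in that record's true label.
threshold = 0.9
flagged_members = true_label_confidence(model, X_train, y_train) > threshold
flagged_nonmembers = true_label_confidence(model, X_test, y_test) > threshold

print("members flagged:    ", flagged_members.mean())    # close to 1.0
print("non-members flagged:", flagged_nonmembers.mean())  # noticeably lower
```

The gap between the two flag rates is the signal the attacker exploits; defenses such as regularization or differentially private training shrink that gap.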
AAAI 2025 Tutorial on Machine Learning Safety
A curated collection of privacy-preserving machine learning techniques, tools, and practical evaluations. Focuses on differential privacy, federated learning, secure computation, and synthetic data generation for implementing privacy in ML workflows.
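For the differential-privacy part of that collection, a minimal sketch of the Laplace mechanism applied to a counting query is shown below; the query, epsilon, and toy data are assumptions chosen for the example, not taken from any listed tool.

```python
# Minimal sketch (illustrative): the Laplace mechanism for releasing a
# differentially private count. The predicate, epsilon, and toy data are
# assumptions made for this example.
import numpy as np

def dp_count(values, predicate, epsilon=1.0):
    """Return a noisy count satisfying epsilon-differential privacy.

    A counting query has L1 sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale 1/epsilon
    is sufficient.
    """
    rng = np.random.default_rng()
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example query: how many applicants in a toy dataset earn below 30k?
incomes = [12_000, 28_000, 45_000, 31_000, 22_000, 60_000]
print(dp_count(incomes, lambda x: x < 30_000, epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; the scale of the noise is tied to the query's sensitivity, not to the size of the dataset.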
Awesome-DL-Security-and-Privacy-Papers
Code for "On the Privacy Effect of Data Enhancement via the Lens of Memorization"
A software framework for evaluating the robustness of malware detection methods against adversarial attacks.