Reading list for adversarial perspective and robustness in deep reinforcement learning.
Updated Jul 25, 2025
Do you want to learn AI security but don't know where to start? Take a look at this map.
XSSGAI is the first-ever AI-powered XSS (Cross-Site Scripting) payload generator. It leverages machine learning and deep learning to create novel payloads based on patterns from real-world XSS attacks.
Measure and Boost Backdoor Robustness
An open-source knowledge base of defensive countermeasures to protect AI/ML systems. Features interactive views and maps defenses to known threats from frameworks like MITRE ATLAS, MAESTRO, and OWASP.
Minimal reproducible PoC of 3 ML attacks (adversarial examples, model extraction, membership inference) on a credit scoring model. Includes pipeline, visualizations, and defenses.
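To make the first of those attacks concrete: an adversarial example perturbs an input just enough to flip a model's decision. A minimal, self-contained sketch of the Fast Gradient Sign Method (FGSM) against a toy logistic-regression scorer is shown below; the weights, features, and epsilon are illustrative assumptions, not values from the repository above.

```python
import math

# Hypothetical logistic-regression "credit scorer": w and b are made-up weights.
def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))  # probability of the positive class

def fgsm(w, b, x, y, eps):
    """Fast Gradient Sign Method: move each feature by eps in the
    direction that increases the loss for true label y."""
    p = predict(w, b, x)
    # For binary cross-entropy, d(loss)/dx_i simplifies to (p - y) * w_i.
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [1.5, -2.0, 0.5], 0.1
x, y = [0.2, 0.4, 0.6], 1            # applicant labelled "good credit"
x_adv = fgsm(w, b, x, y, eps=0.3)
print(predict(w, b, x), predict(w, b, x_adv))  # the score drops after perturbation
```

Even this tiny perturbation budget (0.3 per feature) is enough to push the toy model's score well below its clean value, which is the core intuition behind evasion attacks on tabular models.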
A curated collection of privacy-preserving machine learning techniques, tools, and practical evaluations. Focuses on differential privacy, federated learning, secure computation, and synthetic data generation for implementing privacy in ML workflows.
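Of the techniques listed there, differential privacy is the easiest to demonstrate in a few lines. The sketch below implements the classic Laplace mechanism for a counting query; the query, sensitivity, and epsilon are illustrative assumptions rather than anything taken from the collection above.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release a numeric query answer with epsilon-differential privacy
    by adding Laplace noise with scale b = sensitivity / epsilon."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                      # uniform on (-0.5, 0.5)
    # Inverse-CDF sampling of Laplace(0, scale).
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

rng = random.Random(0)
# A counting query ("how many applicants defaulted?") changes by at most 1
# when one record is added or removed, so its sensitivity is 1.
noisy_count = laplace_mechanism(true_value=100, sensitivity=1, epsilon=1.0, rng=rng)
print(noisy_count)  # close to 100, but randomized
```

Smaller epsilon means larger noise and stronger privacy; repeated releases compose, so the per-query epsilon must be budgeted across a workflow.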
AAAI 2025 Tutorial on Machine Learning Safety
A software framework for evaluating the robustness of malware detection methods against adversarial attacks.
Code for "On the Privacy Effect of Data Enhancement via the Lens of Memorization"
Awesome-DL-Security-and-Privacy-Papers
Control a 5-DOF Lynxmotion robotic arm using a vision language model for object detection and task planning