πŸ“„ [Talk] OFFZONE 2022 / ODS Data Halloween 2022: Black-box attacks on ML models + with use of open-source tools


[ Have a look at the presentation slides: slides-OFFZONE.pdf / slides-ODS.pdf ]
[ Related demonstration (Jupyter notebook): demo.ipynb ]

Overview | Attacks | Tools | More on the topic


An overview of black-box attacks on AI systems and of the open-source tools that can be useful during security testing of machine learning models.

πŸ“¦ Overview

demo.ipynb:
A demonstration of how multifunctional tools can be used during security testing of the machine learning models digits_blackbox & digits_keras, which are trained on the MNIST dataset and provided with Counterfit as example targets. A minimal, hedged sketch of such a black-box attack is included after the slide outline below.

Slides:
 – Machine Learning in products
 – Threats to Machine Learning models
 – Example model overview
 – Evasion attacks
 – Model inversion attacks
 – Model extraction attacks
 – Defences
 – Adversarial Robustness Toolbox
 – Counterfit
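
As a flavour of what demo.ipynb shows, the sketch below stages the same black-box setting with ART alone: the model is exposed only through a prediction function, wrapped in ART's BlackBoxClassifier, and attacked with the decision-based HopSkipJump evasion attack. It is not the notebook's code; the stand-in oracle (a scikit-learn model on the small 8x8 digits set instead of MNIST), the predict_oracle helper and the attack parameters are all illustrative assumptions.

```python
"""
Minimal sketch (not the notebook's code): a decision-based black-box evasion
attack with ART. The "victim" here is a stand-in scikit-learn model on the
small 8x8 digits set; the real demo targets are Counterfit's MNIST models.
"""
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import BlackBoxClassifier
from art.attacks.evasion import HopSkipJump

# Stand-in victim: from the attacker's point of view, only predictions leak out.
digits = load_digits()
X = digits.data.astype(np.float32)          # shape (n, 64), pixel values 0..16
victim = LogisticRegression(max_iter=1000).fit(X, digits.target)

def predict_oracle(x: np.ndarray) -> np.ndarray:
    """Prediction endpoint as the attacker sees it: probabilities only."""
    return victim.predict_proba(x)

# Wrap the endpoint so ART can attack it without gradients or model internals.
classifier = BlackBoxClassifier(
    predict_fn=predict_oracle,
    input_shape=(64,),
    nb_classes=10,
    clip_values=(0.0, 16.0),
)

# HopSkipJump is decision-based: it only needs the predicted class label.
attack = HopSkipJump(classifier, targeted=False,
                     max_iter=10, max_eval=1000, init_eval=10, verbose=False)

x_clean = X[:1]
x_adv = attack.generate(x=x_clean)

print("clean prediction:      ", np.argmax(predict_oracle(x_clean), axis=1))
print("adversarial prediction:", np.argmax(predict_oracle(x_adv), axis=1))
print("L2 perturbation:       ", float(np.linalg.norm(x_adv - x_clean)))
```

Because HopSkipJump only relies on the returned prediction, the same wrapper works against a remote scoring endpoint just as well as against a local model.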

βš”οΈ Attacks

πŸ”§ Tools

 – [ Trusted AI, IBM ] Adversarial Robustness Toolbox (ART): :octocat: Trusted-AI/adversarial-robustness-toolbox
 – [ Microsoft Azure ] Counterfit: :octocat: Azure/counterfit

πŸ“‘ More on the topic