Adversarial Robustness Toolbox (ART v0.3.0)


This is a library dedicated to adversarial machine learning. Its purpose is to allow rapid crafting and analysis of attack and defense methods for machine learning models. The Adversarial Robustness Toolbox provides implementations of many state-of-the-art methods for attacking and defending classifiers.

The library is still under development. Feedback, bug reports and extensions are highly appreciated. Get in touch with us on Slack (invite here)!

Supported attack and defense methods

The library contains implementations of the following evasion attacks:
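Evasion attacks perturb an input at test time so that the model misclassifies it while the change stays small. As a hedged illustration of the general idea (a plain NumPy sketch of the Fast Gradient Sign Method on a toy logistic-regression model, not ART's API):

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One-step FGSM on logistic regression: x' = x + eps * sign(grad_x loss)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid probability of class 1
    grad_x = (p - y) * w                    # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy model that classifies by the sign of the first feature.
w = np.array([2.0, 0.0])
b = 0.0
x = np.array([0.3, 0.5])                    # classified as class 1 (x @ w > 0)
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.5)

print(x_adv)                                # first feature pushed across the boundary
```

After the attack, `x_adv @ w + b` is negative, so the model's prediction flips even though only one feature moved by `eps`. ART's attack implementations follow this same gradient-based recipe against real classifiers.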

The following defense methods are also supported:
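Many defenses are input transformations applied before classification (ART v0.3.0 adds JPEG compression and total variance minimization, for example). As a hedged, NumPy-only sketch of the idea — a simple bit-depth quantization, which is a related transform and not ART's implementation:

```python
import numpy as np

def quantize(x, bits=3):
    """Reduce input bit depth: snap each value in [0, 1] to 2**bits - 1 levels.
    Adversarial perturbations smaller than the quantization step are erased."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

x = np.array([0.10, 0.52, 0.97])
x_adv = x + 0.01                                  # a small adversarial perturbation
print(quantize(x, bits=3))
print(np.allclose(quantize(x), quantize(x_adv)))  # True: perturbation removed
```

The design trade-off is the same as for the real preprocessing defenses: a coarser transform removes larger perturbations but also destroys more legitimate signal, which can cost clean accuracy.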

ART also implements methods for detecting adversarial samples:

  • Basic detector based on inputs
  • Detector trained on the activations of a specific layer
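A detector of the second kind treats the hidden activations a network produces for an input as features and fits a separate model on them. A hedged NumPy sketch of that idea, using a simple distance-to-centroid rule in place of a trained detector (illustrative only, not ART's API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for layer activations: clean inputs cluster tightly,
# adversarial inputs tend to drift away from the clean centroid.
clean_acts = rng.normal(loc=0.0, scale=1.0, size=(200, 16))
centroid = clean_acts.mean(axis=0)
dists = np.linalg.norm(clean_acts - centroid, axis=1)
threshold = np.quantile(dists, 0.95)  # accept 95% of clean activations

def is_adversarial(act):
    """Flag an activation vector whose distance to the clean centroid is atypical."""
    return np.linalg.norm(act - centroid) > threshold

print(is_adversarial(centroid))         # False: right at the clean centroid
print(is_adversarial(centroid + 10.0))  # True: far outside the clean cluster
```

The threshold choice fixes the false-positive rate on clean data; a real detector would replace the distance rule with a classifier trained on clean and adversarial activations.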

The following detector of poisoning attacks is also supported:

Setup

Installation with pip

The toolbox is designed to run with Python 2 and 3. The library can be installed from the PyPI repository using pip:

pip install adversarial-robustness-toolbox

Manual installation

For the most recent version of the library, either download the source code or clone the repository into a directory of your choice:

git clone https://github.com/IBM/adversarial-robustness-toolbox

To install ART, do the following in the project folder:

pip install .

The library comes with a basic set of unit tests. To verify your installation, you can run all the unit tests by calling the test script in the project folder:

bash run_tests.sh

Running ART

Some examples of how to use ART when writing your own code can be found in the examples folder. See examples/README.md for more information about what each example does. To run an example, use the following command:

python examples/<example_name>.py

The notebooks folder contains Jupyter notebooks with detailed walkthroughs of some usage scenarios.

Citing ART

If you use ART for research, please consider citing the following reference paper:

@article{art2018,
    title = {Adversarial Robustness Toolbox v0.3.0},
    author = {Nicolae, Maria-Irina and Sinn, Mathieu and Tran, Minh~Ngoc and Rawat, Ambrish and Wistuba, Martin and Zantedeschi, Valentina and Baracaldo, Nathalie and Chen, Bryant and Ludwig, Heiko and Molloy, Ian and Edwards, Ben},
    journal = {CoRR},
    volume = {1807.01069},
    year = {2018},
    url = {https://arxiv.org/pdf/1807.01069}
}