
Welcome to the Adversarial Robustness Toolbox wiki!

Exposing and fixing vulnerabilities in software systems is nothing new, and malicious actors are finding increasingly insidious ways to exploit vulnerabilities in AI systems as well. The potential for adversarial AI to deceive both humans and computers is significant. Consider the use of AI in self-driving autonomous vehicles: contamination of an image training set could lead to misclassified road signs, with dangerous real-world consequences.

Researchers, AI developers, and data scientists are getting together to tackle the tough questions:

  • Do we know where every data item in the training and test sets came from and whether they have been tampered with?
  • Do we know how to filter and transform input to AI systems in a wide enough range of ways to have confidence that the outcome is robust?
  • Do we have ways to test the output of classifiers to ensure they are not brittle?

Announcing the Adversarial Robustness Toolbox

To counter these threats, IBM Research Ireland is releasing the Adversarial Robustness Toolbox (ART), a software library that supports both researchers and developers in defending deep neural networks (DNNs) against adversarial attacks, making AI systems more secure.

The Adversarial Robustness Toolbox is designed to support researchers and AI developers in creating novel defense techniques and deploying practical defenses of real-world AI systems. For AI developers, the library provides interfaces that support the composition of comprehensive defense systems using individual methods as building blocks.

ART provides implementations of many state-of-the-art methods for attacking visual recognition classifiers, for example (a usage sketch follows the list):

  • DeepFool
  • Fast Gradient Method (FGM)
  • Jacobian Saliency Map Attack (JSMA)
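As an illustration, here is a minimal sketch of mounting the Fast Gradient Method against a Keras image classifier. Module paths follow a recent ART release and may differ in older versions; `model`, `x_test`, and `y_test` are assumed placeholders for a compiled Keras model and one-hot-labeled test data, not part of ART itself.

```python
import numpy as np
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import KerasClassifier

# Wrap a compiled Keras model (assumed to exist as `model`) so that
# ART can query its loss gradients; pixels are assumed to lie in [0, 1].
classifier = KerasClassifier(model=model, clip_values=(0.0, 1.0))

# eps is the L-infinity perturbation budget of the attack.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_test_adv = attack.generate(x=x_test)

# Compare accuracy on clean vs. adversarial inputs (y_test one-hot).
acc_clean = np.mean(
    np.argmax(classifier.predict(x_test), axis=1) == np.argmax(y_test, axis=1))
acc_adv = np.mean(
    np.argmax(classifier.predict(x_test_adv), axis=1) == np.argmax(y_test, axis=1))
print(f"clean accuracy: {acc_clean:.3f}, adversarial accuracy: {acc_adv:.3f}")
```

A large drop from clean to adversarial accuracy is the typical signal that a model is vulnerable to this class of attack.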

On the defense side of the spectrum, a number of methods are supported as well, for example (see the sketch after the list):

  • Feature squeezing
  • Spatial smoothing
  • Label smoothing
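The preprocessing defenses can be applied to inputs before they reach the classifier. Below is a minimal sketch using two of the methods above, again assuming a recent ART release; `x_test` is hypothetical image data scaled to [0, 1].

```python
from art.defences.preprocessor import FeatureSqueezing, SpatialSmoothing

# Feature squeezing: reduce the color depth of each pixel to 4 bits,
# removing the high-frequency perturbations many attacks rely on.
squeezer = FeatureSqueezing(clip_values=(0.0, 1.0), bit_depth=4)
x_squeezed, _ = squeezer(x_test)

# Spatial smoothing: replace each pixel with the median of its
# 3x3 neighborhood, another simple input-denoising defense.
smoother = SpatialSmoothing(window_size=3)
x_smoothed, _ = smoother(x_test)
```

Because these defenses act on inputs rather than on the model, they can be composed with one another and layered in front of an existing classifier without retraining it.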

The details behind this work from IBM Research can be found in the research paper (linked below). The ART toolbox is developed with the goal of helping developers better understand and apply three capabilities (a sketch of the first two follows the list):

  • Measuring model robustness
  • Model hardening
  • Runtime detection
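To make the first two concrete, here is a hedged sketch of one hardening technique, adversarial training, followed by an empirical robustness measurement. Module paths assume a recent ART release; `classifier`, `x_train`, `y_train` (one-hot labels), and `x_test` are the hypothetical objects from the sketches above.

```python
from art.attacks.evasion import FastGradientMethod
from art.defences.trainer import AdversarialTrainer
from art.metrics import empirical_robustness

# Model hardening: retrain the wrapped classifier on a mix of clean
# and adversarial examples (ratio=0.5 replaces half of each batch
# with adversarial samples crafted on the fly).
attack = FastGradientMethod(estimator=classifier, eps=0.1)
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x_train, y_train, nb_epochs=10, batch_size=128)

# Measuring model robustness: the average minimal perturbation,
# relative to the input norm, that FGM needs to change predictions.
score = empirical_robustness(classifier, x_test, attack_name="fgsm")
print(f"empirical robustness: {score:.4f}")
```

Higher empirical robustness after hardening indicates that larger perturbations are now needed to fool the model.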

Available in open source and supporting multiple frameworks!

In partnership with IBM’s Center for Open-Source Data and Artificial Intelligence Technologies (CODAIT), IBM Research also recently released Fabric for Deep Learning (FfDL), which provides a consistent way to deploy, train, and visualize deep learning jobs across multiple frameworks like TensorFlow, Caffe, PyTorch, and Keras. With the Adversarial Robustness Toolbox, we are taking this multi-framework support forward.

You can take these libraries and launch attacks on models trained with FfDL, or with Deep Learning as a Service within Watson Studio.

Links:

  • ART IBM research paper
  • ART GitHub repository
  • FfDL GitHub repository
  • FfDL blog
  • DLaaS in Watson
