Welcome to Foolbox
==================
Foolbox is a Python toolbox to create adversarial examples that fool neural networks.
It comes with support for many frameworks to build models including

* TensorFlow
* PyTorch
* Theano
* Keras
* Lasagne
* MXNet

and it is easy to extend to other frameworks.
In addition, it comes with a large collection of adversarial attacks, both gradient-based and black-box attacks. See :doc:`modules/attacks` for details.
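As a quick illustration, the following sketch shows the typical workflow: wrap a trained model and run an attack on an input. It assumes a pretrained Keras ResNet-50 classifier; the preprocessing values and the BGR channel flip are specific to that model and would change for other networks.

.. code-block:: python

   import foolbox
   import keras
   import numpy as np
   from keras.applications.resnet50 import ResNet50

   # wrap a trained Keras model (assumption: ResNet-50 with ImageNet weights)
   keras.backend.set_learning_phase(0)
   kmodel = ResNet50(weights='imagenet')
   preprocessing = (np.array([104, 116, 123]), 1)
   fmodel = foolbox.models.KerasModel(kmodel, bounds=(0, 255),
                                      preprocessing=preprocessing)

   # get an example image and its correct label
   image, label = foolbox.utils.imagenet_example()

   # run a gradient-based attack; returns a perturbed image that the
   # model misclassifies (ResNet-50 expects BGR, hence the channel flip)
   attack = foolbox.attacks.FGSM(fmodel)
   adversarial = attack(image[:, :, ::-1], label)

Black-box attacks follow the same pattern; only the attack class changes.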
Robust Vision Benchmark
-----------------------
You might want to have a look at our recently announced Robust Vision Benchmark, a benchmark for adversarial attacks and the robustness of machine learning models.
.. toctree::
   :maxdepth: 2
   :caption: User Guide

   user/installation
   user/tutorial
   user/examples
   user/adversarial
   user/development
.. toctree::
   :maxdepth: 2
   :caption: API Reference

   modules/models
   modules/criteria
   modules/distances
   modules/attacks
   modules/adversarial
   modules/utils