updated the eagerpy reference
Jonas Rauber committed Aug 11, 2020
1 parent 8394996 commit 7346233
Showing 2 changed files with 5 additions and 5 deletions.
8 changes: 4 additions & 4 deletions paper/paper.bib
@@ -29,12 +29,12 @@ @inproceedings{rauber2017foolbox
url={https://arxiv.org/abs/1707.04131},
}

-@misc{eagerpy,
-title={{EagerPy}: Writing Code That Works Natively with {PyTorch}, {TensorFlow}, {JAX} and {NumPy}},
+@article{rauber2020eagerpy,
+title={{EagerPy}: Writing Code That Works Natively with {PyTorch}, {TensorFlow}, {JAX}, and {NumPy}},
author={Rauber, Jonas and Bethge, Matthias and Brendel, Wieland},
-url={https://eagerpy.jonasrauber.de},
+journal={arXiv preprint arXiv:2008.04175},
year={2020},
-note={Manuscript in preparation},
+url={https://eagerpy.jonasrauber.de},
}

@misc{numpy,
2 changes: 1 addition & 1 deletion paper/paper.md
@@ -37,7 +37,7 @@ Machine learning has made enormous progress in recent years and is now being use

# Statement of need

-Evaluating the adversarial robustness of machine learning models is crucial to understanding their shortcomings and quantifying the implications on safety, security, and interpretability. Foolbox Native is the first adversarial robustness toolbox that is both fast and framework-agnostic. This is important because modern machine learning models such as deep neural networks are often computationally expensive and are implemented in different frameworks such as PyTorch and TensorFlow. Foolbox Native combines the framework-agnostic design of the original Foolbox [@rauber2017foolbox] with real batch support and native performance in PyTorch, TensorFlow, and JAX, all using a single codebase without code duplication. To achieve this, all adversarial attacks have been rewritten from scratch and now use EagerPy [@eagerpy] instead of NumPy [@numpy] to interface *natively* with the different frameworks.
+Evaluating the adversarial robustness of machine learning models is crucial to understanding their shortcomings and quantifying the implications on safety, security, and interpretability. Foolbox Native is the first adversarial robustness toolbox that is both fast and framework-agnostic. This is important because modern machine learning models such as deep neural networks are often computationally expensive and are implemented in different frameworks such as PyTorch and TensorFlow. Foolbox Native combines the framework-agnostic design of the original Foolbox [@rauber2017foolbox] with real batch support and native performance in PyTorch, TensorFlow, and JAX, all using a single codebase without code duplication. To achieve this, all adversarial attacks have been rewritten from scratch and now use EagerPy [@rauber2020eagerpy] instead of NumPy [@numpy] to interface *natively* with the different frameworks.

This is great for both users and developers of adversarial attacks. Users can efficiently evaluate the robustness of different models in different frameworks using the same set of state-of-the-art adversarial attacks, thus obtaining comparable results. Attack developers do not need to choose between supporting just one framework or reimplementing their new adversarial attack multiple times and dealing with code duplication. In addition, they both benefit from the comprehensive type annotations [@pep484] in Foolbox Native to catch bugs even before running their code.

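The changed paragraph above attributes Foolbox Native's framework-agnostic performance to EagerPy, the reference this commit updates. As a rough illustration of what "interfacing natively" means, here is a minimal sketch (not part of the commit) assuming EagerPy and PyTorch are installed; the same NumPy-style code would run unchanged on TensorFlow or JAX tensors because EagerPy dispatches each operation to the wrapped framework:

import eagerpy as ep
import torch

# Wrap a native PyTorch tensor in a framework-agnostic EagerPy tensor.
x = ep.astensor(torch.zeros(4, 3, 32, 32))

# NumPy-like operations execute natively in the underlying framework,
# so identical code works for TensorFlow tensors or JAX arrays.
perturbed = (x + 0.1).clip(0, 1)
norms = perturbed.square().sum(axis=(1, 2, 3)).sqrt()

# .raw unwraps back to the original framework's tensor type (torch.Tensor here).
print(type(norms.raw), norms.raw.shape)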
