Commit

version bump
spencerwooo committed Oct 31, 2023
1 parent 73b10ad commit ba2d715
Showing 3 changed files with 38 additions and 26 deletions.
52 changes: 33 additions & 19 deletions README.md
@@ -8,10 +8,10 @@
A set of adversarial attacks implemented in PyTorch. _For internal use._

```shell
-# install from github source
+# Install from GitHub source
python -m pip install git+https://github.com/daisylab-bit/torchattack

-# install from gitee mirror
+# Install from Gitee mirror
python -m pip install git+https://gitee.com/daisylab-bit/torchattack
```

@@ -24,7 +24,7 @@ from torchvision.transforms import transforms
from torchattack import FGSM, MIFGSM

# Load a model
-model = resnet50(weights="DEFAULT")
+model = resnet50(weights='DEFAULT')

# Define transforms (you are responsible for normalizing the data if needed)
transform = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
@@ -36,29 +36,43 @@ attack = FGSM(model, transform, eps=0.03)
attack = MIFGSM(model, transform, eps=0.03, steps=10, decay=1.0)
```

Check out [`torchattack.utils.run_attack`](src/torchattack/utils.py) for a simple example.
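A helper like `run_attack` presumably wraps an evaluation loop of the following shape. This is a hedged, self-contained sketch: the function name, signature, and loop structure here are illustrative, not torchattack's actual API.

```python
import torch

def run_attack_demo(model, attack, loader):
    """Illustrative loop: compare clean vs. adversarial accuracy.

    `attack` is any callable mapping (inputs, labels) to perturbed inputs.
    """
    clean_correct = adv_correct = total = 0
    for x, y in loader:
        with torch.no_grad():
            clean_correct += (model(x).argmax(1) == y).sum().item()
        x_adv = attack(x, y)  # craft adversarial examples for this batch
        with torch.no_grad():
            adv_correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return clean_correct / total, adv_correct / total
```

With a real attack instance in place of the callable, adversarial accuracy should drop well below clean accuracy.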

## Attacks

Gradient-based attacks:

-| Name | Paper | `torchattack` class |
-| :--- | :---- | :------------------ |
-| FGSM | [Explaining and Harnessing Adversarial Examples](https://arxiv.org/abs/1412.6572) | `torchattack.FGSM` |
-| PGD | [Towards Deep Learning Models Resistant to Adversarial Attacks](https://arxiv.org/abs/1706.06083) | `torchattack.PGD` |
-| PGD (L2) | [Towards Deep Learning Models Resistant to Adversarial Attacks](https://arxiv.org/abs/1706.06083) | `torchattack.PGDL2` |
-| MI-FGSM | [Boosting Adversarial Attacks with Momentum](https://arxiv.org/abs/1710.06081) | `torchattack.MIFGSM` |
-| DI-FGSM | [Improving Transferability of Adversarial Examples with Input Diversity](https://arxiv.org/abs/1803.06978) | `torchattack.DIFGSM` |
-| TI-FGSM | [Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks](https://arxiv.org/abs/1904.02884) | `torchattack.TIFGSM` |
-| NI-FGSM | [Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks](https://arxiv.org/abs/1908.06281) | `torchattack.NIFGSM` |
-| SI-NI-FGSM | [Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks](https://arxiv.org/abs/1908.06281) | `torchattack.SINIFGSM` |
-| VMI-FGSM | [Enhancing the Transferability of Adversarial Attacks through Variance Tuning](https://arxiv.org/abs/2103.15571) | `torchattack.VMIFGSM` |
-| VNI-FGSM | [Enhancing the Transferability of Adversarial Attacks through Variance Tuning](https://arxiv.org/abs/2103.15571) | `torchattack.VNIFGSM` |
+| Name | Paper | `torchattack` class |
+| :--: | :---- | :------------------ |
+| FGSM ($\ell_\infty$) | [Explaining and Harnessing Adversarial Examples](https://arxiv.org/abs/1412.6572) | `torchattack.FGSM` |
+| PGD ($\ell_\infty$) | [Towards Deep Learning Models Resistant to Adversarial Attacks](https://arxiv.org/abs/1706.06083) | `torchattack.PGD` |
+| PGD ($\ell_2$) | [Towards Deep Learning Models Resistant to Adversarial Attacks](https://arxiv.org/abs/1706.06083) | `torchattack.PGDL2` |
+| MI-FGSM ($\ell_\infty$) | [Boosting Adversarial Attacks with Momentum](https://arxiv.org/abs/1710.06081) | `torchattack.MIFGSM` |
+| DI-FGSM ($\ell_\infty$) | [Improving Transferability of Adversarial Examples with Input Diversity](https://arxiv.org/abs/1803.06978) | `torchattack.DIFGSM` |
+| TI-FGSM ($\ell_\infty$) | [Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks](https://arxiv.org/abs/1904.02884) | `torchattack.TIFGSM` |
+| NI-FGSM ($\ell_\infty$) | [Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks](https://arxiv.org/abs/1908.06281) | `torchattack.NIFGSM` |
+| SI-NI-FGSM ($\ell_\infty$) | [Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks](https://arxiv.org/abs/1908.06281) | `torchattack.SINIFGSM` |
+| VMI-FGSM ($\ell_\infty$) | [Enhancing the Transferability of Adversarial Attacks through Variance Tuning](https://arxiv.org/abs/2103.15571) | `torchattack.VMIFGSM` |
+| VNI-FGSM ($\ell_\infty$) | [Enhancing the Transferability of Adversarial Attacks through Variance Tuning](https://arxiv.org/abs/2103.15571) | `torchattack.VNIFGSM` |
+| Admix ($\ell_\infty$) | [Admix: Enhancing the Transferability of Adversarial Attacks](https://arxiv.org/abs/2102.00436) | `torchattack.Admix` |

Others:

-| Name | Paper | `torchattack` class |
-| :--- | :---- | :------------------ |
-| DeepFool | [DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks](https://arxiv.org/abs/1511.04599) | `torchattack.DeepFool` |
-| GeoDA | [GeoDA: A Geometric Framework for Black-box Adversarial Attacks](https://arxiv.org/abs/2003.06468) | `torchattack.GeoDA` |
+| Name | Paper | `torchattack` class |
+| :--: | :---- | :------------------ |
+| DeepFool ($\ell_2$) | [DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks](https://arxiv.org/abs/1511.04599) | `torchattack.DeepFool` |
+| GeoDA ($\ell_\infty$ and $\ell_2$) | [GeoDA: A Geometric Framework for Black-box Adversarial Attacks](https://arxiv.org/abs/2003.06468) | `torchattack.GeoDA` |
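DeepFool, unlike the fixed-step FGSM family, repeatedly projects the input onto a local linearization of the decision boundary and stops once the label flips. A hedged sketch of that idea for the simplest case, a binary classifier emitting a single logit (the paper and the library implement the multi-class generalization):

```python
import torch

def deepfool_binary(model, x, max_iter=50, overshoot=0.02):
    """Minimal DeepFool for a binary classifier with a single logit f(x).

    Each step takes the closed-form minimal l2 move onto the linearized
    boundary f(x) = 0, then overshoots slightly to cross it.
    """
    x_adv = x.clone().detach()
    orig_sign = torch.sign(model(x_adv)).item()
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        f = model(x_adv)
        if torch.sign(f).item() != orig_sign:
            break  # prediction flipped: minimal perturbation found
        grad = torch.autograd.grad(f, x_adv)[0]
        # Minimal step to the linearized boundary: r = -f(x) * grad / ||grad||^2
        r = -(f.item() / (grad.norm() ** 2 + 1e-12)) * grad
        x_adv = (x_adv + (1 + overshoot) * r).detach()
    return x_adv
```

For an affine model the loop converges in one step; for deep networks the re-linearization at each iterate is what keeps the perturbation small.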

## Development

```shell
# Create a virtual environment
python -m venv .venv
source .venv/bin/activate

# Install deps with dev extras
python -m pip install -e '.[dev]'
```

## License

7 changes: 5 additions & 2 deletions pyproject.toml
@@ -1,9 +1,9 @@
[project]
name = "torchattack"
-version = "0.2.1"
+version = "0.2.2"
description = "A set of adversarial attacks implemented in PyTorch"
authors = [{ name = "spencerwooo", email = "spencer.woo@outlook.com" }]
-requires-python = ">=3.10"
+requires-python = ">=3.10,<3.12"
readme = "README.md"
license = { text = "MIT" }
dependencies = [
@@ -14,6 +14,9 @@ dependencies = [
"rich>=13.3.5",
]

+[project.optional-dependencies]
+dev = ["mypy"]

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
5 changes: 0 additions & 5 deletions requirements.txt

This file was deleted.
