CW efficiency improvement and bug fix; add CW binary search version and early-stopped PGD version; support L0 and Linf for CW and CWBS; rewrite FAB attack; fix MI-FGSM bug; rewrite JSMA #168

Status: Open. 65 commits to be merged into base: master.

Commits
c407e6c
Improve the efficiency of the CW attack and fix an error in the calcu…
rikonaka Nov 12, 2023
64d9d86
Remove some note
rikonaka Nov 12, 2023
7a1ce4a
Fix some mistakes
rikonaka Nov 12, 2023
67a338f
Unify two CW algorithms and reduce unnecessary operations
rikonaka Nov 13, 2023
4bdae87
Change CW and CWBS `abort_early` default value from `False` to `True`
rikonaka Nov 17, 2023
1e82289
Add L0 and Linf for CW and CWBS
rikonaka Nov 19, 2023
ae89b03
Split cw
rikonaka Nov 19, 2023
8aac46b
Fix some class name
rikonaka Nov 19, 2023
c955109
Fix some info
rikonaka Nov 19, 2023
6cb7015
Add readme
rikonaka Nov 19, 2023
4adf68b
Fix CW and CWBS L0 error
rikonaka Nov 19, 2023
628b829
Change some name avoid misunderstood
rikonaka Nov 19, 2023
f89bcda
Rename some var avoid misunderstood
rikonaka Nov 24, 2023
d870493
Fix `other` as ZaberKo suggest
rikonaka Nov 24, 2023
a464809
Clone the attack result avoid pytorch error `Saved intermediate value…
rikonaka Nov 29, 2023
6ba76d4
Move `CWL2` to `CW` as Adversarian suggestion
rikonaka Nov 29, 2023
9762cab
Add parameter type restrictions
rikonaka Dec 2, 2023
894cf2a
Remove type Dict
rikonaka Dec 2, 2023
a93a499
Fix numpy type
rikonaka Dec 2, 2023
f02099e
Add Early-Stopped PGD Version
rikonaka Jan 30, 2024
e1ad5e0
Remove robustbench
rikonaka Jan 30, 2024
5d5550c
Use local model to test
rikonaka Jan 30, 2024
a2575ea
Use local model to test
rikonaka Jan 30, 2024
aa986be
Use local model to test
rikonaka Jan 30, 2024
bbc802c
Use local model to test
rikonaka Jan 30, 2024
0db17c2
Use local model to test
rikonaka Jan 30, 2024
3b81246
Try to fix auto test bug
rikonaka Jan 31, 2024
449b0d5
Try to fix auto test bug
rikonaka Jan 31, 2024
bac7731
Remove test_atk.py `try`, let the program visually expose the problem
rikonaka Jan 31, 2024
1eb3f60
Remove test_atk.py `try`, let the program visually expose the problem
rikonaka Jan 31, 2024
8afcabf
Remove test_atk.py `try`, let the program visually expose the problem
rikonaka Jan 31, 2024
add20d4
Small fix
rikonaka Jan 31, 2024
463b471
Fix PGDES default parameter value
rikonaka Jan 31, 2024
b491f83
Fix PGDES name in info
rikonaka Feb 1, 2024
6381b5d
Add PyTroch 2.1 and 2.2 test
rikonaka Feb 2, 2024
bb32c37
Remove duplicate imports
rikonaka Feb 13, 2024
d614f92
Auto-test small fix
rikonaka Feb 14, 2024
df65dea
Rewrite the EAD algorithm to make the code logically closer to the CW…
rikonaka Feb 14, 2024
66de69b
Rewrite the EAD algorithm to make the code logically closer to the CW…
rikonaka Feb 14, 2024
c08b63e
Fix EADEN name mistake
rikonaka Feb 14, 2024
74ea149
Remove one duplicate line in EAD attack
rikonaka Feb 14, 2024
b761501
Rename PGDES to ESPGD
rikonaka Feb 23, 2024
784b4f4
Rename PGDES to ESPGD
rikonaka Feb 23, 2024
21ae294
Fix some name error
rikonaka Mar 31, 2024
866d14a
Fix some name error
rikonaka Mar 31, 2024
be99cb3
Re-write FAB attack
rikonaka Mar 31, 2024
66a2e13
Add info in readme and fix name mistake
rikonaka Mar 31, 2024
9701743
Fix type error
rikonaka Mar 31, 2024
8dedd74
Fix type error
rikonaka Mar 31, 2024
1ca9fe3
Fix cuda error
rikonaka Mar 31, 2024
5e9c07a
Fix autoattack FAB attack bug
rikonaka Apr 1, 2024
59e5c34
Fix target attack how labels input problems
rikonaka Apr 1, 2024
3c12bbc
Fix target attack how labels input problems
rikonaka Apr 1, 2024
c696144
Add target attack for FAB
rikonaka Apr 1, 2024
1386710
Fix L1 some mistakes
rikonaka Apr 1, 2024
53d35ec
Fix L1 some mistakes
rikonaka Apr 1, 2024
58d55d4
The code on momentum in the original mi-fgsm is complex and lacks cor…
rikonaka Apr 15, 2024
dbb4942
The code on momentum in the original mi-fgsm is complex and lacks cor…
rikonaka Apr 15, 2024
411e41e
Try to fix JSMA huge GPU mem usage
rikonaka Jun 12, 2024
b818ee8
New JSMA attack code v1
rikonaka Jun 23, 2024
8c065ec
New JSMA attack code v2
rikonaka Jun 23, 2024
21053c7
Merge branch 'Harry24k:master' into master
rikonaka Jun 23, 2024
bfc45d9
Fix JSMA targeted attack bug
rikonaka Jun 23, 2024
a40c9a4
NumPy has been updated to version 2, but it seems that many programs …
rikonaka Jun 28, 2024
8e6815c
NumPy has been updated to version 2, but it seems that many programs …
rikonaka Jun 28, 2024
29 changes: 27 additions & 2 deletions .github/workflows/build_coverage.yml
@@ -23,36 +23,61 @@ jobs:
matrix:
os:
- ubuntu-latest
python-version: ["3.7", "3.8", "3.9", "3.10", "3.11"]
python-version: ["3.7", "3.8", "3.9", "3.10", "3.11", "3.12"]
# Bugs: 3.10 will become 3.1 if without quotes -> https://github.com/actions/setup-python/issues/695
# For ubuntu 22.04: https://raw.githubusercontent.com/actions/python-versions/main/versions-manifest.json
pytorch-version: ["1.9.1", "1.10.1", "1.11.0", "1.12.1", "1.13.1", "2.0.1"]
pytorch-version: ["1.9.1", "1.10.1", "1.11.0", "1.12.1", "1.13.1", "2.0.1", "2.1.2", "2.2.0", "2.3.1"]
# 1.5.1, 1.4.0 Model load error in robustbench.
# 1.8.1, 1.7.1, 1.6.0 'padding==same' error in TIFGSM
exclude:
# https://github.com/pytorch/vision#installation
# pytorch 2.3 support python from 3.8 to 3.12
- pytorch-version: "2.3.1"
python-version: "3.7"
# pytorch 2.2 support python from 3.8 to 3.11
- pytorch-version: "2.2.0"
python-version: "3.7"
- pytorch-version: "2.2.0"
python-version: "3.12"
# pytorch 2.1 support python from 3.8 to 3.11
- pytorch-version: "2.1.2"
python-version: "3.7"
- pytorch-version: "2.1.2"
python-version: "3.12"
# pytorch 2.0 support python from 3.8 to 3.11
- pytorch-version: "2.0.1"
python-version: "3.7"
- pytorch-version: "2.0.1"
python-version: "3.12"
# pytorch 1.13 support python from 3.7.2 to 3.10
- pytorch-version: "1.13.1"
python-version: "3.11"
- pytorch-version: "1.13.1"
python-version: "3.12"
# pytorch 1.12 support python from 3.7 to 3.10
- pytorch-version: "1.12.1"
python-version: "3.11"
- pytorch-version: "1.12.1"
python-version: "3.12"
# pytorch 1.11 support python from 3.7 to 3.10
- pytorch-version: "1.11.0"
python-version: "3.11"
- pytorch-version: "1.11.0"
python-version: "3.12"
# pytorch 1.10 support python from 3.6 to 3.9
- pytorch-version: "1.10.1"
python-version: "3.10"
- pytorch-version: "1.10.1"
python-version: "3.11"
- pytorch-version: "1.10.1"
python-version: "3.12"
# pytorch 1.9 support python from 3.6 to 3.9
- pytorch-version: "1.9.1"
python-version: "3.10"
- pytorch-version: "1.9.1"
python-version: "3.11"
- pytorch-version: "1.9.1"
python-version: "3.12"

runs-on: ${{ matrix.os }}

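The exclude matrix in the workflow diff above can be sanity-checked with a short script. The supported-Python ranges below are taken from the comments in the workflow itself (e.g. "pytorch 2.3 support python from 3.8 to 3.12") and should be treated as assumptions rather than authoritative compatibility data:

```python
# Enumerate the CI matrix minus the exclusions, using the per-release
# Python support ranges stated in the workflow comments (assumptions).
supported = {
    "1.9.1": ("3.6", "3.9"),
    "1.10.1": ("3.6", "3.9"),
    "1.11.0": ("3.7", "3.10"),
    "1.12.1": ("3.7", "3.10"),
    "1.13.1": ("3.7", "3.10"),
    "2.0.1": ("3.8", "3.11"),
    "2.1.2": ("3.8", "3.11"),
    "2.2.0": ("3.8", "3.11"),
    "2.3.1": ("3.8", "3.12"),
}
pythons = ["3.7", "3.8", "3.9", "3.10", "3.11", "3.12"]


def minor(version):
    """Minor component of a 'major.minor' version string, as an int."""
    return int(version.split(".")[1])


def valid_combos():
    """All (pytorch, python) pairs the matrix should actually run."""
    combos = []
    for torch_v, (lo, hi) in supported.items():
        for py in pythons:
            if minor(lo) <= minor(py) <= minor(hi):
                combos.append((torch_v, py))
    return combos
```

With the ranges above, the 9 x 6 matrix minus the 19 listed exclusions leaves 35 jobs, matching the workflow.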
28 changes: 14 additions & 14 deletions .gitignore
@@ -1,28 +1,28 @@
.vscode/
__pycache__
.ipynb_checkpoints/*
.ipynb_checkpoints/
debug.log

build/*
_build
dist/*
torchattacks.egg-info/*
data/*
models/*
build/
dist/
torchattacks.egg-info/
data/
*/.*
MENIFEST.in
setup.cfg
_commit.bat
code_coverage/data/

autoattack/*
autoattack/
.pytest_cache/

demo/_*
demo/data/*
demo/models/*
demo/torchdefenses/*
demo/robustbench/*
demo/autoattack/*
demo/data
demo/models
demo/torchdefenses
demo/autoattack

TODO.txt
.vscode/
coverage.xml
.coverage
black.ipynb
23 changes: 12 additions & 11 deletions README.md
@@ -94,10 +94,10 @@ pip install -e .
```
* By label
```python
atk.set_mode_targeted_by_label(quiet=True)
# shift all class loops one to the right, 1=>2, 2=>3, .., 9=>0
target_labels = (labels + 1) % 10
adv_images = atk(images, target_labels)
atk.set_mode_targeted_by_label(target_labels=target_labels, quiet=True)
adv_images = atk(images, labels)
```
* Return to default
```python
@@ -128,12 +128,6 @@ pip install -e .
atk2 = torchattacks.PGD(model, eps=8/255, alpha=2/255, iters=40, random_start=True)
atk = torchattacks.MultiAttack([atk1, atk2])
```
* Binary search for CW
```python
atk1 = torchattacks.CW(model, c=0.1, steps=1000, lr=0.01)
atk2 = torchattacks.CW(model, c=1, steps=1000, lr=0.01)
atk = torchattacks.MultiAttack([atk1, atk2])
```
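The "Binary search for CW" MultiAttack recipe removed above is superseded by the dedicated CWBS classes this PR adds: instead of trying a few fixed constants, a binary search tunes the CW trade-off constant `c` per run. A minimal scalar sketch of that search, with a hypothetical monotone `attack_succeeds(c)` predicate standing in for a full CW run (not the torchattacks API; real CWBS also grows the upper bound when no success has been seen yet):

```python
def binary_search_c(attack_succeeds, c_lo=1e-3, c_hi=100.0, steps=9):
    """Find a small constant c for which the attack still succeeds.

    attack_succeeds(c) is a hypothetical predicate: True when running
    CW with constant c would produce an adversarial example. Larger c
    weights the misclassification term more, so success is monotone in c.
    """
    best = None
    for _ in range(steps):
        c = (c_lo + c_hi) / 2        # midpoint of the current bracket
        if attack_succeeds(c):
            best = c                 # success: try a smaller c next
            c_hi = c
        else:
            c_lo = c                 # failure: a larger c is needed
    return best
```

Smaller successful `c` generally means less distortion, which is why the search shrinks the bracket downward after every success.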
* Random restarts
```python
atk1 = torchattacks.PGD(model, eps=8/255, alpha=2/255, iters=40, random_start=True)
@@ -160,7 +154,7 @@ The distance measure in parentheses.
| **EOTPGD**<br />(Linf) | Comment on "Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network" ([Zimmermann, 2019](https://arxiv.org/abs/1907.00895)) | [EOT](https://arxiv.org/abs/1707.07397)+PGD |
| **APGD**<br />(Linf, L2) | Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks ([Croce et al., 2020](https://arxiv.org/abs/2001.03994)) | |
| **APGDT**<br />(Linf, L2) | Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks ([Croce et al., 2020](https://arxiv.org/abs/2001.03994)) | Targeted APGD |
| **FAB**<br />(Linf, L2, L1) | Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack ([Croce et al., 2019](https://arxiv.org/abs/1907.02044)) | |
| **AFAB**<br />(Linf, L1, L2) | Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack ([Croce et al., 2019](https://arxiv.org/abs/1907.02044)) | |
| **Square**<br />(Linf, L2) | Square Attack: a query-efficient black-box adversarial attack via random search ([Andriushchenko et al., 2019](https://arxiv.org/abs/1912.00049)) | |
| **AutoAttack**<br />(Linf, L2) | Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks ([Croce et al., 2020](https://arxiv.org/abs/2001.03994)) | APGD+APGDT+FAB+Square |
| **DeepFool**<br />(L2) | DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks ([Moosavi-Dezfooli et al., 2016](https://arxiv.org/abs/1511.04599)) | |
@@ -181,8 +175,15 @@ The distance measure in parentheses.
| **EADEN**<br />(L1, L2) | EAD: Elastic-Net Attacks to Deep Neural Networks ([Chen, Pin-Yu, et al., 2018](https://arxiv.org/abs/1709.04114)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
| **PIFGSM (PIM)**<br />(Linf) | Patch-wise Attack for Fooling Deep Neural Network ([Gao, Lianli, et al., 2020](https://arxiv.org/abs/2007.06765)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
| **PIFGSM++ (PIM++)**<br />(Linf) | Patch-wise++ Perturbation for Adversarial Targeted Attacks ([Gao, Lianli, et al., 2021](https://arxiv.org/abs/2012.15503)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
| **CWL0**<br />(L0) | Towards Evaluating the Robustness of Neural Networks ([Carlini N, Wagner D, 2017](https://arxiv.org/abs/1608.04644)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
| **CWLinf**<br />(Linf) | Towards Evaluating the Robustness of Neural Networks ([Carlini N, Wagner D, 2017](https://arxiv.org/abs/1608.04644)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
| **CWBSL0 (Binary Search Version)**<br />(L0) | Towards Evaluating the Robustness of Neural Networks ([Carlini N, Wagner D, 2017](https://arxiv.org/abs/1608.04644)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
| **CWBSL2 (Binary Search Version)**<br />(L2) | Towards Evaluating the Robustness of Neural Networks ([Carlini N, Wagner D, 2017](https://arxiv.org/abs/1608.04644)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
| **CWBSLinf (Binary Search Version)**<br />(Linf) | Towards Evaluating the Robustness of Neural Networks ([Carlini N, Wagner D, 2017](https://arxiv.org/abs/1608.04644)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
| **ESPGD (Early-Stopped PGD Version)**<br />(Linf) | Attacks Which Do Not Kill Training Make Adversarial Learning Stronger ([Zhang, Jingfeng, 2020](https://arxiv.org/abs/2002.11242)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
| **FAB**<br />(Linf) | Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack ([Croce et al., 2019](https://arxiv.org/abs/1907.02044)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
| **FABL1**<br />(L1) | Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack ([Croce et al., 2019](https://arxiv.org/abs/1907.02044)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
| **FABL2**<br />(L2) | Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack ([Croce et al., 2019](https://arxiv.org/abs/1907.02044)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
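The ESPGD entry in the table above stops iterating once an example has crossed the decision boundary, rather than always running the full step budget. A toy scalar sketch of that idea, with hypothetical `grad_sign` and `misclassified` callables standing in for the model (this is not the torchattacks ESPGD API, and the paper's version adds a tolerance on how many extra steps to take after the first misclassification):

```python
def early_stopped_pgd(x, grad_sign, misclassified, alpha=2 / 255,
                      eps=8 / 255, max_steps=40):
    """Toy 1-D early-stopped PGD: take signed gradient steps, project
    back into the Linf eps-ball around the start point, and stop as
    soon as the point is misclassified."""
    x0 = x
    for _ in range(max_steps):
        if misclassified(x):
            break                              # early stop: adversarial
        x = x + alpha * grad_sign(x)           # signed gradient step
        x = min(max(x, x0 - eps), x0 + eps)    # project to the eps-ball
    return x
```

Early stopping is what makes the attack cheap enough for the "friendly adversarial training" setting the cited paper targets.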

## :bar_chart: Performance Comparison

Binary file added code_coverage/images.pth
Binary file added code_coverage/labels.pth
Binary file added code_coverage/resnet18_eval.pth
25 changes: 25 additions & 0 deletions code_coverage/script/pickle_cifar10.py
@@ -0,0 +1,25 @@
import torch
from torchvision import datasets, transforms

transform_test = transforms.Compose([
    transforms.ToTensor(),
])
testset = datasets.CIFAR10(
    root='../data', train=False, download=True, transform=transform_test)
test_loader = torch.utils.data.DataLoader(
    testset, batch_size=10, shuffle=False)


def split(testloader):
    # Save only the first batch as fixed test tensors.
    for (x, y) in testloader:
        torch.save(x, 'images.pth')
        torch.save(y, 'labels.pth')
        break


def main():
    split(test_loader)


if __name__ == '__main__':
    main()
117 changes: 117 additions & 0 deletions code_coverage/script/resnet.py
@@ -0,0 +1,117 @@
import torch
import torch.nn as nn
import torch.nn.functional as F


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_planes, planes, stride=1):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.relu = nn.ReLU()

        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != self.expansion * planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(self.expansion * planes)
            )

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out += self.shortcut(x)
        out = F.relu(out)
        return out


class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, in_planes, planes, stride=1):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, self.expansion * planes, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(self.expansion * planes)
        self.relu = nn.ReLU()

        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != self.expansion * planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(self.expansion * planes)
            )

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        out += self.shortcut(x)
        out = F.relu(out)
        return out


class ResNet(nn.Module):
    def __init__(self, block, num_blocks, num_classes=10):
        super(ResNet, self).__init__()
        self.in_planes = 64

        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
        self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
        self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
        self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
        self.linear = nn.Linear(512 * block.expansion, num_classes)

    def _make_layer(self, block, planes, num_blocks, stride):
        strides = [stride] + [1] * (num_blocks - 1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_planes, planes, stride))
            self.in_planes = planes * block.expansion
        return nn.Sequential(*layers)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = F.avg_pool2d(out, 4)
        out = out.view(out.size(0), -1)
        out = self.linear(out)
        return out


def ResNet18(num_classes=10):
    return ResNet(BasicBlock, [2, 2, 2, 2], num_classes)


def ResNet34():
    return ResNet(BasicBlock, [3, 4, 6, 3])


def ResNet50():
    return ResNet(Bottleneck, [3, 4, 6, 3])


def ResNet101():
    return ResNet(Bottleneck, [3, 4, 23, 3])


def ResNet152():
    return ResNet(Bottleneck, [3, 8, 36, 3])


def test():
    net = ResNet18()
    y = net(torch.randn(1, 3, 32, 32))
    print(y.size())
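As a quick cross-check of the shapes `test()` exercises, the stride schedule in the `ResNet` class above implies the following spatial sizes, reproduced here in stdlib-only arithmetic (BasicBlock has `expansion = 1`):

```python
def feature_map_size(input_size=32, stage_strides=(1, 1, 2, 2, 2)):
    """Spatial size after conv1 and layer1..layer4.

    conv1 has stride 1 and layers 1-4 use strides 1, 2, 2, 2, so a
    32x32 CIFAR-10 image shrinks 32 -> 32 -> 16 -> 8 -> 4.
    """
    size = input_size
    for s in stage_strides:
        size //= s
    return size


def linear_in_features(block_expansion=1):
    """Input width of the final linear layer.

    avg_pool2d(out, 4) collapses the 4x4 map to 1x1, leaving
    512 * expansion channels to flatten.
    """
    assert feature_map_size() == 4
    return 512 * block_expansion
```

For `Bottleneck` variants the same arithmetic holds with `block_expansion=4`, giving 2048 linear input features.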