Adversarial Metric Attack for Person Re-identification

By Song Bai, Yingwei Li, Yuyin Zhou, Qizhu Li, and Philip H.S. Torr.

Introduction

This repository contains the code of the paper Adversarial Metric Attack for Person Re-identification. The paper proposes adversarial metric attack, a methodology parallel to the existing adversarial classification attack: instead of fooling a classifier's prediction, it perturbs input images so as to corrupt the learned distance metric. It can therefore be used to attack metric-based systems such as person re-identification and to generate adversarial examples for them.
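A minimal sketch of the idea, assuming a feature-extraction model and precomputed gallery features (the function name, variable names, and epsilon value below are illustrative and do not correspond to the repository's API): the probe image is perturbed along the gradient sign so that its embedding moves away from the matching gallery features, which degrades the ranking produced by the metric.

import torch

def metric_attack_fgsm(model, probe, gallery_feat, epsilon=8.0 / 255):
    """FGSM-style metric attack (illustrative sketch, not the repository's code)."""
    probe = probe.clone().detach().requires_grad_(True)
    feat = model(probe)                                   # probe embedding
    # The attacked quantity is the metric itself: the feature-space
    # distance between the probe and its matching gallery features.
    loss = torch.norm(feat - gallery_feat, p=2, dim=1).mean()
    loss.backward()
    # One signed-gradient step that *increases* the distance.
    adv = probe + epsilon * probe.grad.sign()
    return adv.clamp(0, 1).detach()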

Prerequisites

  • PyTorch 0.4.1
  • NumPy
  • Python 2.7

How to Run

Attack. For example, to attack a ResNet-50 model trained with the cross-entropy loss:

python Gen_Adv.py \
 --loss_type=soft \
 --name=resnet_50 \
 --save_img \
 --save_fea

With --save_img and --save_fea set, the adversarial images and their features are saved.

Test. To evaluate re-identification performance on the generated adversarial examples:

python evaluate_adv.py \
 --loss_type=soft \
 --name=resnet_50
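For intuition, re-identification evaluation boils down to ranking gallery features by their distance to each (now adversarial) query feature. Below is a minimal rank-1 sketch with illustrative names and a plain Euclidean metric; it is an assumption-laden toy, not the actual protocol implemented in evaluate_adv.py.

import numpy as np

def rank1_accuracy(query_feat, query_id, gallery_feat, gallery_id):
    """Rank-1 accuracy of a nearest-neighbour ranking (illustrative only)."""
    # Pairwise Euclidean distances between query and gallery embeddings.
    dist = np.linalg.norm(query_feat[:, None, :] - gallery_feat[None, :, :], axis=2)
    top1 = gallery_id[dist.argmin(axis=1)]   # identity of the closest gallery image
    return float((top1 == query_id).mean())  # fraction of correct top-1 matches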

Shell script in one trial. Three attack methods are supported: FGSM, I-FGSM, and MI-FGSM (see the sketch after the command below).

sh adv.sh
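For reference, the three variants differ only in how the perturbation is accumulated. The following is a hedged sketch of their update rules on the same metric loss as above, with illustrative names and default values, not the repository's implementation; with steps=1 and momentum=0 it reduces to plain FGSM.

import torch

def iterative_metric_attack(model, probe, gallery_feat,
                            epsilon=8.0 / 255, steps=10, momentum=1.0):
    """I-FGSM (momentum=0) / MI-FGSM (momentum>0) sketch on the re-ID metric."""
    probe = probe.detach()
    alpha = epsilon / steps                    # per-iteration step size
    adv = probe.clone()
    g = torch.zeros_like(adv)                  # accumulated gradient for MI-FGSM
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = torch.norm(model(adv) - gallery_feat, p=2, dim=1).mean()
        loss.backward()
        # MI-FGSM accumulates an L1-normalised gradient with momentum;
        # momentum=0 gives the plain iterative (I-FGSM) update.
        g = momentum * g + adv.grad / adv.grad.abs().sum()
        adv = (adv + alpha * g.sign()).detach()
        # Keep the perturbation inside the epsilon-ball and valid pixel range.
        adv = torch.max(torch.min(adv, probe + epsilon), probe - epsilon)
        adv = adv.clamp(0, 1)
    return adv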

Visualizations

Visualizations of Adversarial Examples

Visualizations of Ranking List

Non-targeted Attack

The ranking list under the non-targeted attack.

Targeted Attack

The ranking list under the targeted attack.

Citation and Contact

If you find the code useful, please cite the following paper:

@article{bai2019adversarial,
  title={Adversarial Metric Attack for Person Re-identification},
  author={Bai, Song and Li, Yingwei and Zhou, Yuyin and Li, Qizhu and Torr, Philip HS},
  journal={arXiv preprint arXiv:1901.10650},
  year={2019}
}

If you encounter any problems or have any inquiries, please contact songbai.site@gmail.com or songbai@robots.ox.ac.uk.