
Adversarial Metric Attack for Person Re-identification

By Song Bai, Yingwei Li, Yuyin Zhou, Qizhu Li, Philip H.S. Torr.

Introduction

This repository contains the code for the paper Adversarial Metric Attack for Person Re-identification. It proposes adversarial metric attack, a methodology parallel to the existing adversarial classification attack. Adversarial metric attack targets metric-based systems such as person re-identification and generates adversarial examples accordingly.
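The core idea can be illustrated with a minimal sketch: an FGSM-style step perturbs a probe image so as to increase its feature-space distance to a matching gallery image. This toy NumPy version uses a hypothetical linear embedding `W` as a stand-in for the deep feature extractor in the paper; it is not the repository's implementation.

```python
import numpy as np

def metric_fgsm(x, x_ref, W, eps=0.05):
    """One FGSM step that increases the feature distance between a probe
    image x and a gallery image x_ref under a linear embedding f(x) = W @ x.
    W is a hypothetical stand-in for a deep feature extractor."""
    diff = W @ x - W @ x_ref          # feature-space difference
    grad = 2.0 * W.T @ diff           # gradient of ||W x - W x_ref||^2 w.r.t. x
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)  # keep pixels in [0, 1]

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))      # toy embedding matrix
x = rng.random(16)                    # probe "image"
x_ref = rng.random(16)                # matching gallery "image"
x_adv = metric_fgsm(x, x_ref, W)

d_clean = np.linalg.norm(W @ x - W @ x_ref)
d_adv = np.linalg.norm(W @ x_adv - W @ x_ref)
```

After the step, `d_adv` exceeds `d_clean`, i.e. the perturbed probe has been pushed away from its true match in feature space, which is exactly what degrades a metric-based retrieval system.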

Prerequisites

  • PyTorch 0.4.1
  • NumPy
  • Python 2.7

How to Run

Attack. For example, to attack a ResNet-50 model trained with the cross-entropy loss:

```shell
python Gen_Adv.py \
    --loss_type=soft \
    --name=resnet_50 \
    --save_img \
    --save_fea
```

It will save the adversarial images and features.

Test. Evaluate on the generated adversarial examples:

```shell
python evaluate_adv.py \
    --loss_type=soft \
    --name=resnet_50
```
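Evaluation in person re-identification is typically retrieval-based. As a rough illustration of what such a script measures, here is a toy rank-1 accuracy computation in NumPy; the function name and data are hypothetical and this is not the repository's evaluation code.

```python
import numpy as np

def rank1_accuracy(query_feats, query_ids, gallery_feats, gallery_ids):
    """Fraction of queries whose nearest gallery feature (Euclidean
    distance) shares the query's identity. Toy sketch, not evaluate_adv.py."""
    hits = 0
    for feat, qid in zip(query_feats, query_ids):
        dists = np.linalg.norm(gallery_feats - feat, axis=1)  # distance to every gallery item
        hits += int(gallery_ids[int(np.argmin(dists))] == qid)
    return hits / float(len(query_ids))

# Two well-separated identities in a 2-D toy feature space.
gallery_feats = np.array([[0.0, 0.0], [10.0, 10.0]])
gallery_ids = [0, 1]
query_feats = np.array([[0.5, -0.2], [9.6, 10.3]])
acc = rank1_accuracy(query_feats, [0, 1], gallery_feats, gallery_ids)
```

A successful metric attack drives this number down by moving adversarial query features away from their true gallery matches.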

Run everything in one shell script. Three attack methods are supported: FGSM, I-FGSM, and MI-FGSM.

```shell
sh adv.sh
```
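The difference between the two iterative variants is a momentum term accumulated across steps. A minimal NumPy sketch, where `grad_fn` is a generic gradient oracle standing in for the metric-loss gradient (this is an illustration under those assumptions, not the repository's attack code):

```python
import numpy as np

def iterative_fgsm(x, grad_fn, eps=0.1, steps=5, mu=0.0):
    """I-FGSM when mu == 0; MI-FGSM when mu > 0 (momentum across steps).
    grad_fn(z) must return the gradient of the attack loss w.r.t. z."""
    alpha = eps / steps                         # per-step budget
    x_adv = x.copy()
    g = np.zeros_like(x)
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)      # L1-normalized momentum
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)  # stay in eps-ball
    return np.clip(x_adv, 0.0, 1.0)

# Toy attack loss: maximize squared distance to a target feature c.
c = np.full(4, 0.5)
x = np.array([0.3, 0.6, 0.45, 0.7])
x_adv = iterative_fgsm(x, lambda z: 2.0 * (z - c), eps=0.1, steps=5, mu=1.0)
```

Setting `mu=0.0` recovers plain I-FGSM; a single step with `steps=1` corresponds to FGSM. The final clip keeps the perturbation within the eps-ball around the clean input, so the adversarial image stays visually similar.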

Visualizations

Visualizations of Adversarial Examples

Visualizations of Ranking List

Non-targeted Attack

The ranking list of non-targeted attack.

Targeted Attack

The ranking list of targeted attack.

Citation and Contact

If you find the code useful, please cite the following paper:

```bibtex
@article{bai2019adversarial,
  title={Adversarial Metric Attack for Person Re-identification},
  author={Bai, Song and Li, Yingwei and Zhou, Yuyin and Li, Qizhu and Torr, Philip HS},
  journal={arXiv preprint arXiv:1901.10650},
  year={2019}
}
```

If you encounter any problems or have any inquiries, please contact songbai.site@gmail.com or songbai@robots.ox.ac.uk.
