Adversarially Robust Distillation (ARD): PyTorch implementation

This repository contains PyTorch code for the ARD method from "Adversarially Robust Distillation" by Micah Goldblum, Liam Fowl, Soheil Feizi, and Tom Goldstein.

Adversarially Robust Distillation is a method for transferring robustness from a robust teacher network to a student network during knowledge distillation. In our experiments, small ARD student models outperform adversarially trained models with identical architecture.
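
For readers who want a feel for the training objective before digging into main.py, below is a minimal PyTorch sketch of an ARD-style loss. The function name ard_loss_sketch, the temperature temp, the mixing weight alpha, and the PGD hyperparameters are illustrative assumptions and not necessarily the settings used in this repository.

import torch
import torch.nn.functional as F

def ard_loss_sketch(student, teacher, x, y, temp=30.0, alpha=1.0,
                    epsilon=8/255, step_size=2/255, steps=10):
    # Illustrative ARD-style objective (hypothetical names and hyperparameters):
    # the student is trained so that its predictions on adversarial inputs
    # match the robust teacher's predictions on the corresponding clean inputs.
    teacher.eval()
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / temp, dim=1)  # teacher sees clean data

    # Inner maximization: craft adversarial examples against the distillation (KL) loss.
    x_adv = x.detach() + torch.empty_like(x).uniform_(-epsilon, epsilon)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(student(x_adv) / temp, dim=1),
                      teacher_probs, reduction='batchmean')
        grad = torch.autograd.grad(kl, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)

    # Outer minimization: distill from the teacher on adversarial inputs,
    # optionally mixed with a standard cross-entropy term on clean inputs.
    kl_term = F.kl_div(F.log_softmax(student(x_adv) / temp, dim=1),
                       teacher_probs, reduction='batchmean') * (temp ** 2)
    ce_term = F.cross_entropy(student(x), y)
    return alpha * kl_term + (1.0 - alpha) * ce_term

In this sketch, the inner attack maximizes the same KL divergence that the outer step minimizes, so the student is pushed to match the robust teacher even on worst-case perturbed inputs.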

Prerequisites

  • Python3
  • PyTorch
  • CUDA

Run

Here is an example of how to run our program:

$ python main.py --teacher_path INSERT-YOUR-TEACHER-PATH

Want to attack ARD?

A MobileNetV2 ARD model distilled from a TRADES WideResNet (34-10) teacher on CIFAR-10 can be found here.
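
To measure the robust accuracy of a distilled checkpoint, a standard L-infinity PGD evaluation such as the sketch below can be used. The attack hyperparameters (epsilon = 8/255, 20 steps) and the surrounding data-loading code are assumptions, not values prescribed by this repository.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, step_size=2/255, steps=20):
    # Standard L-infinity PGD attack with random start (illustrative hyperparameters).
    x_adv = x.detach() + torch.empty_like(x).uniform_(-epsilon, epsilon)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()

# Example usage (assumes `model` and a CIFAR-10 test `loader` are already built):
# correct = 0
# for x, y in loader:
#     x_adv = pgd_attack(model, x, y)
#     correct += (model(x_adv).argmax(dim=1) == y).sum().item()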
