Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models

A repository for understanding the intrinsic robustness limits of robust learning against adversarial examples. Created by Xiao Zhang and Jinghui Chen. Link to the arXiv paper.

The goals of this project are to:

  1. Theoretically, derive an intrinsic robustness bound with respect to L2 perturbations for any input distribution that can be captured by a conditional generative model.

  2. Empirically, evaluate the intrinsic robustness bound for various synthetically generated image distributions, and compare it with the robustness of state-of-the-art robust classifiers.
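The flavor of the theoretical bound can be illustrated numerically. As a rough sketch (not the paper's exact statement): if the data distribution is the image of a standard Gaussian latent under an L-Lipschitz generator G, the Gaussian isoperimetric inequality lower-bounds the adversarial risk of any classifier with in-distribution risk alpha, which in turn upper-bounds the achievable robustness. All numbers below are illustrative.

```python
from statistics import NormalDist

_N = NormalDist()  # standard normal CDF / quantile function

def intrinsic_robustness_upper_bound(alpha, eps, lip):
    """Sketch of an isoperimetry-style intrinsic robustness bound.

    alpha: in-distribution (standard) risk of the classifier
    eps:   L2 perturbation budget in image space
    lip:   Lipschitz constant of the generator G

    An eps-ball in image space pulls back to an (eps/lip)-ball in latent
    space, so the error region's Gaussian measure expands from alpha to
    at least Phi(Phi^{-1}(alpha) + eps/lip); robustness is one minus that.
    """
    adv_risk_lower_bound = _N.cdf(_N.inv_cdf(alpha) + eps / lip)
    return 1.0 - adv_risk_lower_bound

# A larger Lipschitz constant weakens the bound (more robustness is possible).
print(intrinsic_robustness_upper_bound(alpha=0.01, eps=3.0, lip=10.0))
print(intrinsic_robustness_upper_bound(alpha=0.01, eps=3.0, lip=1.0))
```

This is only meant to convey how the bound couples the perturbation budget to the generator's Lipschitz constant; see the paper for the precise statement and conditions.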

Installation

The code was developed using Python 3 on Anaconda.

  • Install PyTorch 1.1.0:

    conda install pytorch==1.1.0 torchvision==0.3.0 -c pytorch
    
  • Install other dependencies:

    pip install -r requirements.txt
    

Examples for MNIST Experiments

  1. Train an ACGAN model using the original MNIST dataset:

    python build_generator_mnist.py --gan-type ACGAN --mode train
    
  2. Estimate the local Lipschitz constant and reconstruct the MNIST dataset using ACGAN:

    python build_generator_mnist.py --gan-type ACGAN --mode evaluate
    
    python build_generator_mnist.py --gan-type ACGAN --mode reconstruct
    
  3. Train various robust classifiers under L2 perturbations on the generated MNIST dataset:

    cd train_classifiers && python train_mnist.py --method zico
    
  4. Evaluate unconstrained and/or in-distribution robustness for the trained classifiers:

    python test_robustness_mnist.py --method madry --robust-type in
    
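The local Lipschitz estimation in step 2 can be approximated by sampling: perturb the latent code and take the largest observed output-to-input distance ratio. A minimal sketch, assuming a conditional generator callable as `G(z, label)` (the repository's actual estimator may differ):

```python
import torch

def estimate_local_lipschitz(G, z, label, n_samples=100, radius=0.1):
    """Monte-Carlo estimate of the local Lipschitz constant of a
    conditional generator G around latent code z (illustrative sketch).

    Samples random latent perturbations of fixed L2 norm `radius` and
    returns the largest ratio ||G(z+d) - G(z)|| / ||d|| observed.
    """
    with torch.no_grad():
        base = G(z, label).flatten(1)
        best = 0.0
        for _ in range(n_samples):
            delta = torch.randn_like(z)
            delta = radius * delta / delta.norm()   # fixed-norm perturbation
            out = G(z + delta, label).flatten(1)
            ratio = ((out - base).norm() / delta.norm()).item()
            best = max(best, ratio)
    return best
```

For a linear generator the estimate recovers the true operator behavior exactly; for a real GAN it is only a lower bound on the local Lipschitz constant, tightened by more samples.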

Examples for ImageNet10 Experiments

  1. Download the pretrained BigGAN model, then estimate the local Lipschitz constant and reconstruct ImageNet10:

    python build_generator_imagenet.py --mode evaluate
    
    python build_generator_imagenet.py --mode reconstruct
    
  2. Train various robust classifiers under L2 perturbations on the generated ImageNet10:

    cd train_classifier && python train_imagenet.py --method trades
    
  3. Evaluate unconstrained and/or in-distribution robustness for the trained classifiers:

    python test_robustness_imagenet.py --method trades --robust-type unc
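The `--robust-type` flag distinguishes unconstrained attacks (arbitrary L2 perturbations in image space) from in-distribution attacks, which search over latent codes so the adversarial example stays on the generator's manifold. A hedged sketch of the in-distribution case, assuming a conditional generator `G(z, label)` and classifier `f` (names and optimizer choice are illustrative, not the repository's implementation):

```python
import torch
import torch.nn.functional as F

def in_distribution_attack(G, f, z, label, eps=1.0, steps=20, lr=0.05):
    """Search latent space for an on-manifold adversarial example:
    maximize the classification loss of f on G(z'), stopping once the
    generated image leaves the L2 ball of radius eps around G(z)."""
    x_clean = G(z, label).detach()
    z_adv = z.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z_adv], lr=lr)
    for _ in range(steps):
        loss = -F.cross_entropy(f(G(z_adv, label)), label)  # ascend the loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Simplest budget handling for a sketch: stop once exceeded.
            if (G(z_adv, label) - x_clean).flatten(1).norm() > eps:
                break
    return G(z_adv, label).detach()
```

An unconstrained attack would instead perturb the image pixels directly (e.g. PGD on `x`), which generally finds smaller-margin adversarial examples than the latent-space search.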
    

What is in this Repository?

  • Folder generative, including:

    • src: folder that contains functions for building BigGAN
    • acgan.py, gan.py: functions for training MNIST generative models
    • biggan.py: neural network architecture for BigGAN generator
    • utils.py: auxiliary functions for generative models
  • Folder train_classifier, including:

    • adv_loss.py: adversarial loss functions for Madry and TRADES
    • attack.py: functions for generating unc/in-dist adversarial examples
    • problem.py: define datasets, dataloaders and model architectures
    • trainer.py: implements the training and evaluation functions for the different methods
    • train_mnist.py, train_imagenet.py: main functions for training classifiers on generated MNIST and ImageNet10
  • build_generator_mnist.py, build_generator_imagenet.py: main functions for generating image datasets

  • test_robustness_mnist.py, test_robustness_imagenet.py: main functions for evaluating adversarial robustness
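Of the two objectives in adv_loss.py, TRADES is the less standard: it trades natural accuracy against robustness by adding a KL-divergence term that pushes the prediction on the adversarial input toward the prediction on the clean input. A hedged sketch of that objective (standard TRADES formulation, not the repository's code; the inner maximization producing `x_adv` is assumed done elsewhere):

```python
import torch
import torch.nn.functional as F

def trades_loss(model, x, y, x_adv, beta=6.0):
    """TRADES objective (sketch): natural cross-entropy plus a
    beta-weighted KL divergence between the adversarial and clean
    predictive distributions."""
    logits_nat = model(x)
    logits_adv = model(x_adv)
    natural = F.cross_entropy(logits_nat, y)
    robust = F.kl_div(F.log_softmax(logits_adv, dim=1),
                      F.softmax(logits_nat, dim=1),
                      reduction='batchmean')
    return natural + beta * robust
```

The Madry objective is the special case of pure adversarial training: cross-entropy evaluated on `x_adv` alone, with no clean-prediction regularizer.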
