Creating and defending against adversarial examples

Milkigit/adversarial

 
 


adversarial

This repository contains PyTorch code to create and defend against adversarial attacks.

See this Medium article for a discussion on how to use and defend against the projected gradient attack.

Example adversarial attack created using this repo.

PGD Attack

Cool fact: adversarially trained discriminative (not generative!) models can be used to interpolate between classes by crafting large-epsilon adversarial examples against them.

MNIST Class Interpolation
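
The interpolation trick above amounts to running a targeted projected gradient descent attack with an unusually large epsilon budget, so the input is pushed a long way toward the target class. Here is a minimal sketch in plain PyTorch; the function name `targeted_pgd` and its parameters are illustrative, not the repo's actual `adversarial.functional` API:

```python
import torch
import torch.nn.functional as F

def targeted_pgd(model, x, target, eps, step, n_steps):
    """Targeted PGD: descend the loss for `target`, projecting each
    iterate back onto an L-inf ball of radius eps around the input."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad, = torch.autograd.grad(loss, x_adv)
        # Step toward the target class (minus sign: minimise its loss)
        x_adv = x_adv.detach() - step * grad.sign()
        # Project onto the eps-ball, then onto the valid pixel range
        x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

With a small eps this yields an ordinary targeted attack; with a large eps against an adversarially trained model, the iterates visibly morph toward the target class, which is what the interpolation figure shows.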

Contents

  • A Jupyter notebook demonstrating how to use and defend against the projected gradient attack (see notebooks/)

  • adversarial.functional contains functional-style implementations of a few different types of adversarial attacks

    • Fast Gradient Sign Method - white box - batch implementation
    • Projected Gradient Descent - white box - batch implementation
    • Local-search attack - black box, score-based - single image
    • Boundary attack - black box, decision-based - single image
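
To give a feel for the simplest attack in the list, here is a minimal batch sketch of the Fast Gradient Sign Method in plain PyTorch; this is an illustrative implementation, not the exact signature used in `adversarial.functional`:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method (white box, batched): take a single
    step of size eps in the direction of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

PGD can be viewed as this same step applied iteratively with a projection back onto the epsilon-ball, which is why FGSM is often used as the cheap baseline.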

Setup

Requirements

Listed in requirements.txt. Install with pip install -r requirements.txt, preferably inside a virtualenv.

Tests (optional)

Run pytest in the root directory to run all tests.
