pokaxpoka/deep_Mahalanobis_detector

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

This project is for the paper "A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks". Parts of the code are adapted from odin-pytorch, LID, and adversarial_image_defenses.

Preliminaries

The code is tested under Ubuntu Linux 16.04.1 with Python 3.6, and requires the PyTorch package to be installed.

Downloading Out-of-Distribution Datasets

We use the download links for two out-of-distribution datasets from odin-pytorch:

Please place them in ./data/.

Downloading Pre-trained Models

We provide six pre-trained neural networks: (1) three DenseNets trained on CIFAR-10, CIFAR-100, and SVHN, where the CIFAR-10 and CIFAR-100 models are from odin-pytorch, and (2) three ResNets trained on CIFAR-10, CIFAR-100, and SVHN.

Please place them in ./pre_trained/.

Detecting Out-of-Distribution Samples (Baseline and ODIN)

# model: ResNet, in-distribution: CIFAR-10, gpu: 0
python OOD_Baseline_and_ODIN.py --dataset cifar10 --net_type resnet --gpu 0
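For reference, the baseline method scores a sample by its maximum softmax probability, and ODIN sharpens this with temperature scaling (ODIN's input-perturbation step is omitted in this sketch). A minimal NumPy illustration — the function names and the temperature value below are illustrative, not taken from the script:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def baseline_score(logits):
    """Baseline: maximum softmax probability of each sample."""
    return softmax(logits).max(axis=-1)

def odin_score(logits, temperature=1000.0):
    """ODIN-style confidence: max softmax after temperature scaling."""
    return softmax(logits, temperature).max(axis=-1)

# A confident prediction vs. a near-uniform one.
logits = np.array([[10.0, 2.0, 1.0], [3.0, 2.9, 2.8]])
print(baseline_score(logits))
print(odin_score(logits))
```

Samples with low scores are flagged as out-of-distribution by thresholding.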

Detecting Out-of-Distribution Samples (Mahalanobis detector)

1. Extract detection characteristics:

# model: ResNet, in-distribution: CIFAR-10, gpu: 0
python OOD_Generate_Mahalanobis.py --dataset cifar10 --net_type resnet --gpu 0
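The detection characteristic is the Mahalanobis distance from a test feature to the closest class-conditional Gaussian, where the Gaussians share a tied covariance estimated from training features. A minimal NumPy sketch of the score (function names and the regularization constant are illustrative; the script computes this per network layer):

```python
import numpy as np

def fit_gaussians(features, labels, num_classes):
    """Fit class-conditional Gaussians with a tied (shared) covariance."""
    means = np.stack([features[labels == c].mean(axis=0)
                      for c in range(num_classes)])
    centered = features - means[labels]
    cov = centered.T @ centered / len(features)
    precision = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return means, precision

def mahalanobis_confidence(x, means, precision):
    """Confidence = negative Mahalanobis distance to the closest class mean."""
    diffs = x[None, :] - means                       # (C, D)
    d2 = np.einsum('cd,de,ce->c', diffs, precision, diffs)
    return -d2.min()

# Toy features: two well-separated classes in 2-D.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
                   rng.normal(5.0, 1.0, size=(200, 2))])
labels = np.repeat([0, 1], 200)
means, precision = fit_gaussians(feats, labels, num_classes=2)
print(mahalanobis_confidence(np.zeros(2), means, precision))
```

Points far from every class mean get a very negative confidence, which marks them as out-of-distribution.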

2. Train simple detectors:

# model: ResNet
python OOD_Regression_Mahalanobis.py --net_type resnet
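The "simple detector" is a logistic regression over the per-layer confidence scores extracted in step 1. A self-contained NumPy sketch using a tiny gradient-descent logistic regression (the actual script's interface and hyperparameters may differ):

```python
import numpy as np

def train_logistic_detector(scores, labels, lr=0.1, epochs=500):
    """Combine per-layer confidence scores into one in/out decision.
    scores: (N, L) array, one column per layer; labels: 1=in-dist, 0=OOD."""
    X = np.hstack([scores, np.ones((len(scores), 1))])  # bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - labels) / len(labels)      # gradient step
    return w

def predict(scores, w):
    X = np.hstack([scores, np.ones((len(scores), 1))])
    return 1.0 / (1.0 + np.exp(-X @ w)) > 0.5

# Toy data: in-distribution samples score high on 3 layers, OOD score low.
rng = np.random.default_rng(1)
scores = np.vstack([rng.normal(2.0, 0.5, size=(100, 3)),
                    rng.normal(-2.0, 0.5, size=(100, 3))])
labels = np.concatenate([np.ones(100), np.zeros(100)])
w = train_logistic_detector(scores, labels)
print((predict(scores, w) == labels).mean())
```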

Detecting Adversarial Samples (LID & Mahalanobis detector)

0. Generate adversarial samples:

# model: ResNet, in-distribution: CIFAR-10, adversarial attack: FGSM, gpu: 0
python ADV_Samples.py --dataset cifar10 --net_type resnet --adv_type FGSM --gpu 0
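FGSM perturbs each input pixel by a fixed step in the direction of the sign of the loss gradient. The script computes gradients with PyTorch; the toy sketch below uses an analytic gradient of a squared loss on a linear model just to illustrate the update rule (all names and values are illustrative):

```python
import numpy as np

def fgsm(x, grad, eps):
    """FGSM update: step eps along the sign of the loss gradient,
    then clip back to the valid pixel range [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy "model": squared loss of a linear score, gradient computed by hand.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.5, 0.5, 0.5])
y = 0.0
grad = (w @ x - y) * w          # d/dx of 0.5 * (w.x - y)^2
x_adv = fgsm(x, grad, eps=0.1)
print(x_adv)
```

The perturbation is bounded by eps per pixel while the loss strictly increases, which is what makes the sample adversarial.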

1. Extract detection characteristics:

# model: ResNet, in-distribution: CIFAR-10, adversarial attack: FGSM, gpu: 0
python ADV_Generate_LID_Mahalanobis.py --dataset cifar10 --net_type resnet --adv_type FGSM --gpu 0
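Besides the Mahalanobis score, this step extracts LID (Local Intrinsic Dimensionality) characteristics: adversarial samples tend to lie in regions of higher intrinsic dimensionality. A minimal NumPy sketch of the maximum-likelihood LID estimator over the k nearest neighbours (function name and k are illustrative):

```python
import numpy as np

def lid_mle(x, reference, k=20):
    """Maximum-likelihood LID estimate of x from distances to its
    k nearest neighbours in `reference` (one feature vector per row)."""
    dists = np.sort(np.linalg.norm(reference - x, axis=1))
    r = dists[:k]
    r = r[r > 0]                    # drop x itself if present in reference
    return -1.0 / np.mean(np.log(r / r[-1]))

# Sanity check: points on a 1-D line embedded in 5-D should have a much
# lower LID estimate than points filling all 5 dimensions.
rng = np.random.default_rng(0)
cloud = rng.uniform(size=(2000, 5))               # fills 5 dimensions
line = rng.uniform(size=(2000, 1)) * np.ones(5)   # lies on a 1-D line
print(lid_mle(cloud[0], cloud), lid_mle(line[0], line))
```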

2. Train simple detectors:

# model: ResNet
python ADV_Regression.py --net_type resnet
