
2. Implemented Methods

Jingkang Yang edited this page Aug 21, 2022 · 1 revision

Overview

All the supported methodologies can be placed in the following four categories.

density   reconstruction   classification   distance

We also mark supported methodologies with the following tags if they have special designs in the corresponding steps, compared to the standard classifier training process.

preprocess   extradata   training   postprocess

Anomaly Detection

DeepSVDD (ICML'18)

distance     training   postprocess

Deep One-Class Classification

paper   code

Method Description

  • Pretrain: During this stage, we pretrain a deep convolutional autoencoder (DCAE) for anomaly detection.
  • Train: During this stage, we first load the pretrained DCAE into the network, then train the model. The output includes the network and two hyperparameters, C and R, which represent the center and radius of the hypersphere.
  • Test: During this stage, we evaluate the method with the AUROC metric.

OpenOOD Implementation

  • train_ad_pipeline.py: Pretrain the DCAE
  • train_dsvdd_pipeline.py: Train and finally test our model
  • dsvdd_net.py: Define the DCAE and DSVDD networks
  • dsvdd_trainer.py: Trainer for DCAE and DSVDD
  • dsvdd_evaluator.py: Evaluator for DCAE and DSVDD

Script

  • pretrain DCAE: sh scripts/a_anomaly/0_dsvdd_pretrain.sh
  • train DSVDD: sh scripts/a_anomaly/0_dsvdd_train.sh

Result

  • Note: In the original DSVDD code, the training dataset is normalized with special means and stds. For example, when the normal dataset is cifar10-3, the normalization dict is [-31.7975, -31.7975, -31.7975], [42.8907, 42.8907, 42.8907]. Furthermore, a global_contrast_normalization method is used in the transform. The expected results are shown below.
| Method | DCAE | DSVDD |
|---|---|---|
| Normal class | 3 | 3 |
| Expected AUROC | 58.40 | 59.10 |
| AUROC | 63.43 | 60.44 |
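The global_contrast_normalization mentioned in the note above can be sketched as follows; the `scale` switch between L1 and L2 norms is an assumption for illustration:

```python
import numpy as np

def global_contrast_normalization(x, scale="l1"):
    """Subtract the per-image mean, then rescale by the chosen global norm."""
    x = x - x.mean()
    if scale == "l1":
        norm = np.abs(x).mean()          # mean absolute deviation
    else:
        norm = np.sqrt((x ** 2).mean())  # root-mean-square deviation
    return x / norm

# Toy 2x2 "image": after GCN the mean is 0 and the L1 norm is 1.
img = np.array([[1.0, 2.0], [3.0, 6.0]])
out = global_contrast_normalization(img, scale="l1")
```

Because GCN shifts pixel values far outside [0, 1], the dataset means and stds quoted in the note differ markedly from the usual CIFAR statistics.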

KDAD (arXiv'20)

distance     training  postprocess

Multiresolution Knowledge Distillation for Anomaly Detection

paper   code    

Overview:

  • Train: During the training stage, we introduce two VGG networks, one of which, called the source network, is pretrained. In each training epoch, ID data is fed in, the differences between the clone network and the source network at selected layers are measured, and a loss is computed. The clone network is optimized with SGD.
  • Test: During the testing stage, we evaluate the anomaly-detection method with AUROC.
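The per-layer discrepancy described above can be sketched as follows. The paper combines a value term (MSE) with a direction term (cosine distance) over selected activation layers; the `lam` weighting and the toy feature shapes here are illustrative assumptions:

```python
import numpy as np

def kdad_distillation_loss(source_feats, clone_feats, lam=0.5):
    """Sum value (MSE) and direction (cosine) discrepancies over layers."""
    total = 0.0
    for s, c in zip(source_feats, clone_feats):
        mse = np.mean((s - c) ** 2)
        cos = np.dot(s.ravel(), c.ravel()) / (
            np.linalg.norm(s) * np.linalg.norm(c) + 1e-8)
        total += mse + lam * (1.0 - cos)
    return total

# Two hypothetical activation layers from the pretrained source network.
src = [np.ones((2, 2)), np.full((2, 2), 0.5)]
loss_same = kdad_distillation_loss(src, src)                  # clone matches
loss_diff = kdad_distillation_loss(src, [f + 1.0 for f in src])  # clone drifts
```

At test time the same discrepancy, computed on an input, serves as its anomaly score: anomalous inputs push the clone's activations away from the source's.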

Keypoints:

  • train_ad_pipeline.py: training stage
  • ad_test_pipeline.py: testing stage
  • kdad_trainer.py: trainer
  • kdad_evaluator.py: evaluator
  • vggnet.py: source and clone network
  • kdad_recorder.py: recorder
  • kdad_losses.py: define loss function

Script

  • kdad_train: sh scripts/a_anomaly/1_kdad_train.sh
  • kdad_detection_test: sh scripts/a_anomaly/1_kdad_test_det.sh

Result

| Normal class | 3 |
|---|---|
| Expected AUROC | 77.02 |
| AUROC | 86.08 |

CutPaste (CVPR'21)

density       preprocess   postprocess

Title: CutPaste: Self-Supervised Learning for Anomaly Detection and Localization

paper   code


PatchCore (arXiv'21)

distance       training   postprocess

Title: Towards Total Recall in Industrial Anomaly Detection

paper   code


DRAEM (ICCV'21)

reconstruction       preprocess   training   postprocess

Title: A Discriminatively Trained Reconstruction Embedding for Surface Anomaly Detection

paper   code

Method Description

  • Overview: DRAEM stands for Discriminatively trained Reconstruction Anomaly Embedding Model.
  • Model Architecture: DRAEM is composed of a reconstructive sub-network and a discriminative sub-network.
    • The reconstructive sub-network is an encoder-decoder architecture that converts the local patterns of an input image into patterns closer to the distribution of normal samples.
    • The discriminative sub-network uses a U-Net-like architecture. Its input is the channel-wise concatenation of the reconstructive sub-network's output and the original input image.
  • Training: DRAEM also introduces a method to generate anomalous images for training, which combines noise images from a Perlin noise generator with various anomaly sources.
  • Inference: The discriminative sub-network outputs an anomaly map, from which the image-level anomaly score is derived.
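The anomaly-generation step above can be sketched as follows. This is a minimal stand-in: thresholded random noise replaces the Perlin generator, and the `beta` opacity blend and toy images are illustrative assumptions:

```python
import numpy as np

def synthesize_anomaly(image, source, noise, threshold=0.5, beta=0.6):
    """Blend an anomaly-source texture into `image` where noise > threshold."""
    mask = (noise > threshold).astype(image.dtype)   # binary anomaly mask
    blended = beta * image + (1.0 - beta) * source   # opacity-blended texture
    augmented = image * (1.0 - mask) + blended * mask
    return augmented, mask

rng = np.random.default_rng(0)
img = np.full((4, 4), 0.2)        # toy "normal" image
src = np.full((4, 4), 0.9)        # toy anomaly-source texture
noise = rng.random((4, 4))        # stand-in for Perlin noise
aug, mask = synthesize_anomaly(img, src, noise)
```

The returned mask doubles as the ground-truth segmentation target for the discriminative sub-network, so no real anomalies are needed during training.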

OpenOOD Implementation

  • train_ad_pipeline.py: train pipeline used for DRAEM
  • test_ad_pipeline.py: test pipeline used for DRAEM
  • draem_preprocessor.py: preprocessor for DRAEM, including the new augmentation method
  • draem_networks.py: define both sub-networks of DRAEM
  • draem_loss.py: define the loss functions needed for training DRAEM
  • draem_evaluator.py: define the evaluation method, evaluated on both good samples and anomalous samples

Script

sh code

Result

| Class | AUROC | AP |
|---|---|---|
| bottle | 99.2 / 99.1 | 90.7 / 86.5 |
| carpet | 97.0 / 95.5 | 63.8 / 53.5 |
| leather | 97.9 / 98.6 | 70.2 / 75.3 |

Open Set Recognition

OpenMax (CVPR'16)

classification paper   code

ARPL (TPAMI'21)

distance paper   code

OpenGAN (ICCV'21)

classification paper   code


Out-of-Distribution Detection

MSP (ICLR'17)

classification paper   code

ODIN (ICLR'18)

classification paper   code

MDS (NeurIPS'18)

classification paper   code

ConfBranch (arXiv'18)

classification paper   code

G-ODIN (CVPR'20)

classification paper   code

Gram (ICML'20)

DUQ (ICML'20)

CSI (NeurIPS'20)

EBO (NeurIPS'20)

MOS (CVPR'21)

GradNorm (NeurIPS'21)

ReAct (NeurIPS'21)

VOS (ICLR'22)

VIM (CVPR'22)

SEM (arXiv'22)

MLS (arXiv'22)