[CVPR 2023 Workshop] VAND Challenge: 1st Place on Zero-shot AD and 4th Place on Few-shot AD

Xuhai Chen · Yue Han · Jiangning Zhang

This repository contains the official PyTorch implementation of the Zero-/Few-shot Anomaly Classification and Segmentation method used in the CVPR 2023 VAND Challenge, which can be viewed as an improved version of WinCLIP. We won first place in the Zero-shot Track and received an Honorable Mention (fourth place) in the Few-shot Track.

Figure: model structure.

Figure: results on the challenge official test set.

Installation

  • Prepare the experimental environment

    pip install -r requirements.txt
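
A minimal setup might look like the following; using a virtual environment is our suggestion, not a requirement of the repository:

    # optional: isolate dependencies in a virtual environment
    python -m venv .venv
    source .venv/bin/activate
    # install the pinned dependencies
    pip install -r requirements.txt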

Dataset Preparation

MVTec AD

  • Download and extract MVTec AD into data/mvtec
  • Run python data/mvtec.py to obtain data/mvtec/meta.json
data
├── mvtec
    ├── meta.json
    ├── bottle
        ├── train
            ├── good
                ├── 000.png
        ├── test
            ├── good
                ├── 000.png
            ├── anomaly1
                ├── 000.png
        ├── ground_truth
            ├── anomaly1
                ├── 000.png
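
For reference, the two steps above might be scripted as follows; the archive name mvtec_anomaly_detection.tar.xz is an assumption based on the official MVTec AD release and may differ for your download:

    mkdir -p data/mvtec
    # extract the downloaded archive into data/mvtec (archive name assumed)
    tar -xf mvtec_anomaly_detection.tar.xz -C data/mvtec
    # build data/mvtec/meta.json for the training/testing scripts
    python data/mvtec.py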

VisA

  • Download and extract VisA into data/visa
  • Run python data/visa.py to obtain data/visa/meta.json
data
├── visa
    ├── meta.json
    ├── candle
        ├── Data
            ├── Images
                ├── Anomaly
                    ├── 000.JPG
                ├── Normal
                    ├── 0000.JPG
            ├── Masks
                ├── Anomaly
                    ├── 000.png
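
Analogously for VisA; the archive name VisA_20220922.tar is an assumption based on the public release:

    mkdir -p data/visa
    # extract the downloaded archive into data/visa (archive name assumed)
    tar -xf VisA_20220922.tar -C data/visa
    # build data/visa/meta.json
    python data/visa.py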

Train

Set parameters in train.sh.

  • train_data_path: the path to the training dataset
  • dataset: name of the training dataset, one of mvtec or visa
  • model: the CLIP model
  • pretrained: the pretrained weights
  • features_list: features to be mapped into the joint embedding space
  • image_size: the size of the images fed into the CLIP model
  • aug_rate: the probability of stitching images (MVTec AD only)

Then run the following command:

sh train.sh
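
For illustration, train.sh might invoke a training script with the parameters above passed as command-line flags; the script name train.py and the example values below are assumptions, not settings prescribed by this README:

    python train.py \
        --train_data_path ./data/mvtec \
        --dataset mvtec \
        --model ViT-L-14-336 \
        --pretrained openai \
        --features_list 6 12 18 24 \
        --image_size 518 \
        --aug_rate 0.2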

Test

Pretrained Models

We provide our pre-trained models in exps/pretrained, where mvtec_pretrained.pth is the model trained on the MVTec AD dataset and visa_pretrained.pth is the model trained on the VisA dataset.

Set parameters in test_zero_shot.sh.

  • data_path: the path to the test dataset
  • dataset: name of the test dataset, one of mvtec or visa
  • checkpoint_path: the path to the checkpoint of the model to be tested

Then, run the following command to test them in the zero-shot setting:

sh test_zero_shot.sh
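
For instance, to evaluate the MVTec-trained checkpoint on VisA in the zero-shot setting (the variable names inside the script are assumptions):

    # example values to set in test_zero_shot.sh
    data_path=./data/visa
    dataset=visa
    checkpoint_path=./exps/pretrained/mvtec_pretrained.pth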

Set parameters in test_few_shot.sh.

  • data_path: the path to the test dataset
  • dataset: name of the test dataset, one of mvtec or visa
  • checkpoint_path: the path to the checkpoint of the model to be tested
  • k_shot: the number of reference images

Then, run the following command to test them in the few-shot setting:

sh test_few_shot.sh
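
Similarly for the few-shot script, adding the number of reference images (again, variable names are assumptions):

    # example values to set in test_few_shot.sh
    data_path=./data/mvtec
    dataset=mvtec
    checkpoint_path=./exps/pretrained/visa_pretrained.pth
    k_shot=4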

Zero-shot Setting

Set parameters in test_zero_shot.sh.

  • data_path: the path to the test dataset
  • dataset: name of the test dataset, one of mvtec or visa
  • checkpoint_path: the path to the checkpoint of the model to be tested
  • model: the CLIP model
  • pretrained: the pretrained weights
  • features_list: features to be mapped into the joint embedding space
  • image_size: the size of the images fed into the CLIP model
  • mode: zero shot or few shot

Then run the following command:

sh test_zero_shot.sh
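
As a sketch, test_zero_shot.sh might call a test script with all of the parameters above; the script name test.py, the flag spelling, and the example values (including the hypothetical checkpoint path) are assumptions consistent with the training sketch:

    python test.py \
        --data_path ./data/visa \
        --dataset visa \
        --checkpoint_path ./exps/mvtec/checkpoint.pth \
        --model ViT-L-14-336 \
        --pretrained openai \
        --features_list 6 12 18 24 \
        --image_size 518 \
        --mode zero_shot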

Few-shot Setting

Set parameters in test_few_shot.sh.

  • data_path: the path to the test dataset
  • dataset: name of the test dataset, one of mvtec or visa
  • checkpoint_path: the path to the checkpoint of the model to be tested
  • model: the CLIP model
  • pretrained: the pretrained weights
  • features_list: features to be mapped into the joint embedding space
  • few_shot_features: features stored in the memory banks
  • image_size: the size of the images fed into the CLIP model
  • mode: zero shot or few shot
  • k_shot: the number of reference images
  • seed: the random seed

Then run the following command:

sh test_few_shot.sh
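
A corresponding sketch for test_few_shot.sh, under the same assumptions, with the few-shot-specific parameters added:

    python test.py \
        --data_path ./data/mvtec \
        --dataset mvtec \
        --checkpoint_path ./exps/visa/checkpoint.pth \
        --model ViT-L-14-336 \
        --pretrained openai \
        --features_list 6 12 18 24 \
        --few_shot_features 6 12 18 24 \
        --image_size 518 \
        --mode few_shot \
        --k_shot 4 \
        --seed 42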

Citation

If our work is helpful for your research, please consider citing:

@article{chen2023zero,
  title={A Zero-/Few-Shot Anomaly Classification and Segmentation Method for CVPR 2023 VAND Workshop Challenge Tracks 1\&2: 1st Place on Zero-shot AD and 4th Place on Few-shot AD},
  author={Chen, Xuhai and Han, Yue and Zhang, Jiangning},
  journal={arXiv preprint arXiv:2305.17382},
  year={2023}
}

Acknowledgements

We thank the authors of WinCLIP: Zero-/Few-Shot Anomaly Classification and Segmentation, on whose work our method builds.