
ViT-NeT: Interpretable Vision Transformers with Neural Tree Decoder


This repository contains the PyTorch implementation of ViT-NeT. It builds on code from the Swin Transformer.

[Figure 1: ViT-NeT overview]

Usage

Install PyTorch 1.7.0+ and torchvision 0.8.1+:

pip install torch torchvision

Also install NVIDIA apex from source (https://github.com/NVIDIA/apex).

Data preparation

Download and extract CUB_200_2011 images from https://drive.google.com/drive/folders/1cnQHqa8BkVx90-6-UojHnbMB0WhksSRc.

/path/to/data_root/
  CUB_200_2011/
    attributes/
    dataset/
      test_crop/
        class1/
          img1.jpg
      test_full/
        class1/
          img1.jpg
      train_corners/
        class1/
          img1.jpg
      train_crop/
        class1/
          img1.jpg
  train_test_split.txt
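Before launching training, it can help to verify that the extracted data matches the layout above. The helper below is a minimal sketch, not part of this repository; the function name and the list of expected split folders are assumptions taken from the directory tree shown here.

```python
# Hypothetical helper (not part of ViT-NeT) that checks the expected
# CUB_200_2011 folder layout under a given data root.
from pathlib import Path

# Split directories listed in the layout above (assumed complete).
EXPECTED_SPLITS = ["test_crop", "test_full", "train_corners", "train_crop"]

def check_cub_layout(data_root):
    """Return a list of expected dataset directories missing under data_root."""
    base = Path(data_root) / "CUB_200_2011" / "dataset"
    return [str(base / split) for split in EXPECTED_SPLITS
            if not (base / split).is_dir()]
```

Running `check_cub_layout("/path/to/data_root")` returns an empty list when all four split directories are present, and the missing paths otherwise.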

Evaluation

To evaluate ViT-NeT on the CUB_200_2011 test set, run:

python main.py --eval --resume /path/to/weight --cfg configs/swin_tree_patch4_window14_448_CUB.yaml

A pretrained ViT-NeT model for CUB is available on Google Drive.

Training

To train ViT-NeT on CUB_200_2011 on a single node with 4 GPUs for 300 epochs, run:

python -m torch.distributed.launch --nproc_per_node=4 --use_env main.py --cfg configs/swin_tree_patch4_window14_448_CUB.yaml --batch-size 16

Training starts from the Swin-B-224 ImageNet-22K pretrained model (https://github.com/microsoft/Swin-Transformer).
