
Vision Transformer with Deformable Attention

This repository contains the semantic segmentation code for the paper Vision Transformer with Deformable Attention [arXiv], and for DAT++: Spatially Dynamic Vision Transformer with Deformable Attention (extended version) [arXiv].

This code is based on mmsegmentation and Swin Segmentation. To get started, you can follow the instructions in Swin Transformer.

Dependencies

In addition to the dependencies of the classification codebase, the following packages are required:

  • mmcv-full == 1.4.0
  • mmsegmentation == 0.29.0
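
As a quick sanity check, the pinned versions can be verified from Python; a minimal sketch, assuming the packages import as mmcv and mmseg:

# Verify the pinned versions of the two packages above.
import mmcv
import mmseg

assert mmcv.__version__ == '1.4.0', f'expected mmcv-full 1.4.0, got {mmcv.__version__}'
assert mmseg.__version__ == '0.29.0', f'expected mmsegmentation 0.29.0, got {mmseg.__version__}'
print('mmcv-full:', mmcv.__version__)
print('mmsegmentation:', mmseg.__version__)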

Evaluating Pretrained Models

SemanticFPN

| Backbone | Schedule | mIoU | mIoU (MS) | Config | Pretrained weights |
|----------|----------|------|-----------|--------|--------------------|
| DAT-T++ | 80K | 48.4 | 48.8 | config | OneDrive / TsinghuaCloud |
| DAT-S++ | 80K | 49.9 | 50.7 | config | OneDrive / TsinghuaCloud |
| DAT-B++ | 80K | 50.4 | 51.1 | config | OneDrive / TsinghuaCloud |

UperNet

| Backbone | Schedule | mIoU | mIoU (MS) | Config | Pretrained weights |
|----------|----------|------|-----------|--------|--------------------|
| DAT-T++ | 160K | 49.4 | 50.3 | config | OneDrive / TsinghuaCloud |
| DAT-S++ | 160K | 50.5 | 51.2 | config | OneDrive / TsinghuaCloud |
| DAT-B++ | 160K | 51.0 | 51.5 | config | OneDrive / TsinghuaCloud |

To evaluate a pretrained checkpoint, please download the pretrained weights to your local machine and run the mmsegmentation test scripts as follows:

# single-gpu testing
python tools/test.py <CONFIG_FILE> <SEG_CHECKPOINT_FILE> --eval mIoU

# multi-gpu testing
bash tools/dist_test.sh <CONFIG_FILE> <SEG_CHECKPOINT_FILE> <GPU_NUM> --eval mIoU

# multi-gpu, MS testing
bash tools/dist_test.sh <CONFIG_FILE> <SEG_CHECKPOINT_FILE> <GPU_NUM> --aug-test --eval mIoU
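
Besides the test scripts, a single image can be run through a checkpoint with the mmsegmentation 0.x Python API; a minimal sketch, where the config and checkpoint paths are placeholders for the files linked above:

from mmseg.apis import inference_segmentor, init_segmentor

# Placeholder paths: use a config from this repo and its matching checkpoint.
config_file = 'configs/<CONFIG_FILE>.py'
checkpoint_file = 'checkpoints/<SEG_CHECKPOINT_FILE>.pth'

# Build the segmentor and load the weights (use device='cpu' if no GPU).
model = init_segmentor(config_file, checkpoint_file, device='cuda:0')

# result is a list with one per-pixel label map (a numpy array) per image.
result = inference_segmentor(model, 'demo.jpg')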

Please note: before training or evaluation, set the data_root variable in configs/_base_/datasets/ade20k.py to the path where the ADE20K data is stored.

Since evaluation does not require the backbone pretraining weights, you can set pretrained = None in <CONFIG_FILE>.
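
For reference, both settings live in the Python config files used by mmsegmentation; a minimal sketch of the two edits above, where the path is a placeholder and the exact surrounding keys may differ from this repo's configs:

# configs/_base_/datasets/ade20k.py -- point data_root at your ADE20K folder.
# The path below is a placeholder, not one shipped with this repo.
data_root = '/path/to/ADEChallengeData2016'

# <CONFIG_FILE> -- for evaluation only, the backbone pretraining weights are
# not needed, so the pretrained entry can be disabled (depending on the
# config, it may sit at top level or inside the model dict).
pretrained = None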

Training

To train a segmentor initialized with pretrained backbone weights, run:

# single-gpu training
python tools/train.py <CONFIG_FILE>

# multi-gpu training
bash tools/dist_train.sh <CONFIG_FILE> <GPU_NUM> 

Please note: make sure the pretrained variable in <CONFIG_FILE> is set to the path of the pretrained DAT model.
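
A minimal sketch of that edit, where the checkpoint filename is a placeholder for whichever DAT++ backbone weights you downloaded:

# <CONFIG_FILE> -- initialize the backbone from the classification-pretrained
# DAT weights before fine-tuning; the .pth filename below is a placeholder.
pretrained = 'pretrained/dat_pp_tiny_in1k.pth'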

Acknowledgements

This code is developed on top of Swin Transformer; we thank the authors for their efficient and neat codebase. The computational resources supporting this work were provided by Hangzhou High-Flyer AI Fundamental Research Co., Ltd.

Citation

If you find our work useful in your research, please consider citing:

@article{xia2023dat,
    title   = {DAT++: Spatially Dynamic Vision Transformer with Deformable Attention},
    author  = {Zhuofan Xia and Xuran Pan and Shiji Song and Li Erran Li and Gao Huang},
    journal = {arXiv preprint arXiv:2309.01430},
    year    = {2023}
}

@InProceedings{Xia_2022_CVPR,
    author    = {Xia, Zhuofan and Pan, Xuran and Song, Shiji and Li, Li Erran and Huang, Gao},
    title     = {Vision Transformer With Deformable Attention},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {4794-4803}
}

Contact

If you have any questions or concerns, please send an email to xzf23@mails.tsinghua.edu.cn.
