V-PETL: A Unified View of Visual PETL Techniques

[Teaser figure]

This is a PyTorch implementation of the paper "Towards a Unified View on Visual Parameter-Efficient Transfer Learning".

Bruce X.B. Yu¹, Jianlong Chang², Lingbo Liu¹, Qi Tian², Chang Wen Chen¹*

¹The Hong Kong Polytechnic University, ²Huawei Inc.

*denotes the corresponding author

Usage

Install

The code was tested with:

  • NVIDIA GeForce RTX 3090 (24 GB): CUDA 11.4+, PyTorch 1.13.0 + torchvision 0.14.0
  • timm 0.4.8
  • einops
  • easydict
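
The Python dependencies can be installed with pip. A minimal setup sketch, assuming the CUDA 11.7 wheel index (not part of the repository; pick the index that matches your driver):

# Hedged setup sketch: the cu117 wheel index below is an assumption.
pip install torch==1.13.0 torchvision==0.14.0 --extra-index-url https://download.pytorch.org/whl/cu117
pip install timm==0.4.8 einops easydict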

Data Preparation

See DATASET.md.

Prepare Pre-trained Checkpoints

We use Swin-B pre-trained on Kinetics-400 and Kinetics-600. Pre-trained models are available from the Video Swin Transformer repository. Put them in the folder ./pre_trained.
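
A minimal sketch of the expected layout (the filename matches the Kinetics-400 checkpoint used in the training command below):

# Create the checkpoint folder expected by the training command.
mkdir -p pre_trained
# After downloading from the Video Swin Transformer releases, e.g.:
#   pre_trained/swin_base_patch244_window877_kinetics400_22k.pth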

Training

Start

CUDA_VISIBLE_DEVICES=3 torchrun --standalone --nnodes=1 \
    --nproc_per_node=1 --master_port=22253 \
    main.py \
    --num_frames 8 \
    --sampling_rate 2 \
    --model swin_transformer \
    --finetune pre_trained/swin_base_patch244_window877_kinetics400_22k.pth \
    --output_dir output \
    --tuned_backbone_layer_fc True \
    --batch_size 16 --epochs 70 --blr 0.1 --weight_decay 0.0 --dist_eval \
    --data_path /media/bruce/ssd1/data/hmdb51 --data_set HMDB51 \
    --ffn_adapt \
    --att_prefix \
    --att_preseqlen 16 \
    --att_mid_dim 128 \
    --att_prefix_mode patt_kv \
    --att_prefix_scale 0.8
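
As the flag names suggest, --ffn_adapt and --att_prefix switch on the adapter (FFN) and prefix-style attention tuning branches, while --att_preseqlen, --att_mid_dim, --att_prefix_mode, and --att_prefix_scale configure the prefix length, bottleneck dimension, prefix variant, and scaling factor. To scale the same run across several GPUs, only the device list and process count change; a hypothetical sketch assuming GPUs 0 and 1 are available:

# Hypothetical multi-GPU variant of the command above (assumes GPUs 0 and 1).
# In most DDP launch scripts --batch_size is per process, so the effective
# batch size grows with --nproc_per_node.
CUDA_VISIBLE_DEVICES=0,1 torchrun --standalone --nnodes=1 \
    --nproc_per_node=2 --master_port=22253 \
    main.py \
    --num_frames 8 --sampling_rate 2 \
    --model swin_transformer \
    --finetune pre_trained/swin_base_patch244_window877_kinetics400_22k.pth \
    --output_dir output \
    --tuned_backbone_layer_fc True \
    --batch_size 16 --epochs 70 --blr 0.1 --weight_decay 0.0 --dist_eval \
    --data_path /media/bruce/ssd1/data/hmdb51 --data_set HMDB51 \
    --ffn_adapt --att_prefix \
    --att_preseqlen 16 --att_mid_dim 128 \
    --att_prefix_mode patt_kv --att_prefix_scale 0.8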

Acknowledgement

The project is based on PETL, Video Swin Transformer, and AdaptFormer. Thanks for their awesome work.

Citation

@article{yu2022vpetl,
      title={Towards a Unified View on Visual Parameter-Efficient Transfer Learning},
      author={Yu, Bruce X.B. and Chang, Jianlong and Liu, Lingbo and Tian, Qi and Chen, Chang Wen},
      journal={arXiv preprint arXiv:2210.00788},
      year={2022}
}

License

This project is released under the MIT license. See LICENSE for details.
