VIPMT

This is the implementation of our paper Multi-grained Temporal Prototype Learning for Few-shot Video Object Segmentation, accepted to the IEEE International Conference on Computer Vision (ICCV) 2023.

Environment

conda create -n VIPMT python=3.6
conda activate VIPMT
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch
conda install opencv cython
pip install easydict imgaug

Usage

Preparation

  1. Download the 2019 version of the Youtube-VIS dataset.
  2. Download the VSPW 480P dataset.
  3. Put the datasets in the ./data folder:
data
├─ Youtube-VOS
│   └─ train
│       ├─ Annotations
│       ├─ JPEGImages
│       └─ train.json
└─ VSPW_480p
    └─ data
  4. Install cocoapi for Youtube-VIS.
  5. Download the ImageNet pretrained backbone and put it into the pretrain_model folder:
pretrain_model
└─ resnet50_v2.pth
  6. Update config/config.py.
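Before training, it can save a failed run to verify that the layout above is in place. The following is a minimal sketch (not part of the repo) that checks the expected paths; adjust the list if your config/config.py points elsewhere:

```python
import os

# Paths taken from the directory layout described above.
EXPECTED = [
    "data/Youtube-VOS/train/Annotations",
    "data/Youtube-VOS/train/JPEGImages",
    "data/Youtube-VOS/train/train.json",
    "data/VSPW_480p/data",
    "pretrain_model/resnet50_v2.pth",
]

def check_layout(root="."):
    """Return the list of expected paths missing under `root`."""
    return [p for p in EXPECTED if not os.path.exists(os.path.join(root, p))]

missing = check_layout()
if missing:
    print("Missing paths:", missing)
else:
    print("Dataset layout looks complete.")
```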

Training

python train.py --group 1 --batch_size 4
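The flags above suggest a CLI of roughly this shape. The snippet below is a hypothetical reconstruction with argparse, not the repo's actual parser; in few-shot segmentation setups, --group conventionally selects the cross-validation fold of held-out classes:

```python
import argparse

def build_parser():
    # Hypothetical sketch of the CLI implied by the command above;
    # the real train.py may define additional options.
    parser = argparse.ArgumentParser(description="VIPMT training")
    parser.add_argument("--group", type=int, default=1,
                        help="fold/group index for few-shot cross-validation")
    parser.add_argument("--batch_size", type=int, default=4,
                        help="training batch size")
    return parser

args = build_parser().parse_args(["--group", "1", "--batch_size", "4"])
print(args.group, args.batch_size)  # 1 4
```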

Inference

python test.py --group 1
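Video object segmentation results are commonly scored with per-frame mask IoU (the J region-similarity measure). As a sketch of that metric, assuming binary NumPy masks rather than the repo's own evaluation code:

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-union between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

# Toy example: two overlapping 4x4 masks.
a = np.zeros((4, 4), dtype=np.uint8); a[:2, :] = 1   # top two rows
b = np.zeros((4, 4), dtype=np.uint8); b[1:3, :] = 1  # middle two rows
print(mask_iou(a, b))  # 4 shared pixels / 12 union pixels ≈ 0.333
```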

References

Part of the code is based upon IPMT and DANet. Thanks for their great work!
