
SaVos

This is the official implementation of the NeurIPS'22 paper Self-supervised Amodal Video Object Segmentation. The code was implemented by Jian Yao, Yuxin Hong, and Jianxiong Gao during their internship at the AWS Shanghai AI Lab.


The FishBowl dataset originates from Unsupervised Object Learning via Common Fate. In this repo, we provide the checkpoint and 1000 videos for testing. The train and test videos are generated by the same script with different seeds using the open-sourced code. The provided data includes the raw video data, visible masks predicted by PointTrack, and optical flow computed by Flownet2.
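
For orientation only, each video therefore contributes three aligned streams: RGB frames, per-object visible masks, and per-frame flow. A minimal loading sketch, assuming a hypothetical per-video layout with files frames.npy, visible_masks.npy, and flow.npy (the repo's actual serialization lives in FishBowl/FishBowl_dataset):

import numpy as np

# Hypothetical file names; the real dataset uses its own serialization.
frames = np.load("video_0000/frames.npy")                # (T, H, W, 3) uint8 RGB frames
visible_masks = np.load("video_0000/visible_masks.npy")  # (T, H, W) PointTrack instance ids
flow = np.load("video_0000/flow.npy")                    # (T-1, H, W, 2) Flownet2 forward flow

# Flow is defined between consecutive frames, hence one fewer entry.
assert frames.shape[0] == visible_masks.shape[0] == flow.shape[0] + 1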

Setup

pip install -r requirement.txt

FishBowl

Download FishBowl data

Download the test data and checkpoint.

Download the csv for evaluation

We filter the data (e.g., by occlusion rate, as described in the paper) and write the result into a CSV. To evaluate, please download the test files.

mv PATH_TO_TEST_DATA VideoAmodal/FishBowl/FishBowl_dataset/data/test_data
mv PATH_TO_CHECKPOINT VideoAmodal/FishBowl/log_bidirectional_consist_next_vm_label_1.5bbox_finalconsist/best_model.pt
mv PATH_TO_TEST_FILES VideoAmodal/FishBowl/test_files
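
To illustrate what the CSV does (the path and column names below are hypothetical, not the repo's actual schema), the evaluation files act as an index of object tracks kept after occlusion-rate filtering:

import pandas as pd

# Illustrative path and schema: one row per evaluated object track.
df = pd.read_csv("VideoAmodal/FishBowl/test_files/eval_index.csv")
# Keep only tracks whose occlusion rate lies in the evaluated range.
df = df[(df["occ_rate"] > 0.1) & (df["occ_rate"] < 0.9)]
print(len(df), "tracks selected for evaluation")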

Inference

cd FishBowl
TRAIN_METHOD="bidirectional_consist_next_vm_label_1.5bbox_finalconsist"
python -m torch.distributed.launch --nproc_per_node=4 \
main.py --mode test --training_method ${TRAIN_METHOD} \
--log_path log_${TRAIN_METHOD} --device cuda --batch_size 1 \
--data_path "" --num_workers 2 --loss_type BCE \
--enlarge_coef 1.5
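
The --enlarge_coef 1.5 flag matches the 1.5bbox in the method name; as we read it, each object's crop box is scaled by 1.5x around its center so the crop includes context beyond the visible mask. A minimal sketch of such an enlargement (the helper below is illustrative, not the repo's code):

def enlarge_bbox(x0, y0, x1, y1, coef=1.5, H=None, W=None):
    """Scale a box by `coef` around its center, clipped to the image."""
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    w, h = (x1 - x0) * coef, (y1 - y0) * coef
    nx0, ny0, nx1, ny1 = cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2
    if W is not None:
        nx0, nx1 = max(0, nx0), min(W, nx1)
    if H is not None:
        ny0, ny1 = max(0, ny0), min(H, ny1)
    return nx0, ny0, nx1, ny1

print(enlarge_bbox(10, 10, 30, 30, coef=1.5, H=100, W=100))  # (5.0, 5.0, 35.0, 35.0)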

Training

If you have generated the training data (raw video data, flow, and predicted visible masks), you can train with:

cd FishBowl
TRAIN_METHOD="bidirectional_consist_next_vm_label_1.5bbox_finalconsist"
python -m torch.distributed.launch --nproc_per_node=4 \
main.py --mode train --training_method ${TRAIN_METHOD} \
--log_path log_${TRAIN_METHOD} --device cuda --batch_size 3 \
--data_path "" --num_workers 2 --loss_type BCE --verbose \
--enlarge_coef 1.5 2>&1 | tee log_${TRAIN_METHOD}.log
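
The --loss_type BCE flag selects a binary cross-entropy mask loss. For orientation only, BCE supervision of amodal mask logits against pseudo-label masks typically looks like this (tensor names are illustrative):

import torch
import torch.nn.functional as F

logits = torch.randn(3, 1, 64, 64, requires_grad=True)        # predicted amodal mask logits (B, 1, H, W)
target = torch.randint(0, 2, (3, 1, 64, 64)).float()          # pseudo-label masks in {0, 1}
loss = F.binary_cross_entropy_with_logits(logits, target)     # per-pixel BCE, averaged
loss.backward()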

Kins-Car

Download Kitti & Kins data

Download the data.

mv PATH_TO_KINS_VIDEO_CAR Kins_Car/dataset/data

Training

cd Kins_Car
TRAIN_METHOD="bidirectional_consist_next_vm_label_1.5bbox_finalconsist"
python -m torch.distributed.launch --nproc_per_node=4 \
main.py --mode train --training_method ${TRAIN_METHOD} \
--log_path log_${TRAIN_METHOD} --device cuda --batch_size 2 \
--data_path "" --num_workers 2 --loss_type BCE --verbose \
--enlarge_coef 1.5 2>&1 | tee log_${TRAIN_METHOD}.log

Inference

cd Kins_Car
TRAIN_METHOD="bidirectional_consist_next_vm_label_1.5bbox_finalconsist"
python -m torch.distributed.launch --nproc_per_node=1 test.py --training_method ${TRAIN_METHOD}

Evaluation

cd Kins_Car
python eval.py
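
The evaluation reports amodal segmentation quality; the core metric in the paper is mean IoU between predicted and ground-truth amodal masks. A self-contained sketch of that computation (not the repo's exact script):

import numpy as np

def mask_iou(pred, gt):
    """IoU of two boolean masks; returns 1.0 when both are empty."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

preds = [np.random.rand(64, 64) > 0.5 for _ in range(4)]  # stand-in predictions
gts = [np.random.rand(64, 64) > 0.5 for _ in range(4)]    # stand-in ground truth
print("mIoU:", np.mean([mask_iou(p, g) for p, g in zip(preds, gts)]))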

Visualization

cd Kins_Car
python run_video_res.py

Chewing Gum Dataset

For those interested in the synthetic dataset, we also provide the script to generate the Chewing Gum Dataset in utils/gen_chewgum.py.
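
The appeal of a synthetic dataset of this kind is that objects move and occlude each other while the full (amodal) masks are known by construction. A toy illustration of that idea (ours, far simpler than utils/gen_chewgum.py):

import numpy as np

H, W, T = 64, 64, 8
frames, amodal_masks = [], []
yy, xx = np.mgrid[0:H, 0:W]
for t in range(T):
    # A moving disk (target object) passing behind a static occluder bar.
    disk = (yy - 32) ** 2 + (xx - (8 + 6 * t)) ** 2 < 10 ** 2
    bar = (xx > 28) & (xx < 40)
    amodal_masks.append(disk)                 # full mask, known by construction
    visible = disk & ~bar                     # what the camera actually sees
    frames.append(visible.astype(np.uint8) * 255)
print(sum(m.sum() for m in amodal_masks), "amodal pixels over the clip")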
