
Causal inference framework for weakly-supervised temporal action localization

Our paper The Blessings of Unlabeled Background in Untrimmed Videos has been accepted by CVPR 2021.

Because WUM performs strongly as a base model, the released code implements TS-PCA on top of WUM (TS-PCA+WUM) on the THUMOS-14 dataset.

Recommended Environment

  • PyTorch 1.4
  • numpy and scipy, among others
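
As a quick sanity check of the environment, a minimal sketch (we tested with PyTorch 1.4; other versions are untested here):

import torch
import numpy as np
import scipy

# The code was developed against PyTorch 1.4; newer versions may or may not work.
print(torch.__version__)   # expect something like '1.4.0'
print(np.__version__)
print(scipy.__version__)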

Data Preparation

  • Prepare the THUMOS-14 dataset.
    • Three test videos (270, 1292, 1496) are excluded, following WUM.
  • We recommend re-extracting the I3D features yourself using this repo:
    • I3D Features
    • For convenience, we directly use the features provided by the author of WUM. The features are here.
  • Place the features inside the dataset folder.
    • The data structure is the same as WUM's, shown below; a minimal loading sketch follows the CAS layout at the end of this section.
├── dataset
   └── THUMOS14
       ├── gt.json
       ├── split_train.txt
       ├── split_test.txt
       └── features
           ├── train
               ├── rgb
                   ├── video_validation_0000051.npy
                   ├── video_validation_0000052.npy
                   └── ...
               └── flow
                   ├── video_validation_0000051.npy
                   ├── video_validation_0000052.npy
                   └── ...
           └── test
               ├── rgb
                   ├── video_test_0000004.npy
                   ├── video_test_0000006.npy
                   └── ...
               └── flow
                   ├── video_test_0000004.npy
                   ├── video_test_0000006.npy
                   └── ...
  • Place the CAS files generated by the WUM model in the WTAL_result_numpy folder. You can either train your own WUM model or use the pre-trained model provided by the author of WUM; please refer to the official WUM repo for more details. The data structure is shown below.
├── WTAL_result_numpy
   ├── video-id_cas.npy
   ├── video-id_feat_act.npy
   ├── video-id_feat_bkg.npy
   ├── video-id_features.npy
   ├── video-id_score.npy
   └── ...
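
For reference, here is a minimal sketch of loading one video's two-stream features and its WUM outputs under the layouts above. The helper names are ours, and the per-stream feature dimension of 1024 (2048 after concatenation) is an assumption based on the standard I3D setup, not something this repo guarantees.

import os
import numpy as np

def load_i3d_features(root, split, video_id):
    # Two-stream I3D features stored per video, following the layout above.
    rgb = np.load(os.path.join(root, "THUMOS14", "features", split, "rgb", video_id + ".npy"))
    flow = np.load(os.path.join(root, "THUMOS14", "features", split, "flow", video_id + ".npy"))
    # Concatenating RGB and flow along the feature axis is the usual
    # two-stream convention, giving (T, 2048) if each stream is 1024-d.
    return np.concatenate([rgb, flow], axis=-1)

def load_wum_outputs(result_dir, video_id):
    # Per-video arrays saved by the WUM model, named video-id_<key>.npy.
    keys = ["cas", "feat_act", "feat_bkg", "features", "score"]
    return {k: np.load(os.path.join(result_dir, video_id + "_" + k + ".npy")) for k in keys}

feats = load_i3d_features("dataset", "test", "video_test_0000004")
wum = load_wum_outputs("WTAL_result_numpy", "video_test_0000004")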

Usage

Running

You can optionally adjust the default parameters in options.py.

Train and evaluate the TS-PCA confounder:

$ sh run.sh
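
run.sh wraps the training entry point, with parameters coming from options.py. As a rough, hypothetical illustration of that pattern (none of these flag names or defaults are taken from the actual options.py):

import argparse

# Hypothetical sketch of the options.py pattern only; the real flags and
# defaults are defined in options.py in this repo and may differ.
def parse_args():
    parser = argparse.ArgumentParser(description="TS-PCA options (illustrative)")
    parser.add_argument("--data-path", default="dataset", help="root of the THUMOS14 features")
    parser.add_argument("--result-path", default="WTAL_result_numpy", help="folder with WUM outputs")
    parser.add_argument("--lr", type=float, default=1e-4, help="learning rate (placeholder)")
    parser.add_argument("--seed", type=int, default=0, help="random seed (placeholder)")
    return parser.parse_args()

if __name__ == "__main__":
    print(parse_args())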

Evaluation

Test with a pretrained model:

$ sh run_eval.sh
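
Internally, evaluation amounts to restoring saved weights and running the model in eval mode. A minimal sketch of that step, where the model class and checkpoint path are placeholders rather than this repo's actual code:

import torch
import torch.nn as nn

# Placeholder model: 2048-d features -> 20 THUMOS classes + background.
# The real architecture and checkpoint path come from this repo's code
# and run_eval.sh, not from this snippet.
model = nn.Linear(2048, 21)
torch.save(model.state_dict(), "checkpoint.pth")  # stands in for the provided pretrained weights
model.load_state_dict(torch.load("checkpoint.pth", map_location="cpu"))
model.eval()
with torch.no_grad():
    scores = model(torch.randn(1, 2048))  # one snippet-level feature -> class scores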

References

We referenced the repositories below for our code.

Citation

Please cite the following paper if you find our work useful in your research.

@inproceedings{wtal_blessing,
  author    = {Yuan Liu and
               Jingyuan Chen and
               Zhenfang Chen and
               Bing Deng and
               Jianqiang Huang and 
               Hanwang Zhang},
  title     = {The Blessings of Unlabeled Background in Untrimmed Videos},
  booktitle = {CVPR},
  year      = {2021},
}
