
Can Shuffling Video Benefit Temporal Bias Problem for Temporal Grounding

Code for the ECCV 2022 paper "Can Shuffling Video Benefit Temporal Bias Problem: A Novel Training Framework for Temporal Grounding" [arXiv].

Installation

We provide an environment file for Anaconda. You can build the conda environment with:

conda env create -f environment.yml

Dataset Preparation

Features and Pretrained Models

You can download our features for Charades-STA and ActivityNet Captions, as well as the pretrained models of our method on the re-divided splits, via the following links (Box Drive, Google Drive, Baidu Drive; code: yfar).

(For ActivityNet Captions, we extracted the I3D features from the original videos using an open-source implementation of I3D, with a stride of 16 at 16 fps.)

Please put the video feature files ('VID.npy', one file per video ID) into the directories data/Charades/i3d_feature and data/ANet/i3d_feature, respectively.
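
With a stride of 16 at 16 fps, each I3D feature vector covers roughly one second of video, so you can sanity-check a downloaded feature file by comparing its length to the video duration. A minimal sketch in Python (the feature dimension noted in the comment is a common I3D value, not confirmed by this repository):

import numpy as np

# Load one feature file; replace VID with an actual video ID.
feat = np.load('data/ANet/i3d_feature/VID.npy')

# Expected shape: (num_segments, feature_dim). With stride 16 at 16 fps,
# num_segments should be close to the video duration in seconds;
# feature_dim is typically 1024 for I3D (an assumption here).
print(feat.shape)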

Please put the pretrained models into the directories grounding/ckp/charades_cd and grounding/ckp/anet_cd, respectively.

Word Embeddings

For Charades-STA, we provide the word embedding files directly in this GitHub repository; no extra steps are needed.

For ActivityNet Captions, due to GitHub's file size limit, you need to download the word embeddings from the links above (Box Drive, Google Drive, Baidu Drive; code: yfar) and put them into the directory data/ANet/words.
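
To verify the downloaded embeddings before training, a quick inspection is enough. The file name below is hypothetical; list the contents of data/ANet/words to see the actual files:

import numpy as np

# 'embeddings.npy' is an illustrative name, not the repository's actual file.
emb = np.load('data/ANet/words/embeddings.npy')
print(emb.shape)  # expected: (vocab_size, embedding_dim)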

Quick Start

conda activate HLTI
cd grounding

Charades-CD

Train:

python train.py --gpu_id=0 --cfg charades_cd_i3d.yml --alias one_name

The checkpoints and prediction results will be saved in grounding/runs/DATASET/.

Evaluate:

python test.py --gpu_id=0 --cfg charades_cd_i3d.yml --alias test

You can change which model is evaluated in the corresponding config file. By default, test.py uses the pretrained model we provide.

ActivityNet-CD

Train:

python train.py --gpu_id=0 --cfg anet_cd_i3d.yml --alias one_name

Evaluate:

python test.py --gpu_id=0 --cfg anet_cd_i3d.yml --alias test

About Pretrained Models

We provide the corresponding prediction results, parameter settings, and evaluation result files in grounding/ckp for both datasets.
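
For reference, temporal grounding predictions like those in grounding/ckp are conventionally scored with the temporal IoU (tIoU) between predicted and ground-truth moments (e.g. R@1, IoU=0.5). The helper below is a generic sketch of that metric, not this repository's evaluation code:

def temporal_iou(pred, gt):
    # pred and gt are (start, end) moments in seconds.
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

print(temporal_iou((2.0, 7.5), (3.0, 8.0)))  # 0.75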

Baseline

We also provide the implementation of the baseline (QAVE).

Charades-CD

Train:

python train_baseline.py --gpu_id=0 --cfg charades_cd_i3d.yml --alias one_name

Evaluate:

python test_baseline.py --gpu_id=0 --cfg charades_cd_i3d.yml --alias test

Please specify the model to be evaluated in the corresponding config file.

ActivityNet-CD

Train:

python train_baseline.py --gpu_id=0 --cfg anet_cd_i3d.yml --alias one_name

Evaluate:

python test_baseline.py --gpu_id=0 --cfg anet_cd_i3d.yml --alias test

Citation

Please cite our papers if you find them useful for your research.

@inproceedings{hao2022shufflevideos,
  author    = {Hao, Jiachang and Sun, Haifeng and Ren, Pengfei and Wang, Jingyu and Qi, Qi and Liao, Jianxin},
  title     = {Can Shuffling Video Benefit Temporal Bias Problem: A Novel Training Framework for Temporal Grounding},
  booktitle = {European Conference on Computer Vision},
  year      = {2022},
}

The baseline QAVE is presented in:

@article{hao2022qave,
  title={Query-Aware Video Encoder for Video Moment Retrieval},
  author={Hao, Jiachang and Sun, Haifeng and Ren, Pengfei and Wang, Jingyu and Qi, Qi and Liao, Jianxin},
  journal={Neurocomputing},
  year={2022},
  publisher={Elsevier}
}
