# Exploiting Temporal Relationships in Video Moment Localization with Natural Language

Songyang Zhang, Jinsong Su, Jiebo Luo. "Exploiting Temporal Relationships in Video Moment Localization with Natural Language." In ACM Multimedia 2019.

arXiv preprint

## Introduction

Moment localization with temporal language aims to locate the segment in a video referred to by a temporal-language query, i.e., a description of relationships between multiple events in the video. It requires the model both to localize a single event and to reason over multiple events. For example, the description *kitten paws at the bottle before the bottle is dropped* is composed of a main event (*kitten paws at the bottle*), a context event (*the bottle is dropped*), and their temporal ordering (*before*).

In this work,

- We propose a novel model called Temporal Compositional Modular Network (TCMN) that first learns to softly decompose a sentence into three descriptions with respect to the main event, the context event, and the temporal signal, and then guides cross-modal feature matching by measuring the visual similarity and location similarity between each segment and the decomposed descriptions.
- We further form an ensemble model to handle multiple events that may be reflected in different visual modalities.
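The soft decomposition above is learned end-to-end in the paper; as a rough illustration of the general idea, attention-based soft decomposition pools word features under three sets of attention weights, one per module. The sketch below is a hypothetical minimal version (the function names, random toy inputs, and the use of plain dot-product attention are our assumptions, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_decompose(word_feats, module_queries):
    """word_feats: (T, D) word embeddings for a T-word sentence.
    module_queries: (3, D) hypothetical learned queries, one per module
    (main event, context event, temporal signal).
    Returns (3, D): one attention-pooled description per module."""
    scores = module_queries @ word_feats.T    # (3, T) word relevance per module
    weights = softmax(scores, axis=-1)        # soft assignment of words to modules
    return weights @ word_feats               # (3, D) pooled descriptions

rng = np.random.default_rng(0)
words = rng.normal(size=(8, 16))      # toy sentence: 8 words, 16-dim features
queries = rng.normal(size=(3, 16))    # toy module queries
desc = soft_decompose(words, queries)  # (3, 16): main / context / temporal
```

Because the assignment is soft, every word contributes to every module, just with different weights; the model never has to make a hard segmentation of the sentence.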

## Main Results

**Main result on TEMPO-HL**

|         | R@1   | R@5   | mIoU  |
|---------|-------|-------|-------|
| DiDeMo  | 28.77 | –     | 42.37 |
| Before  | 35.47 | –     | 59.28 |
| After   | 17.91 | –     | 40.79 |
| Then    | 20.47 | –     | 50.78 |
| While   | 18.81 | –     | 42.95 |
| Average | 24.29 | 76.98 | 47.24 |
**Main result on TEMPO-TL**

|         | R@1   | R@5   | mIoU  |
|---------|-------|-------|-------|
| DiDeMo  | 28.90 | –     | 41.03 |
| Before  | 37.68 | –     | 44.78 |
| After   | 32.61 | –     | 42.77 |
| Then    | 31.16 | –     | 55.46 |
| Average | 32.85 | 78.73 | 46.01 |
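The mIoU columns report the mean temporal intersection-over-union between the predicted segment and the ground truth, and R@1/R@5 count how often a sufficiently good segment appears in the top 1/5 predictions. As a quick reference, temporal IoU for two segments given as (start, end) pairs can be computed like this (a generic sketch, not the repository's exact evaluation code):

```python
def temporal_iou(pred, gt):
    """IoU between two temporal segments given as (start, end) pairs
    (seconds or segment indices). Returns 0.0 when they do not overlap."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

# prediction [2, 6] vs ground truth [4, 8]:
# intersection = 2, union = 6, IoU = 1/3
iou = temporal_iou((2, 6), (4, 8))
```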

## Quick Start

### Prerequisites

There are a few dependencies required to run the code. The code is written in Python 3.

### Data Preparation

All video features are provided by DiDeMo and TEMPO. Please download the features following their instructions.

You can also run `setup.sh` for a quick setup.

### Training a Single Temporal Compositional Modular Network

```bash
CUDA_VISIBLE_DEVICES=0 python train.py -feature_type_0 rgb -feature_type_1 rgb -dataset_name TEMPO_HL -gpu 0 -vis_hidden_size 500 -lang_hidden_size 600 -att_hidden_size 250 -hidden_size 250 -batch_size 16 -verbose
CUDA_VISIBLE_DEVICES=1 python train.py -feature_type_0 flow -feature_type_1 rgb -dataset_name TEMPO_HL -gpu 0 -vis_hidden_size 500 -lang_hidden_size 600 -att_hidden_size 250 -hidden_size 250 -batch_size 16 -verbose
CUDA_VISIBLE_DEVICES=2 python train.py -feature_type_0 rgb -feature_type_1 flow -dataset_name TEMPO_HL -gpu 0 -vis_hidden_size 500 -lang_hidden_size 600 -att_hidden_size 250 -hidden_size 250 -batch_size 16 -verbose
CUDA_VISIBLE_DEVICES=3 python train.py -feature_type_0 flow -feature_type_1 flow -dataset_name TEMPO_HL -gpu 0 -vis_hidden_size 500 -lang_hidden_size 600 -att_hidden_size 250 -hidden_size 250 -batch_size 16 -verbose
```

### Testing a Single Temporal Compositional Modular Network

Our pretrained models are provided here.

Please download them first, unzip them into the `checkpoints` folder, and then run the following commands:

```bash
CUDA_VISIBLE_DEVICES=0 python test.py -feature_type_0 rgb -feature_type_1 rgb -batch_size 16 -hidden_size 250 -att_hidden_size 250 -vis_hidden_size 500 -lang_hidden_size 500 -dataset_name TEMPO_HL -split test -verbose
CUDA_VISIBLE_DEVICES=1 python test.py -feature_type_0 flow -feature_type_1 rgb -batch_size 16 -hidden_size 250 -att_hidden_size 250 -vis_hidden_size 500 -lang_hidden_size 500 -dataset_name TEMPO_HL -split test -verbose
CUDA_VISIBLE_DEVICES=2 python test.py -feature_type_0 rgb -feature_type_1 flow -batch_size 16 -hidden_size 250 -att_hidden_size 250 -vis_hidden_size 500 -lang_hidden_size 500 -dataset_name TEMPO_HL -split test -verbose
CUDA_VISIBLE_DEVICES=3 python test.py -feature_type_0 flow -feature_type_1 flow -batch_size 16 -hidden_size 250 -att_hidden_size 250 -vis_hidden_size 500 -lang_hidden_size 500 -dataset_name TEMPO_HL -split test -verbose
```

The results on the test set will be written to the `results` folder.

You can also change the model path in `test.py` to point to your own trained model.

### Model Ensemble

Run the following command to get the ensemble result:

```bash
python late_fusion.py
```
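The exact fusion scheme lives in `late_fusion.py`; a common form of late fusion, sketched below under our own assumptions (the function name and toy inputs are hypothetical), is to average the per-segment matching scores produced by the single-modality models and rank segments by the fused score:

```python
import numpy as np

def late_fusion(score_lists):
    """score_lists: list of per-model score arrays, each of shape
    (num_segments,), where higher means a better match for the query.
    Averages the model scores and ranks segments by the fused score."""
    fused = np.mean(np.stack(score_lists), axis=0)
    ranking = np.argsort(-fused)   # indices of segments, best first
    return fused, ranking

# Toy example: two models scoring four candidate segments.
m1 = np.array([0.1, 0.7, 0.2, 0.4])
m2 = np.array([0.3, 0.5, 0.6, 0.2])
fused, ranking = late_fusion([m1, m2])
# fused = [0.2, 0.6, 0.4, 0.3]; the top-ranked segment is index 1
```

Averaging lets a segment that is strong in one modality (e.g. flow) survive a weak score in another, which is the point of combining the RGB/flow model variants.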
