Im2Flow: Motion Hallucination from Static Images for Action Recognition

Project Page: http://vision.cs.utexas.edu/projects/im2flow [arXiv]

This repository contains the code for our CVPR 2018 paper Im2Flow. The code builds heavily on Phillip Isola's pix2pix implementation.

If you find our code or project useful in your research, please cite:

    @inproceedings{gao2018im2flow,
      title={Im2Flow: Motion Hallucination from Static Images for Action Recognition},
      author={Gao, Ruohan and Xiong, Bo and Grauman, Kristen},
      booktitle={CVPR},
      year={2018}
    }

1) Preparation

  1. Install Torch: http://torch.ch/docs/getting-started.html

  2. Clone the repository:

git clone https://github.com/rhgao/Im2Flow.git

  3. Download our pre-trained model using the following script. This model is trained on the UCF-101 dataset.

bash model/download_model.sh Im2Flow

2) Training

Put video frames under directory /YOUR_TRAINING_DATA_ROOT/A and the corresponding ground-truth flow images under directory /YOUR_TRAINING_DATA_ROOT/B. Once the data is formatted this way, use the following script to generate paired training data:

python combine_A_and_B.py --fold_A /YOUR_TRAINING_DATA_ROOT/A --fold_B /YOUR_TRAINING_DATA_ROOT/B --fold_AB /YOUR_TRAINING_DATA_ROOT/AB
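
For reference, the pairing step simply concatenates each frame and its flow image side by side into a single AB image (the pix2pix data format). Below is a minimal Python sketch of that step, assuming matching filenames and image sizes in A and B; the provided combine_A_and_B.py is what you should actually run.

    # Minimal sketch of the pairing step (placeholder paths): each training example
    # becomes one AB image with the RGB frame (A) and its flow image (B) side by side.
    import os
    import numpy as np
    from PIL import Image

    fold_A = "/YOUR_TRAINING_DATA_ROOT/A"
    fold_B = "/YOUR_TRAINING_DATA_ROOT/B"
    fold_AB = "/YOUR_TRAINING_DATA_ROOT/AB"
    os.makedirs(fold_AB, exist_ok=True)

    for name in sorted(os.listdir(fold_A)):
        im_A = np.array(Image.open(os.path.join(fold_A, name)).convert("RGB"))
        im_B = np.array(Image.open(os.path.join(fold_B, name)).convert("RGB"))
        im_AB = np.concatenate([im_A, im_B], axis=1)  # concatenate along width
        Image.fromarray(im_AB).save(os.path.join(fold_AB, name))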

Download the pre-trained motion content loss network:

bash model/download_model.sh resnet-18_motion

Use the following command to train the Im2Flow network:

DATA_ROOT=/YOUR_TRAINING_DATA_ROOT/AB name=flow_UCF_train continue_train=0 \
  save_display_freq=5000 which_direction=AtoB loadSize=286 fineSize=256 \
  batchSize=32 lr=0.0002 print_freq=20 niter=30 save_epoch_freq=5 \
  decay_epoch_freq=10 save_latest_freq=2000 use_GAN=0 lambda_L2=50 \
  lambda_ContentLoss=1 th train.lua |& tee train.log

3) Flow Prediction

Use the following command to predict flow for the frames under demo_images with the pre-trained model:

DATA_ROOT=demo_images model_path=model/Im2Flow.t7 th test.lua
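
By default, the predicted flow images and the corresponding input frames end up under results/output/ and results/input/ (the paths read by the visualization step below). A quick sanity check in Python, assuming those default locations:

    # Quick sanity check that test.lua produced output images.
    # Paths follow the defaults read by visualizeFlow.py below.
    import os
    from PIL import Image

    out_dir = "results/output"
    inp_dir = "results/input"

    outputs = sorted(os.listdir(out_dir))
    print(len(outputs), "predicted flow images in", out_dir)

    # Inspect the first prediction; it is saved as a color-coded flow image.
    first = Image.open(os.path.join(out_dir, outputs[0]))
    print("size:", first.size, "mode:", first.mode)

    # Assumes inputs and outputs share filenames (check your results/ folder).
    print("matching input exists:", os.path.exists(os.path.join(inp_dir, outputs[0])))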

4) Flow Visualization

Use the following script to convert the predicted flow images into arrow visualizations overlaid on the corresponding RGB frames:

python visualizeFlow.py --flowImgInputDir results/output/ --rgbImgDir results/input/ --arrowImgOutDir visualization
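
visualizeFlow.py handles the repo's color-coded flow images. If you instead have a raw (u, v) flow field (e.g. from an optical flow tool) and just want a quick arrow overlay, a generic matplotlib sketch (not the decoding logic used by visualizeFlow.py) looks like this:

    # Generic arrow visualization for a raw (u, v) optical flow field.
    # This is NOT the decoding logic of visualizeFlow.py (which reads the
    # repo's color-coded flow images); it is a standalone sketch for raw flow.
    import numpy as np
    import matplotlib.pyplot as plt

    def draw_flow_arrows(rgb, flow_u, flow_v, step=16):
        """Overlay subsampled flow arrows on an RGB frame."""
        h, w = flow_u.shape
        ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
        plt.imshow(rgb)
        plt.quiver(xs, ys, flow_u[ys, xs], flow_v[ys, xs],
                   color="yellow", angles="xy", scale_units="xy", scale=1)
        plt.axis("off")
        plt.show()

    # Example with synthetic data (replace with a real frame and flow field):
    rgb = np.zeros((256, 256, 3), dtype=np.uint8)
    u = 5 * np.random.randn(256, 256)
    v = 5 * np.random.randn(256, 256)
    draw_flow_arrows(rgb, u, v)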