YueWuHKUST/CVPR2020-FutureVideoSynthesis


Official implementation of the paper "Future Video Synthesis with Object Motion Prediction" (CVPR 2020).

Environment

Python 3

PyTorch 1.0.0

Components

Our framework consists of several components. This repo contains only the modified files for Generative Inpainting and Deep-Flow-Guided-Video-Inpainting.

We use PWC-Net to compute optical flow. Please put its code under the './*/models/' directory of each component.
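The flow maps that PWC-Net produces are typically used to warp pixels from one frame into the next. A minimal NumPy sketch of backward warping with nearest-neighbour sampling (the `backward_warp` helper is hypothetical and stands in for the bilinear `grid_sample`-style warping a real pipeline would use):

```python
import numpy as np

def backward_warp(frame, flow):
    """Warp `frame` (H, W) using a backward flow field (H, W, 2).

    flow[y, x] = (dx, dy) points from target pixel (x, y) back to its
    source location in `frame`. Nearest-neighbour sampling for clarity;
    a real pipeline would use bilinear interpolation.
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]

# A constant flow of (-1, 0) shifts the image one pixel to the right.
img = np.arange(9, dtype=float).reshape(3, 3)
flow = np.zeros((3, 3, 2))
flow[..., 0] = -1.0
warped = backward_warp(img, flow)  # warped[y, x] == img[y, max(x - 1, 0)]
```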

Data preparation

This section describes how the data preprocessing is done. The test results are released in the next section. If you want the preprocessed test dataset to verify the results, please drop me an email.

data
├── Cityscapes
│   ├── depth
│   ├── dynamic
│   ├── for_val_data
│   ├── instance_upsnet
│   │   └── origin_data
│   │   └── val
│   ├── leftImg8bit_sequence_512p (Cityscapes images in 512x1024)
│   │   └── val
│   │   │     └── frankfurt
│   │   │     └── lindau
│   │   │     └── munster
│   ├── non_rigid_mask
│   ├── semantic 
│   ├── small_object_mask
├── Kitti
│   ├── depth
│   ├── dynamic
│   ├── for_val_data
│   ├── instance_upsnet
│   │   └── origin_data
│   │   └── val
│   ├── raw_data_56p (Kitti images in 256x832)
│   │   └── val
│   │   │    └── 2011_09_26_drive_0060_sync
│   │   │    │    └── image_02
│   │   │    │    │     └── data
│   │   │    └── 2011_09_26_drive_0084_sync
│   │   │    └── 2011_09_26_drive_0093_sync
│   │   │    └── 2011_09_26_drive_0096_sync
│   ├── non_rigid_mask
│   ├── semantic 
│   ├── small_object_mask
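A quick way to confirm the data has been laid out as in the tree above is to walk the expected sub-directories and report any that are missing. This checker (a hypothetical helper, not part of the repo) encodes just the top-level folders from the tree:

```python
import os

# Top-level sub-directories expected under data/, per the tree above.
REQUIRED = {
    "Cityscapes": ["depth", "dynamic", "for_val_data", "instance_upsnet",
                   "leftImg8bit_sequence_512p", "non_rigid_mask",
                   "semantic", "small_object_mask"],
    "Kitti": ["depth", "dynamic", "for_val_data", "instance_upsnet",
              "raw_data_56p", "non_rigid_mask",
              "semantic", "small_object_mask"],
}

def missing_dirs(root):
    """Return the expected sub-directories that are absent under `root`."""
    missing = []
    for dataset, subdirs in REQUIRED.items():
        for sub in subdirs:
            path = os.path.join(root, dataset, sub)
            if not os.path.isdir(path):
                missing.append(path)
    return missing
```

Running `missing_dirs("data")` before training or testing surfaces preprocessing steps that were skipped.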

Pretrained Models and Results

The test results (without intermediate outputs) and the pretrained models are on OneDrive

The test results with the intermediate outputs of each test step are on Google Drive

The details of the test setting can be found in link

If you want to test the model from the beginning, the precise test steps are in link

Evaluation

We use LPIPS (Learned Perceptual Image Patch Similarity) for evaluation.
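Evaluation amounts to scoring each predicted frame against its ground-truth frame and averaging over the sequence. A sketch of that loop, with a plain mean-squared error standing in for the actual LPIPS network call (the real metric would run an `lpips` forward pass per frame pair; lower is better for both):

```python
import numpy as np

def frame_distance(pred, gt):
    # Placeholder per-frame distance; the actual evaluation would
    # substitute an LPIPS forward pass on this frame pair.
    return float(np.mean((pred - gt) ** 2))

def evaluate_sequence(preds, gts):
    """Average the per-frame distance over a predicted sequence."""
    assert len(preds) == len(gts) and len(preds) > 0
    return sum(frame_distance(p, g) for p, g in zip(preds, gts)) / len(preds)

# Two predicted frames vs. all-zero ground truth: distances 0.0 and 1.0.
score = evaluate_sequence(
    [np.zeros((2, 2)), np.ones((2, 2))],
    [np.zeros((2, 2)), np.zeros((2, 2))],
)  # 0.5
```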

Citation

If you use our code or paper, please cite:

@InProceedings{Yue_2020_CVPR,
author = {Yue Wu and Rongrong Gao and Jaesik Park and Qifeng Chen},
title = {Future Video Synthesis with Object Motion Prediction},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}

Contact

If you have any questions, please feel free to contact me (Yue Wu, ywudg@connect.ust.hk).

Acknowledgement

The code is developed based on Vid2Vid (https://github.com/NVIDIA/vid2vid).
