

Implementation of SCGN from the paper "Deep View Synthesis via Self-Consistent Generative Networks" [arxiv].

We propose a novel end-to-end deep generative model, the self-consistent generative network (SCGN), which synthesizes novel views from given input views using image content only.


Requirements

  • Python == 3.x
  • OpenCV-Python
  • Tensorflow >= 1.12
  • Platform: Linux


1. Clone

git clone
cd SCGN/

2. Data Preparation

2.1. Multi-PIE

  • Download dataset from here.
  • Following the dataset's Content page, we collect images of varied poses (taking 05_1 as the target pose) under the brightest illumination (i.e., the 7th illumination condition).
  • Each original image is center-cropped and resized to 224x224.

2.2. KITTI

  • Download the odometry data (color) from here.
  • Collect the first 11 sequences from /dataset/sequences/${seq_id}/image_2/, where seq_id ranges from 00 to 10, and place them in the folder ./datasets/kitti/data. As with Multi-PIE, each raw image is center-cropped and resized to 224x224.
  • Run the script in ./datasets/kitti/ to perform the train-test split and random sampling; train.csv and test.csv will be generated in ./datasets/kitti/split.
  • Alternatively, we provide the prepared KITTI dataset. [Google Drive] [Baidu Cloud, pwd: 57gq]
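
The split step above could be sketched as follows. This is a hypothetical reconstruction, not the repository's script: the choice of test sequences, the (left, target, right) consecutive-frame triplet layout, and the `max_per_seq` subsampling are all assumptions.

```python
import csv
import random
from pathlib import Path

def make_split(root="./datasets/kitti/data", out_dir="./datasets/kitti/split",
               test_seqs=("09", "10"), max_per_seq=None, seed=0):
    """Split KITTI sequences into train/test CSVs of frame triplets.

    Hypothetical sketch: test_seqs, the triplet layout, and the
    subsampling are assumptions, not the repository's actual logic.
    """
    random.seed(seed)
    rows = {"train": [], "test": []}
    for seq in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        frames = sorted(str(f) for f in seq.glob("*.png"))
        # consecutive-frame triplets: two input views around a middle target
        triplets = [(frames[i - 1], frames[i], frames[i + 1])
                    for i in range(1, len(frames) - 1)]
        if max_per_seq is not None and len(triplets) > max_per_seq:
            triplets = random.sample(triplets, max_per_seq)  # random sampling
        rows["test" if seq.name in test_seqs else "train"].extend(triplets)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for split, triplets in rows.items():
        with open(out / f"{split}.csv", "w", newline="") as f:
            csv.writer(f).writerows(triplets)
```

Each CSV row then holds one training example: two input frames and the target frame between them.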

3. Train & Test

  • For testing, download the pre-trained models [Multi-PIE, KITTI] and unzip them into the folder ./ckpts.
  • Start training or testing as follows:
# train
./ 0 multipie run-multipie  # for Multi-PIE
./ 0 kitti run-kitti  # for KITTI

# test
./ 0 multipie run-multipie res-multipie  # for Multi-PIE 
./ 0 kitti run-kitti res-kitti  # for KITTI

4. Demo

  • We have provided some samples in ./demo/multipie and ./demo/kitti for inference.
  • Start inference as follows:
# for Multi-PIE
./ 0 multipie run-multipie ./demo/multipie 15_input_l.png 15_input_r.png 15_result
./ 0 multipie run-multipie ./demo/multipie 30_input_l.png 30_input_r.png 30_result
./ 0 multipie run-multipie ./demo/multipie 45_input_l.png 45_input_r.png 45_result

# for KITTI
./ 0 kitti run-kitti ./demo/kitti 09_input_l.png 09_input_r.png 09_result
./ 0 kitti run-kitti ./demo/kitti 10_input_l.png 10_input_r.png 10_result


Please cite the following paper if this repository helps your research:

@article{liu2021deep,
  title={Deep View Synthesis via Self-Consistent Generative Network},
  author={Liu, Zhuoman and Jia, Wei and Yang, Ming and Luo, Peiyao and Guo, Yong and Tan, Mingkui},
  journal={IEEE Transactions on Multimedia},
  year={2021}
}

Deep View Synthesis via Self-Consistent Generative Networks (TMM 2021)







