Synthetic2Realistic

This repository implements the training and testing of T2Net for "T2Net: Synthetic-to-Realistic Translation for Depth Estimation Tasks" by Chuanxia Zheng, Tat-Jen Cham and Jianfei Cai at NTU. A video is available on YouTube. The repository offers a PyTorch implementation of the paper.

  • Outdoor Translation

  • Indoor Translation

  • Extension (WS-GAN, unpaired Image-to-Image Translation, horse2zebra)

This repository can be used for training and testing of:

  • Unpaired image-to-image translation
  • Single-image depth estimation

Getting Started

Installation

This code was tested with PyTorch 0.4.0, CUDA 8.0, Python 3.6 and Ubuntu 16.04.

  • Install the Python libraries visdom and dominate:
pip install visdom dominate
  • Clone this repo:
git clone https://github.com/lyndonzheng/Synthetic2Realistic
cd Synthetic2Realistic

Datasets

The indoor synthetic dataset is rendered from SUNCG and the indoor realistic dataset comes from NYUv2. The outdoor synthetic dataset is vKITTI and the outdoor realistic dataset is KITTI.
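
The training and testing commands below point at plain-text split files (the datasplit folder holds examples). As a minimal sketch, assuming each .txt file lists one image path per line, with source images and labels aligned line by line (an assumption, not a format the repo documents), such a list could be loaded like this:

```python
# Hedged sketch: load a datasplit list such as trainA_SYN.txt.
# Assumption (not confirmed by the repo): one image path per line,
# with image and depth-label files aligned line by line.
from PIL import Image

def read_split(list_path):
    """Return the non-empty paths listed in a split .txt file."""
    with open(list_path) as f:
        return [line.strip() for line in f if line.strip()]

def load_pair(img_list, lab_list, i):
    """Load the i-th (image, depth label) pair from two aligned lists."""
    img = Image.open(img_list[i]).convert('RGB')
    lab = Image.open(lab_list[i])  # depth maps are single-channel
    return img, lab

# imgs = read_split('trainA_SYN.txt'); labs = read_split('trainB_SYN.txt')
```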

Training

Warning: the input sizes need to be multiples of 64, and the feature GAN model needs to be changed for different scales. A padding workaround is sketched below.
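
If your images do not already satisfy this constraint, one simple workaround is to zero-pad the input before the forward pass. A minimal sketch in standard PyTorch (the padding scheme is an illustrative choice, not this repo's own preprocessing):

```python
import torch
import torch.nn.functional as F

def pad_to_multiple(x, multiple=64):
    """Zero-pad a (N, C, H, W) tensor on its bottom/right edges so that
    both spatial dimensions become multiples of `multiple`."""
    h, w = x.shape[-2:]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    return F.pad(x, (0, pad_w, 0, pad_h))  # (left, right, top, bottom)

x = torch.randn(1, 3, 375, 1242)   # a KITTI-like resolution
print(pad_to_multiple(x).shape)    # torch.Size([1, 3, 384, 1280])
```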

  • Train a model with multi-domain datasets:
python train.py --name Outdoor_nyu_wsupervised --model wsupervised
--img_source_file /dataset/Image2Depth31_KITTI/trainA_SYN.txt
--img_target_file /dataset/Image2Depth31_KITTI/trainA.txt
--lab_source_file /dataset/Image2Depth31_KITTI/trainB_SYN.txt
--lab_target_file /dataset/Image2Depth31_KITTI/trainB.txt
--shuffle --flip --rotation
  • To view training results and loss plots, run python -m visdom.server and open the URL http://localhost:8097 in your browser.
  • Training results are saved under the checkpoints folder. More training options can be found in options. A sketch of the overall training objective follows this list.
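
For orientation, the paper's weakly supervised setup trains a translation network (synthetic to realistic) jointly with the depth task network: the GAN loss comes from real target-domain images, while depth supervision exists only on the (translated) synthetic side. The sketch below paraphrases that objective with tiny stand-in modules; all names and loss weights here are illustrative, not the repo's actual code (the real architectures live under model/):

```python
import torch
import torch.nn as nn

# Tiny stand-in modules, for illustration only.
G_trans = nn.Conv2d(3, 3, 3, padding=1)   # synthetic -> realistic translator
T_depth = nn.Conv2d(3, 1, 3, padding=1)   # task (depth) network
D_real  = nn.Conv2d(3, 1, 3, padding=1)   # discriminator on the realistic domain
gan, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(x_syn, d_syn, x_real, lam_task=100.0, lam_smooth=0.1):
    fake_real = G_trans(x_syn)                     # translate the synthetic image
    d_out = D_real(fake_real)
    loss_gan = gan(d_out, torch.ones_like(d_out))  # translated images should fool D
    loss_task = l1(T_depth(fake_real), d_syn)      # depth GT exists only for synthetic data
    pred_real = T_depth(x_real)                    # real images get a smoothness prior only
    loss_smooth = (pred_real[..., 1:] - pred_real[..., :-1]).abs().mean()
    return loss_gan + lam_task * loss_task + lam_smooth * loss_smooth

x_syn, x_real = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
d_syn = torch.randn(1, 1, 64, 64)
print(train_step(x_syn, d_syn, x_real))
```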

Testing

  • Test the model:
python test.py --name Outdoor_nyu_wsupervised --model test
--img_source_file /dataset/Image2Depth31_KITTI/testA_SYN80
--img_target_file /dataset/Image2Depth31_KITTI/testA

Estimation

  • Depth estimation; the evaluation code is based on monodepth:
python evaluation.py --split eigen --file_path ./datasplit/
--gt_path "your path"/KITTI/raw_data_KITTI/
--predicted_depth_path "your path"/result/KITTI/predicted_depth_vk
--garg_crop
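
The evaluation follows monodepth's protocol on the Eigen split; --garg_crop restricts evaluation to the crop proposed by Garg et al. The standard error measures reported by that protocol can be computed as in this sketch (metric definitions only; the actual script's I/O and interpolation steps are omitted):

```python
import numpy as np

def compute_errors(gt, pred):
    """Standard depth metrics on valid (positive) depth values."""
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()          # accuracy: delta < 1.25
    a2 = (thresh < 1.25 ** 2).mean()
    a3 = (thresh < 1.25 ** 3).mean()
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean((gt - pred) ** 2 / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3
```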

Trained Models

The pretrained model for the indoor scene, weakly supervised.

The pretrained model for the outdoor scene, weakly supervised.

Note: the original model in the paper was trained on a single GPU; this pretrained model is the multi-GPU version.

Citation

If you use this code for your research, please cite our paper:

@inproceedings{zheng2018t2net,
  title={T2Net: Synthetic-to-Realistic Translation for Solving Single-Image Depth Estimation Tasks},
  author={Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={767--783},
  year={2018}
}

Acknowledgments

Code is inspired by Pytorch-CycleGAN.
