# GASDA

This is the PyTorch implementation of our CVPR 2019 paper:

S. Zhao, H. Fu, M. Gong and D. Tao. Geometry-Aware Symmetric Domain Adaptation for Monocular Depth Estimation. CVPR 2019. PAPER | POSTER

## Framework

*(Framework overview figure from the paper.)*

## Environment

  1. Python 3.6
  2. PyTorch 0.4.1
  3. CUDA 9.0
  4. Ubuntu 16.04
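
For reference, a minimal setup sketch assuming a conda-based workflow; the environment name and the torchvision pin are assumptions, and only the versions in the list above come from this README:

```bash
# Sketch only: the "gasda" env name and the torchvision pin are assumptions.
conda create -n gasda python=3.6 -y
conda activate gasda
pip install torch==0.4.1        # PyTorch version from the list above
pip install torchvision==0.2.1  # assumed companion release for torch 0.4.1
```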

## Datasets

- KITTI
- vKITTI

Prepare the two datasets according to the datalists (the `*.txt` files in `datasets/`); an illustrative layout is sketched below.
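
The authoritative paths are whatever the `datasets/*.txt` datalists reference; the roots below are purely hypothetical:

```bash
# Illustrative only; match the paths in datasets/*.txt.
# data/
# ├── kitti/    # KITTI raw sequences (hypothetical root)
# └── vkitti/   # Virtual KITTI RGB + depth (hypothetical root)
mkdir -p data/kitti data/vkitti   # hypothetical roots, adjust to your datalists
```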

## Training (Tesla V100, 16GB)

- Train F_t:

```bash
python train.py --model ft --gpu_ids 0 --batchSize 8 --loadSize 256 1024 --g_tgt_premodel ./cyclegan/G_Tgt.pth
```

- Train F_s:

```bash
python train.py --model fs --gpu_ids 0 --batchSize 8 --loadSize 256 1024 --g_src_premodel ./cyclegan/G_Src.pth
```

- Train GASDA using the pretrained F_s, F_t and CycleGAN:

```bash
python train.py --freeze_bn --freeze_in --model gasda --gpu_ids 0 --batchSize 3 --loadSize 192 640 --g_src_premodel ./cyclegan/G_Src.pth --g_tgt_premodel ./cyclegan/G_Tgt.pth --d_src_premodel ./cyclegan/D_Src.pth --d_tgt_premodel ./cyclegan/D_Tgt.pth --t_depth_premodel ./checkpoints/vkitti2kitti_ft_bn/**_net_G_Depth_T.pth --s_depth_premodel ./checkpoints/vkitti2kitti_fs_bn/**_net_G_Depth_S.pth
```

Note: this training strategy differs from the one described in our paper. The full pipeline is sketched below.
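
For convenience, the three stages can be chained in one script. This is a sketch, not a script shipped with the repo; the `**` epoch placeholders are kept as in the commands above and must be replaced with the checkpoint epochs you intend to load:

```bash
#!/usr/bin/env bash
# Sketch: chains the three training stages documented above (not an official script).
set -e

# Stage 1: train the target-domain depth network F_t.
python train.py --model ft --gpu_ids 0 --batchSize 8 --loadSize 256 1024 \
  --g_tgt_premodel ./cyclegan/G_Tgt.pth

# Stage 2: train the source-domain depth network F_s.
python train.py --model fs --gpu_ids 0 --batchSize 8 --loadSize 256 1024 \
  --g_src_premodel ./cyclegan/G_Src.pth

# Stage 3: joint GASDA training from the pretrained F_s, F_t and CycleGAN.
# Replace ** with the epochs of the checkpoints produced by stages 1 and 2.
python train.py --freeze_bn --freeze_in --model gasda --gpu_ids 0 --batchSize 3 --loadSize 192 640 \
  --g_src_premodel ./cyclegan/G_Src.pth --g_tgt_premodel ./cyclegan/G_Tgt.pth \
  --d_src_premodel ./cyclegan/D_Src.pth --d_tgt_premodel ./cyclegan/D_Tgt.pth \
  --t_depth_premodel ./checkpoints/vkitti2kitti_ft_bn/**_net_G_Depth_T.pth \
  --s_depth_premodel ./checkpoints/vkitti2kitti_fs_bn/**_net_G_Depth_S.pth
```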

## Test

Pretrained MODELS.

## Citation

If you use this code for your research, please cite our paper:

```
@inproceedings{zhao2019geometry,
  title={Geometry-Aware Symmetric Domain Adaptation for Monocular Depth Estimation},
  author={Zhao, Shanshan and Fu, Huan and Gong, Mingming and Tao, Dacheng},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={9788--9798},
  year={2019}
}
```

## Acknowledgments

Code is inspired by T^2Net and CycleGAN.

## Contact

Shanshan Zhao: szha4333@uni.sydney.edu.au or sshan.zhao00@gmail.com
