ChiWeiHsiao/DeepVO-pytorch

Usage

  • Download KITTI data and our pretrained model
    • The shell script KITTI/downloader.sh can be used to download the KITTI images and the pretrained model
      • the script keeps only the left camera color images (the image_03 folder) and deletes the other data
      • the downloaded images will be placed in KITTI/images/00/, KITTI/images/01/, ...
      • the images provided by KITTI are already rectified
      • a direct download link for the pretrained model is also provided
    • Download the ground truth poses from KITTI Visual Odometry
      • you need to enter your email to request the pose data here
      • place the ground truth poses in KITTI/pose_GT/
  • Run preprocess.py to
    • remove unused images based on the readme file in the KITTI devkit
    • convert the ground truth poses from KITTI (12 floats per frame, a flattened 3x4 [R|t] matrix) into 6 floats (Euler angles + translation)
    • save the transformed ground truth poses as .npy files (a sketch of this conversion appears after this list)
  • Pretrained weights of FlowNet (the CNN part) can be downloaded here
    • note that this pretrained FlowNet model assumes the RGB value range is [-0.5, 0.5] (see the normalization sketch after this list)
    • the code for the CNN layers is adapted from ClementPinard/FlowNetPytorch
  • Specify the paths and change the hyperparameters in params.py
    • If your computational resources are limited, be careful with the following arguments (see the DataLoader sketch after this list):
    • batch_size: choose a batch size that fits your GPU memory
    • img_w, img_h: downsample the images to fit into GPU memory
    • pin_mem: accelerates data exchange between host memory and the GPU; if your RAM is not large enough, set this to False
  • Run main.py to train the model
    • the trained model and optimizer will be saved in models/
    • the records will be saved in records/
  • Run test.py to output the predicted poses
    • results are written to result/
    • file names look like out_00.txt
  • Run visualize.py to visualize the predicted route (a plotting sketch appears after this list)
  • Other files:
    • model.py: the model is defined here
    • data_helper.py: customized PyTorch dataset and sampler
      • the input images are loaded batch by batch (a simplified dataset sketch appears after this list)
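
For reference, the 12-float to 6-float conversion that preprocess.py performs on each ground truth line can be sketched as below. The function name and the exact Euler-angle convention are illustrative assumptions, not the repository's actual code.

    import numpy as np

    def kitti_pose_to_6dof(row):
        """Convert one KITTI ground truth line (12 floats, a row-major 3x4 [R|t])
        into [roll, pitch, yaw, x, y, z]."""
        mat = np.asarray(row, dtype=np.float64).reshape(3, 4)
        R, t = mat[:, :3], mat[:, 3]
        # Euler angles assuming R = Rz(yaw) @ Ry(pitch) @ Rx(roll);
        # the repository may use a different angle order.
        pitch = -np.arcsin(R[2, 0])
        roll = np.arctan2(R[2, 1], R[2, 2])
        yaw = np.arctan2(R[1, 0], R[0, 0])
        return np.concatenate(([roll, pitch, yaw], t))

    poses = np.loadtxt('KITTI/pose_GT/00.txt')  # one 12-float line per frame
    np.save('KITTI/pose_GT/00.npy', np.stack([kitti_pose_to_6dof(p) for p in poses]))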
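
The pretrained FlowNet weights expect inputs roughly in the [-0.5, 0.5] range; one minimal way to produce that range with torchvision is sketched below. The resize dimensions are example values, not the repository's settings.

    from torchvision import transforms

    flownet_input = transforms.Compose([
        transforms.Resize((184, 608)),   # (img_h, img_w); example values only
        transforms.ToTensor(),           # scales RGB values to [0, 1]
        transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[1.0, 1.0, 1.0]),  # shift to [-0.5, 0.5]
    ])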
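
The memory-related arguments in params.py map onto a standard PyTorch DataLoader roughly as follows. The variable names mirror the README, not the actual file layout, and the loader settings are illustrative.

    from torch.utils.data import DataLoader

    batch_size = 8            # reduce this first if you hit CUDA out-of-memory errors
    img_w, img_h = 608, 184   # smaller images mean less GPU memory per batch
    pin_mem = True            # set to False if host RAM is limited

    def make_loader(dataset):
        # pin_memory speeds up host-to-GPU transfers at the cost of pinned host RAM
        return DataLoader(dataset, batch_size=batch_size, shuffle=True,
                          pin_memory=pin_mem, num_workers=4)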
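
A much-simplified sketch of the idea behind data_helper.py is shown below: consecutive frames are stacked along the channel axis and paired with a relative-pose target. Class and field names are illustrative, and the real loader builds longer sub-sequences for the recurrent part rather than single pairs.

    import glob
    import numpy as np
    import torch
    from PIL import Image
    from torch.utils.data import Dataset

    class PairedFrameDataset(Dataset):
        """Illustrative dataset: one sample per consecutive image pair."""

        def __init__(self, image_dir, pose_npy, transform):
            self.paths = sorted(glob.glob('{}/*.png'.format(image_dir)))
            self.poses = np.load(pose_npy)      # 6-DoF pose per frame from preprocess.py
            self.transform = transform

        def __len__(self):
            return len(self.paths) - 1          # one sample per consecutive pair

        def __getitem__(self, idx):
            img1 = self.transform(Image.open(self.paths[idx]))
            img2 = self.transform(Image.open(self.paths[idx + 1]))
            pair = torch.cat([img1, img2], dim=0)   # 6-channel input for the CNN
            target = torch.from_numpy(
                (self.poses[idx + 1] - self.poses[idx]).astype(np.float32))
            return pair, target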
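
The predicted routes in result/ can also be plotted directly with matplotlib. The sketch below assumes each line of out_00.txt holds a whitespace-separated 6-DoF pose (three Euler angles followed by x, y, z); check test.py for the actual output format.

    import numpy as np
    import matplotlib.pyplot as plt

    poses = np.loadtxt('result/out_00.txt')   # adjust the delimiter if needed
    x, z = poses[:, 3], poses[:, 5]           # KITTI routes are usually drawn in the x-z plane
    plt.plot(x, z, label='predicted')
    plt.xlabel('x (m)')
    plt.ylabel('z (m)')
    plt.legend()
    plt.axis('equal')
    plt.show()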

Download trained model

Provided by alexart13.

Required packages

  • pytorch 0.4.0
  • torchvision 0.2.1
  • numpy
  • pandas
  • pillow
  • matplotlib
  • glob

Result

  • Training Sequences
  • Testing Sequence

Acknowledgments

  • Thanks to alexart13 for providing the trained model and the corrected code for processing the ground truth rotations.

References

  • paper
    • DeepVO: Towards End-to-End Visual Odometry with Deep Recurrent Convolutional Neural Networks
    • Sen Wang, Ronald Clark, Hongkai Wen, Niki Trigoni
    • ICRA 2017
      @inproceedings{wang2017deepvo,
      title={Deepvo: Towards end-to-end visual odometry with deep recurrent convolutional neural networks},
      author={Wang, Sen and Clark, Ronald and Wen, Hongkai and Trigoni, Niki},
      booktitle={Robotics and Automation (ICRA), 2017 IEEE International Conference on},
      pages={2043--2050},
      year={2017},
      organization={IEEE}
      }
      

About

PyTorch Implementation of DeepVO
