Repository contents

data/
DataReader.lua
LICENSE.md
README.md
SpatialSmoothTerm.lua
bms_criterion.lua
bss_criterion.lua
createModelBasicMultiscale.lua
createModelBasicSinglescale.lua
createModelMultiscaleTest.lua
createModelSiameseMultiscale.lua
createModelSiameseSinglescale.lua
createModelSinglescaleTest.lua
gModuleShare.lua
opts.lua
run_inference.lua
run_train.lua
sms_criterion.lua
sss_criterion.lua
weight_init.lua


Disparity estimation network

A CNN-based monocular disparity (inverse depth) estimation network for surgical videos collected during da Vinci surgery. The source code and data accompany a short report presented at the Hamlyn Symposium on Medical Robotics 2017.

If you use the code or data, please cite the following:

Ye, M., Johns, E., Handa, A., Zhang, L., Pratt, P. and Yang, G.Z. 
Self-Supervised Siamese Learning on Stereo Image Pairs for Depth 
Estimation in Robotic Surgery. Hamlyn Symposium on Medical Robotics. 2017.

You can download our data (9.3 GB) and pretrained models, and place them in the "data" and "trained" folders, respectively.
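The expected folder layout can be prepared before extracting the downloads. A minimal sketch (the archive names of the downloads are not specified here, so only the folder creation is shown):

```shell
# Create the two folders the scripts expect, relative to the repository root.
# Extract the downloaded data archive into data/ and the pretrained
# models into trained/.
mkdir -p data trained
```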

Prerequisites

Torch

Torch-autograd

gvnn

Torch-colormap (for visualisation only)
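A sketch of installing the Lua dependencies, assuming Torch is already installed (e.g. via the torch/distro install script) and its `luarocks` is on your PATH. The exact rockspec filenames for gvnn and torch-colormap may differ from what is shown; check each project's README.

```shell
# Torch-autograd is available directly from LuaRocks.
luarocks install autograd

# gvnn and torch-colormap are typically built from a local clone;
# the rockspec filenames below are assumptions.
git clone https://github.com/ankurhanda/gvnn.git
cd gvnn && luarocks make gvnn-scm-1.rockspec && cd ..
```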

License

This code is distributed under the BSD License.

Notes

  1. The autoencoder model in this implementation is slightly different from the one in the report. Certain layers have been removed to reduce memory usage, and skip connections and multiscale training have been added.

  2. Please adjust the mini-batch size according to your specific GPU memory.

  3. This implementation has been tested on Ubuntu.

  4. Please see run_train.lua and run_inference.lua for example usage.
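As noted above, the entry points are run_train.lua and run_inference.lua. A hypothetical invocation from the repository root (the available command-line options live in opts.lua, so consult it for the actual flag names before passing any):

```shell
# Train the network, then run inference with the resulting model.
# Both scripts read their options from opts.lua defaults unless overridden.
th run_train.lua
th run_inference.lua
```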
