Deep Depth-from-Defocus (Deep-DFD)

In progress... We are still uploading models and improving the code for easier usage.

Network Architecture

This code implements the Dense Deep Depth Estimation Network (D3-Net) in PyTorch, from the paper:

On regression losses for deep depth estimation, Marcela Carvalho, Bertrand Le Saux, Pauline Trouvé-Peloux, Andrés Almansa, Frédéric Champagnat, ICIP 2018.

Fig.1 - D3-Net architecture.

If you use this work for your projects, please take the time to cite our ICIP paper:

@inproceedings{Carvalho2018icip,
  title={On regression losses for deep depth estimation},
  author={Marcela Carvalho and Bertrand {Le Saux} and Pauline Trouv\'{e}-Peloux and Andr\'{e}s Almansa and Fr\'{e}d\'{e}ric Champagnat},
  booktitle={ICIP},
  year={2018},
  publisher={IEEE}
}

Indoor and outdoor DFD dataset

We also publish the dataset for deep depth-from-defocus estimation, created using a DSLR camera and an Xtion depth sensor (figure 2). This dataset was presented in:

Deep Depth from Defocus: how can defocus blur improve 3D estimation using dense neural networks?, Marcela Carvalho, Bertrand Le Saux, Pauline Trouvé-Peloux, Andrés Almansa, Frédéric Champagnat, 3DRW ECCV Workshop 2018.

The dfd_indoor dataset contains 110 images for training and 29 images for testing. The dfd_outdoor dataset contains 34 images for testing; no ground truth is provided for the outdoor set, as the depth sensor only works on indoor scenes.
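As a starting point for loading the data, here is a minimal Python sketch that pairs RGB images with their depth maps by filename stem. The `rgb/` and `depth/` folder names are assumptions for illustration only; check the actual layout inside `dfd_datasets` before using it.

```python
from pathlib import Path

def list_rgb_depth_pairs(root):
    """Pair each RGB image with its depth map by shared filename stem.

    Assumes a hypothetical layout such as:
        root/rgb/0001.png, ...  and  root/depth/0001.png, ...
    (verify against the real dfd_indoor folder structure).
    """
    root = Path(root)
    depth_by_stem = {p.stem: p for p in (root / "depth").glob("*.png")}
    pairs = []
    for rgb in sorted((root / "rgb").glob("*.png")):
        if rgb.stem in depth_by_stem:
            # Keep only images that have a matching depth map.
            pairs.append((rgb, depth_by_stem[rgb.stem]))
    return pairs
```

Such a pairing list can then be wrapped in a `torch.utils.data.Dataset` for training.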

Fig.2 - Platform to acquire defocused images and corresponding depth maps.


Generate Synthetic Defocused Data

In generate_blurred_dataset.m, change lines 14 to 18 to the corresponding paths on your computer and run the script.
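The MATLAB script above is the reference implementation. As a rough illustration of the defocus model behind synthetic blurring, here is a thin-lens circle-of-confusion sketch in Python; the lens parameters in the example are placeholders, not the settings used in the paper.

```python
def coc_diameter(depth, focus_dist, focal_len, f_number):
    """Circle-of-confusion diameter from the thin-lens model:
        c = A * f * |d - d_f| / (d * (d_f - f)),  with aperture A = f / N.
    All distances are in metres; the result is in metres.
    Points at the focus distance d_f yield c = 0 (sharp)."""
    aperture = focal_len / f_number
    return aperture * focal_len * abs(depth - focus_dist) / (
        depth * (focus_dist - focal_len)
    )

# Example (placeholder values): a 50 mm lens at f/2 focused at 2 m.
for d in (1.0, 2.0, 4.0, 8.0):
    print(d, coc_diameter(d, focus_dist=2.0, focal_len=0.05, f_number=2.0))
```

The blur kernel radius at each pixel then follows from this diameter scaled to sensor pixels, which is the idea the synthetic-data generation builds on.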

If you use this work for your projects, please take the time to cite our ECCV Workshop paper:

@inproceedings{Carvalho2018eccv3drw,
  title={Deep Depth from Defocus: how can defocus blur improve {3D} estimation using dense neural networks?},
  author={Marcela Carvalho and Bertrand {Le Saux} and Pauline Trouv\'{e}-Peloux and Andr\'{e}s Almansa and Fr\'{e}d\'{e}ric Champagnat},
  booktitle={3DRW ECCV Workshop},
  year={2018},
  publisher={IEEE}
}

Requirements

Depth Estimation

Setup

Requires Python 3.6 with pip and the following libraries:

```bash
# PyTorch 0.4.0
conda install pytorch torchvision -c pytorch
# Visdom
pip install visdom
# Jupyter Notebook
pip install notebook
```
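A quick sanity check for the environment (the version targets come from the Setup section above; this snippet is only a convenience, not part of the repo):

```python
import sys

# The repo targets Python 3.6+ and PyTorch 0.4.0 (see Setup above).
assert sys.version_info >= (3, 6), "Python 3.6 or newer is required"

try:
    import torch
    print("PyTorch", torch.__version__)
except ImportError:
    # Not installed yet: follow the conda command from the Setup section.
    print("PyTorch not found - run: conda install pytorch torchvision -c pytorch")
```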

Usage


[To be added]

License

The code (scripts and Jupyter notebooks) is released under the GPLv3 license for non-commercial and research purposes only. For commercial use, please contact the authors.
