dwHou/EMDC-PyTorch

PyTorch version of the paper 'Learning an Efficient Multimodal Depth Completion Model' (ECCVW 2022)

Winning Solution in the MIPI 2022 Challenge on RGB+ToF Depth Completion

Requirements

Required

  • pytorch
  • numpy
  • pillow
  • opencv-python-headless
  • scipy
  • matplotlib
  • torch_ema

Optional

  • tqdm
  • tensorboardX
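A quick way to verify that the dependencies above are available is to probe for their import names (a convenience sketch, not part of the repository; note that pillow imports as PIL and opencv-python-headless as cv2):

```python
import importlib.util

# Import names of the packages listed above.
REQUIRED = ["torch", "numpy", "PIL", "cv2", "scipy", "matplotlib", "torch_ema"]
OPTIONAL = ["tqdm", "tensorboardX"]

def missing_modules(names):
    """Return the subset of module names that cannot be found."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    print("missing required:", missing_modules(REQUIRED))
    print("missing optional:", missing_modules(OPTIONAL))
```

`find_spec` only locates a module without importing it, so the check is fast and does not trigger any package-level side effects.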

Pre-trained models

Download the pretrained models from Google Drive

Quickstart

Training

  1. Step 1: Download the training data and the fixed validation data from Google Drive and unzip them.

  2. Step 2:

    • Train set: Record the paths of the data pairs in a text file like this and assign the file location to the variable 'train_txt' in ./utils/dataset.py. Also modify the data directory path in the member function 'self._load_png'.
    • Val set: Process it in the same way.
    • Note that the 'BeachApartmentInterior_My_ir' scene folder is removed from the training set, as it is partitioned into the fixed validation set.
  3. Step 3:

    bash train.sh
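The exact layout of the pair-list text file in Step 2 is not reproduced here, so the following is a hedged sketch of one plausible format: one sample per line, with the RGB and depth paths separated by whitespace (the format, column order, and function name are assumptions; match them to the actual file and to ./utils/dataset.py):

```python
def read_pairs(txt_path):
    """Parse a pair-list text file (assumed format): each non-empty
    line holds 'rgb_path depth_path' separated by whitespace."""
    pairs = []
    with open(txt_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:  # skip blank or malformed lines
                pairs.append((parts[0], parts[1]))
    return pairs
```

A dataset's __init__ could then call read_pairs(train_txt) once and index the returned list in __getitem__.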

Test

  1. Step 1:

    Download the official test data and put it in ./Submit.

    Download the pretrained model and put it in ./checkpoints.

  2. Step 2:

    cd ./Submit
    cp ../utils/define_model.py ./
    cp -R ../models ./
    bash test.sh 
  3. Step 3: Check the results under ./Submit/results.
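After test.sh finishes, a minimal sanity check that the results folder was populated could look like this (stdlib only; the .png extension for the output depth maps is an assumption):

```python
from pathlib import Path

def list_results(results_dir, ext=".png"):
    """Return the names of predicted files under results_dir, sorted."""
    return sorted(p.name for p in Path(results_dir).glob(f"*{ext}"))

if __name__ == "__main__":
    outputs = list_results("./Submit/results")
    print(f"{len(outputs)} result files found")
```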

Citation

If you find our code useful for your research, please consider citing our paper: (TBD)

[1] Dewang Hou, Yuanyuan Du, Kai Zhao, and Yang Zhao, "Learning an Efficient Multimodal Depth Completion Model", 1st MIPI: Mobile Intelligent Photography & Imaging workshop and challenge on RGB+ToF depth completion in conjunction with ECCV 2022. [PDF] [arXiv]

@inproceedings{hou2023learning,
  title={Learning an Efficient Multimodal Depth Completion Model},
  author={Hou, Dewang and Du, Yuanyuan and Zhao, Kai and Zhao, Yang},
  booktitle={Computer Vision--ECCV 2022 Workshops: Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part V},
  pages={161--174},
  year={2023},
  organization={Springer}
}
