PyTorch implementation of the DeepIM framework.
DeepIM is a deep neural network for 6D pose matching. Given an initial pose estimate, the network iteratively refines the pose by matching the rendered image of the object against the observed image. It is trained to predict a relative pose transformation using an untangled representation of 3D location and 3D orientation, together with an iterative training process.
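For reference, below is a minimal sketch of the untangled pose update described in the paper: the predicted relative rotation is applied around the object center in the camera frame, and the predicted translation is expressed as a shift of the projected object center plus a log depth ratio, so it is decoupled from the absolute depth and focal length. The function and variable names are illustrative only and are not taken from this codebase.

import numpy as np

def apply_untangled_update(R_src, t_src, R_delta, v, fx, fy):
    # R_src (3x3), t_src = (x, y, z): current object pose in the camera frame
    # R_delta (3x3): predicted relative rotation around the object center
    # v = (vx, vy, vz): predicted untangled translation
    # fx, fy: camera focal lengths in pixels
    x, y, z = t_src
    vx, vy, vz = v
    z_new = z / np.exp(vz)              # vz = log(z_src / z_tgt)
    x_new = (vx / fx + x / z) * z_new   # vx = fx * (x_tgt/z_tgt - x_src/z_src)
    y_new = (vy / fy + y / z) * z_new
    R_new = R_delta @ R_src             # rotation update, decoupled from translation
    return R_new, np.array([x_new, y_new, z_new])

During iterative refinement, the updated pose is re-rendered and fed back into the network for the next matching step.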
DeepIM is released under the MIT License (refer to the LICENSE file for details).
If you find DeepIM useful in your research, please consider citing:
@inproceedings{li2017deepim,
  author    = {Yi Li and Gu Wang and Xiangyang Ji and Yu Xiang and Dieter Fox},
  title     = {DeepIM: Deep Iterative Matching for 6D Pose Estimation},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2018}
}
- Build the conda environment
conda env create -f environment.yml
conda activate deepim
- Install the CuPy build that matches your CUDA version (CUDA 10.1 in this setup)
pip install cupy-cuda101
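To verify that CuPy was built against the intended CUDA toolkit, a quick sanity check (not part of the original instructions):
python -c "import cupy as cp; print(cp.cuda.runtime.runtimeGetVersion()); print(cp.arange(10).sum())"
The first line should report the CUDA runtime version (10010 for CUDA 10.1), and the array sum confirms a small kernel runs on the GPU.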
- Build YCB_Renderer
cd ycb_render
sudo apt-get install libassimp-dev
pip install -r requirement.txt
python setup.py develop
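If the build succeeds, an import check can confirm the renderer extension is usable; the module and class names below are assumptions based on the typical ycb_render layout, so check ycb_render/setup.py if the import fails:
python -c "from ycb_renderer import YCBRenderer; print('YCB_Renderer import OK')"
Run this from inside the ycb_render directory so the module is on the Python path.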
- Ubuntu 16.04
- PyTorch 1.7.1
- CUDA 10.1
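To confirm the local setup matches these versions (a simple check, not part of the original instructions):
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
This should report 1.7.1, 10.1, and True when PyTorch, CUDA, and the GPU driver are configured correctly.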
Download the model here and extract it to DeepIM-Honda/output
The following command runs pose estimation on the images in DeepIM-Honda/data/real_camera_A
and stores the visualization results in the vis folder:
sh test_auto.sh