# DOPE-Uncertainty
This is the code base for ensemble-based uncertainty quantification in deep object pose estimation (DOPE). For more details, see our project website, ICRA 2021 paper, and video.
## Dependencies
- The ADD (average distance) metric is computed by `visii`, so `visii` needs to be installed.
- Download the neural network weights (`.pth` files) and save them to the `content` folder, replacing the proxy files. Note that we currently only provide the weights and models (already in `content/models/grocery`) for the `Corn` object.
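The ADD metric mentioned above is a standard pose-estimation error: the model's 3D points are transformed by both the ground-truth and the estimated pose, and the point-wise distances are averaged. As a minimal NumPy sketch of that standard definition (independent of `visii`; the function name and signature are illustrative, not this repo's API):

```python
import numpy as np

def add_metric(model_points, R_gt, t_gt, R_est, t_est):
    """Average distance (ADD) between the object's model points
    transformed by the ground-truth pose and by the estimated pose.

    model_points : (N, 3) array of 3D points on the object model
    R_*          : (3, 3) rotation matrices
    t_*          : (3,) translation vectors
    """
    pts_gt = model_points @ R_gt.T + t_gt    # points under ground-truth pose
    pts_est = model_points @ R_est.T + t_est  # points under estimated pose
    # mean Euclidean distance between corresponding points
    return float(np.linalg.norm(pts_gt - pts_est, axis=1).mean())
```

An identical pose pair gives an ADD of zero, and a pure translation offset gives exactly the offset's magnitude.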
## Running Examples
We provide demo images in `uncertainty_quantification/output/test` for testing and demonstration. These demo images were generated with the NVISII renderer. There are two example scripts:
- `uncertainty_quantification/run.py` requires ground-truth poses for its statistics. The script first performs pose estimation based on DOPE (you do not need to install DOPE or ROS) and then performs post-inference uncertainty quantification. It generates its output in `uncertainty_quantification/output/test_result`, including inference results, a confidence plot, the most-confident frame selection, uncertainty quantification correlation analysis, etc.
- `uncertainty_quantification/run_realworld.py` is similar, but does not need ground-truth poses. It generates its output in `uncertainty_quantification/output/test_result_realworld`. This script corresponds to the real-world grasping experiment in our paper, where no ground-truth poses are available.
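The real-world script can quantify uncertainty without ground truth because an ensemble's confidence can be read off from how much its members disagree with each other. One simple disagreement proxy is the average pairwise distance between the translation estimates of the ensemble members; the sketch below illustrates that idea only (names and signature are hypothetical, not this repo's actual API):

```python
import itertools
import numpy as np

def ensemble_translation_disagreement(translations):
    """Average pairwise Euclidean distance between the translation
    estimates of an ensemble -- a simple scalar uncertainty proxy
    that needs no ground-truth pose.

    translations : (M, 3) array, one row per ensemble member
    """
    pairs = itertools.combinations(range(len(translations)), 2)
    dists = [np.linalg.norm(translations[i] - translations[j])
             for i, j in pairs]
    return float(np.mean(dists))
```

When the members agree exactly, the proxy is zero; larger values indicate frames where the ensemble is less certain about the object's pose.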