Implementation for ODAM: Object Detection, Association, and Mapping using Posed RGB Video
- Install the required packages:

  ```
  conda env create -f environment.yml
  ```
- Copy the ScanNet data to `./data`.
- Download the pretrained model from here and place it in `./experiments/`. Also place the `scannet_img` file in `./data/ScanNet/`.
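Putting the steps above together, the repository should look roughly like this (a sketch assembled from the paths mentioned in these instructions, not a verified listing):

```
.
├── configs/
│   └── detr_scan_net.yaml
├── data/
│   └── ScanNet/
│       └── scannet_img
├── experiments/        # pretrained model goes here
└── src/
    └── scripts/
        └── run_processor.py
```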
- Run the full pipeline from the repository root:

  ```
  export PYTHONPATH=$PYTHONPATH:$PWD
  python src/scripts/run_processor.py --config_path ./configs/detr_scan_net.yaml --no_code --use_prior --out_dir ./result/e2e --representation super_quadric
  ```
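The `--representation super_quadric` flag selects a superquadric object representation. As a rough illustration only (this is the standard superquadric inside-outside function, not ODAM's actual code; the function name and parameter defaults are made up for this sketch):

```python
def superquadric_io(point, scale=(1.0, 1.0, 1.0), eps1=1.0, eps2=1.0):
    """Superquadric inside-outside function F (illustrative sketch).

    F == 1 on the surface, F < 1 inside, F > 1 outside.
    point: (x, y, z) in the object's canonical frame.
    scale: per-axis extents (a1, a2, a3).
    eps1, eps2: shape exponents; (1, 1) gives an ellipsoid,
    values near 0 approach a box.
    """
    # Normalize by the axis extents; absolute values keep
    # fractional exponents real-valued.
    x, y, z = (abs(point[i]) / scale[i] for i in range(3))
    xy = (x ** (2.0 / eps2) + y ** (2.0 / eps2)) ** (eps2 / eps1)
    return xy + z ** (2.0 / eps1)
```

For example, with unit scale and `eps1 = eps2 = 1` the function reduces to `x**2 + y**2 + z**2`, so points on the unit sphere evaluate to 1.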
Note: The comparison to Vid2CAD reported in the paper does not reflect Vid2CAD's best performance, due to inconsistencies in the representation. See the updated Vid2CAD results for the latest numbers.