This is the official implementation of our 3DV 2022 paper:
HoW-3D: Holistic 3D Wireframe Perception from a Single Image. [paper] [video (5min)] [video (10min)]
Clone the repository:
git clone https://github.com/Wenchao-M/HoW-3D.git
Install dependencies:
conda create -n how python=3.6
conda activate how
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
pip install -r requirement.txt
python setup.py build_ext --inplace
Please download our ABC-HoW dataset here and place it under the 'data/' folder.
Download the pretrained model of HRNet and place it under the 'ckpts/' folder.
We train our model in a four-stage manner. Run the following command to train:
bash train.sh
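For reference, a four-stage schedule could look like the sketch below; the script name 'train.py' and the per-stage config files are hypothetical placeholders, and the actual stages are defined in train.sh.

# Hypothetical sketch of a four-stage schedule; see train.sh for the real one.
CUDA_VISIBLE_DEVICES=0 python train.py --cfg_path configs/config_stage1.yaml
CUDA_VISIBLE_DEVICES=0 python train.py --cfg_path configs/config_stage2.yaml
CUDA_VISIBLE_DEVICES=0 python train.py --cfg_path configs/config_stage3.yaml
CUDA_VISIBLE_DEVICES=0 python train.py --cfg_path configs/config_stage4.yaml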
(Optional) You can download the pretrained model here and place it under the 'ckpts/' folder.
Change the 'resume_dir' in 'config_DSG_eval.yaml' to the path where you saved the weight file.
Change the 'root_dir' in the config files to the path where you saved the data.
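For reference, these two fields could look like the minimal excerpt below; the example values are placeholders, and all other fields in the config files should be left unchanged.

# Excerpt from configs/config_DSG_eval.yaml (values are placeholders).
root_dir: /path/to/ABC-HoW        # where the ABC-HoW dataset was extracted
resume_dir: ckpts/checkpoint.pth  # where the downloaded weight file was saved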
Run the following command to evaluate the performance:
CUDA_VISIBLE_DEVICES=0 python eval.py --cfg_path configs/config_DSG_eval.yaml
The results will be saved in the JSON file 'results/results.json'.
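To inspect the raw predictions, a minimal Python sketch like the one below loads the file without assuming its schema (the exact fields are defined by eval.py):

import json

# Load the saved predictions and print the top-level structure to
# discover which fields are available; the schema is set by eval.py.
with open('results/results.json') as f:
    results = json.load(f)
print(list(results.keys()) if isinstance(results, dict) else (type(results), len(results)))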
We provide evaluation code for our DSG model. Please run it after the testing step above.
Evaluate the 3D junction AP and line sAP:
python evaluation/sap3D_junctions.py --path results/results.json
python evaluation/sap3D_lines.py --path results/results.json
Install open3d-python:
pip install open3d-python
Visualize ground truth:
python visualization/vis_gt.py
Visualize results of our DSG model:
python visualization/vis_dsg.py
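To build a custom viewer on top of the saved predictions, a minimal Open3D sketch for rendering a 3D wireframe as a line set is shown below (assuming a recent Open3D; the junctions and lines here are dummy values, and the real parsing of model outputs is handled by the scripts above):

import numpy as np
import open3d as o3d

# Dummy wireframe: four 3D junctions and four lines given as index pairs
# into the junction array; replace with values parsed from the results.
junctions = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float64)
lines = np.array([[0, 1], [1, 2], [2, 3], [3, 0]], dtype=np.int32)

line_set = o3d.geometry.LineSet()
line_set.points = o3d.utility.Vector3dVector(junctions)
line_set.lines = o3d.utility.Vector2iVector(lines)
line_set.colors = o3d.utility.Vector3dVector([[0.0, 0.4, 0.9]] * len(lines))
o3d.visualization.draw_geometries([line_set])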
If you find our work useful in your research, please consider citing:
@inproceedings{ma2022HoW3D,
  title     = {HoW-3D: Holistic 3D Wireframe Perception from a Single Image},
  author    = {Wenchao Ma and
               Bin Tan and
               Nan Xue and
               Tianfu Wu and
               Xianwei Zheng and
               Gui-Song Xia},
  booktitle = {International Conference on 3D Vision},
  year      = {2022}
}