
ComplexGen: CAD Reconstruction by B-Rep Chain Complex Generation

This is the official implementation of the following paper:

Haoxiang Guo, Shilin Liu, Hao Pan, Yang Liu, Xin Tong, Baining Guo. ComplexGen: CAD Reconstruction by B-Rep Chain Complex Generation. ACM Transactions on Graphics (SIGGRAPH), 41(4), 2022.

Paper | Project Page

Abstract: We view the reconstruction of CAD models in the boundary representation (B-Rep) as the detection of geometric primitives of different orders, i.e., vertices, edges and surface patches, and the correspondence of primitives, which are holistically modeled as a chain complex, and show that by modeling such comprehensive structures more complete and regularized reconstructions can be achieved. We solve the complex generation problem in two steps. First, we propose a novel neural framework that consists of a sparse CNN encoder for input point cloud processing and a tri-path transformer decoder for generating geometric primitives and their mutual relationships with estimated probabilities. Second, given the probabilistic structure predicted by the neural network, we recover a definite B-Rep chain complex by solving a global optimization maximizing the likelihood under structural validness constraints and applying geometric refinements. Extensive tests on large scale CAD datasets demonstrate that the modeling of B-Rep chain complex structure enables more accurate detection for learning and more constrained reconstruction for optimization, leading to structurally more faithful and complete CAD B-Rep models than previous results.

The pipeline contains three main phases. Below we show how to run the code for each phase and provide the corresponding checkpoints and data.

Data downloading

We provide the pre-processed ABC dataset used for training and evaluating ComplexNet; you can download it from BaiduYun or OneDrive and extract it with 7-Zip. Details of the pre-processing pipeline can be found in the supplemental material of our paper.

The data contains surface points along with normals and the ground-truth B-Rep labels. After extracting the zip file under the root directory, the data should be organized in the following structure:

ComplexGen
│
└─── data
    │
    └─── default
    │   │
    │   └─── train
    │   │
    │   └─── val
    │   │
    │   └─── test
    │   │
    │   └─── test_point_clouds
    │
    └─── partial
        │
        └─── ...
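
If you want to verify the layout after extraction, a minimal check like the following can help (a sketch in Python; it only tests that the directories listed above exist under the repo root):

    import os

    # Directories expected after extracting the dataset under the repo root.
    expected = [
        "data/default/train",
        "data/default/val",
        "data/default/test",
        "data/default/test_point_clouds",
        "data/partial",
    ]

    for d in expected:
        print(d, "->", "ok" if os.path.isdir(d) else "MISSING")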

[Optional] You can also download the output of each phase from BaiduYun or OneDrive. For each test model, there will be 4 or 5 outputs:

*_input.ply: Input point cloud
*_prediction.pkl: Output of 'ComplexNet prediction' phase
*_prediction.complex: Visualizable file for *_prediction.pkl; only elements with a validity probability larger than 0.3 are kept (see the sketch after this list).
*_extraction.complex: Output of 'complex extraction' phase
*_geom_refine.json: Output of 'geometric refinement' phase, which is also the final output.
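
For intuition, the 0.3 filter behind *_prediction.complex amounts to the following sketch. The key names 'patches' and 'prob' are placeholders, not the real pickle schema (see the pickle description for the actual layout):

    import pickle

    # Hypothetical sketch of the validity filter used when exporting
    # *_prediction.complex; 'patches' and 'prob' are placeholder keys,
    # NOT the actual schema of *_prediction.pkl.
    with open("example_prediction.pkl", "rb") as f:  # placeholder file name
        pred = pickle.load(f)

    kept = [p for p in pred["patches"] if p["prob"] > 0.3]
    print(f"kept {len(kept)} of {len(pred['patches'])} patch candidates")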

The description and visualization of each file type can be found in the pickle description, complex description, and json description. If you want to directly evaluate the provided output data of ComplexGen, put the extracted experiments folder under the root folder ComplexGen, then conduct Environment setup and Evaluation.

Phase 1: ComplexNet prediction

Environment setup with Docker

    $ docker pull pytorch/pytorch:1.6.0-cuda10.1-cudnn7-devel
    $ docker run --runtime=nvidia --ipc=host --net=host -v /path/to/complexgen/:/workspace -t -i pytorch/pytorch:1.6.0-cuda10.1-cudnn7-devel
    $ cd /workspace
    $ apt-get update && apt-get install -y libopenblas-dev git
    $ conda install numpy mkl-include pytorch cudatoolkit=10.1 -c pytorch -y
    $ pip install git+https://github.com/NVIDIA/MinkowskiEngine.git@v0.5.0 --user
    $ cd chamferdist && python setup.py install --user && cd ..
    $ pip install --user numba methodtools tensorflow-gpu scipy rtree plyfile trimesh

[Note]: If 'apt-get update' fails, first run 'rm /etc/apt/sources.list.d/cuda.list' (details in NVIDIA/nvidia-docker#619).

To test if the environment is set correctly, run:

    $ ./scripts/train_small.sh

This command will start the training of ComplexNet on a small dataset with 64 CAD models.
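
A quick import check can also flag a broken environment early (a sketch; it assumes the packages above installed cleanly):

    import torch
    import MinkowskiEngine as ME
    import chamferdist  # package built from ./chamferdist above

    print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
    print("MinkowskiEngine:", ME.__version__)
    print("chamferdist imported from:", chamferdist.__file__)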

Testing

To test the trained ComplexNet, please first download the trained weights used in our paper from BaiduYun or OneDrive, and unzip it under the root directory:

ComplexGen
│
└─── experiments
    │
    └─── default
    │   │
    │   └─── ckpt
    │       │
    │       └─── *.pth
    │
    └─── ...

Then run:

    $ ./scripts/test_default.sh

You can find the network prediction for each model (*.pkl) under ComplexGen/experiments/default/test_obj/. The description of each pickle file (*.pkl) can be found here.
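
To peek at a prediction without reading the full format description, a schema-agnostic dump of the pickle's top level is enough (a sketch; it just reports whatever is stored in the first prediction it finds):

    import pickle
    from pathlib import Path

    # Print the top-level layout of one network prediction; no assumptions
    # are made about the actual keys.
    pkl_path = next(Path("experiments/default/test_obj").glob("*.pkl"))
    with open(pkl_path, "rb") as f:
        pred = pickle.load(f)

    print(pkl_path.name, "->", type(pred).__name__)
    if isinstance(pred, dict):
        for key, value in pred.items():
            print(" ", key, "->", type(value).__name__)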

You can also generate visualizable corner/curve/patch models for some test data by running:

    $ ./scripts/test_default_vis.sh

A set of 3D models will be generated under ComplexGen/experiments/default/vis_test/, which can be visualized using 3D software such as MeshLab.

Training

If you want to train ComplexNet from scratch, run:

    $ ./scripts/train_default.sh

By default, ComplexNet is trained on a server with 8 V100 GPUs. You can change the number of GPUs by setting the --gpu flag in ./scripts/train_default.sh, and change the batch size by setting the batch_size flag. Training takes about 3 days to converge.

Phase 2: Complex extraction

Environment setup

    $ pip install gurobipy==9.1.2 && pip install Mosek && pip install scikit-learn

Note that you also need to manually set up a Gurobi license.
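
Creating a model is enough to trigger license validation, so a short script can confirm gurobipy sees the license before you launch a long extraction run (a sketch using the standard gurobipy API):

    import gurobipy as gp

    # gurobipy raises GurobiError if no valid license is found.
    try:
        gp.Model("license_check")
        print("Gurobi license OK")
    except gp.GurobiError as e:
        print("Gurobi license problem:", e)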

To conduct complex extraction, run:

    $ ./scripts/extraction_default.sh

A set of complex files will be generated under ComplexGen/experiments/default/test_obj/. The description and visualization of the complex files can be found here. As the average extraction time for each model is about 10 minutes, we recommend running complex extraction on a multi-core CPU server. To do this, set flag_parallel to True and num_parallel to half the number of available threads in ComplexGen/PostProcess/complex_extraction.py, as sketched below.
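
The parallel mode amounts to fanning the per-model solves out over a process pool. A rough equivalent of what flag_parallel/num_parallel enable is sketched below; extract_one is a hypothetical stand-in, not the actual function in complex_extraction.py:

    import os
    from glob import glob
    from multiprocessing import Pool

    def extract_one(pkl_path):
        # Hypothetical stand-in for the per-model optimization performed
        # in PostProcess/complex_extraction.py (about 10 minutes each).
        print("extracting", pkl_path)

    if __name__ == "__main__":
        pkl_files = glob("experiments/default/test_obj/*.pkl")
        num_parallel = max(1, os.cpu_count() // 2)  # half the available threads
        with Pool(num_parallel) as pool:
            pool.map(extract_one, pkl_files)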

Phase 3: Geometric refinement

The code for this phase can only be compiled under Windows. If you want to build it under Linux, please follow the instructions (in Chinese) here or here.

Environment setup

libigl and Eigen are needed; you can install them via vcpkg:

    $ vcpkg.exe integrate install
    $ vcpkg.exe install libigl
    $ vcpkg.exe install eigen3

Compile and build

The C++ project can be generated with CMake:

    $ cd PATH_TO_COMPLEXGEN\GeometricRefine
    $ mkdir build
    $ cd build
    $ cmake ..

Then you can build GeometricRefine.sln with Visual Studio. After that, you'll find GeometricRefine.exe under PATH_TO_COMPLEXGEN/GeometricRefine/Bin.

To conduct geometric refinement for all models, first modify .\scripts\geometric_refine.py by setting 'pc_ply_path' to the path containing the input point clouds stored in .ply format and 'complex_path' to the path containing the complex-extraction results, then run:

    $ cd PATH_TO_COMPLEXGEN
    $ python .\scripts\geometric_refine.py

If you are processing noisy/partial data, please replace the second command with:

    $ python .\scripts\geometric_refine.py --noise

You will find the generated json files under 'complex_path'. A description of the generated json files can be found here.
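
As with the pickle output, the refined json can be inspected without the format document (a sketch; the file name is a placeholder, and the top level is assumed to be a json object):

    import json

    # Dump the top-level structure of one refined model.
    with open("example_geom_refine.json") as f:  # placeholder file name
        model = json.load(f)

    for key, value in model.items():
        desc = f"{len(value)} items" if isinstance(value, (list, dict)) else repr(value)
        print(f"{key}: {type(value).__name__}, {desc}")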

Evaluation

The evaluation is conducted under Linux. To evaluate the final output of ComplexGen, run:

    $ ./scripts/eval_default.sh

You can find the metrics for each model, as well as aggregated over all models, in ComplexGen/experiments/default/test_obj/final_evaluation_geom_refine.xlsx.
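
If you prefer to post-process the metrics programmatically, pandas can read the sheet directly (a sketch; it assumes pandas and openpyxl are installed, and makes no assumptions about the column names the evaluation script wrote):

    import pandas as pd

    # Load the per-model metrics and print a numeric summary.
    xlsx = "experiments/default/test_obj/final_evaluation_geom_refine.xlsx"
    df = pd.read_excel(xlsx)
    print(df.head())
    print(df.describe())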

Visualization

We provide tools for converting the generated complex or json files to obj files, which can be visualized with MeshLab:

    $ cd vis
    $ python gen_vis_result.py -i PATH_TO_COMPLEX/JSON_FILE

Remember to copy ./vis/complexgen.mtl to the target folder containing the complex/json files. Corners of the reconstructed B-Rep are shown in yellow, curves in blue, and patches in different colors.
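
To convert a whole folder in one go and drop the .mtl next to the results, a small driver around gen_vis_result.py can look like this (a sketch; adjust result_dir to wherever your complex/json files live):

    import shutil
    import subprocess
    from pathlib import Path

    result_dir = Path("experiments/default/test_obj")  # folder with complex/json files
    for f in sorted(result_dir.glob("*.json")):
        subprocess.run(["python", "gen_vis_result.py", "-i", str(f.resolve())],
                       cwd="vis", check=True)

    # The material file must sit next to the generated obj files so that
    # MeshLab picks up the colors.
    shutil.copy("vis/complexgen.mtl", result_dir)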

Citation

If you use our code for research, please cite our paper:

@article{GuoComplexGen2022,
    author = {Haoxiang Guo and Shilin Liu and Hao Pan and Yang Liu and Xin Tong and Baining Guo},
    title = {ComplexGen: CAD Reconstruction by B-Rep Chain Complex Generation},
    year = {2022},
    issue_date = {July 2022},
    publisher = {Association for Computing Machinery},
    volume = {41},
    number = {4},
    url = {https://doi.org/10.1145/3528223.3530078},
    doi = {10.1145/3528223.3530078},
    journal = {ACM Trans. Graph. (SIGGRAPH)},
    month = jul,
    articleno = {129},
    numpages = {18}
}

License

MIT License

Contact

Please contact us (Haoxiang Guo, guohaoxiangxiang@gmail.com) if you have any questions about our implementation.

Acknowledgement

This implementation takes DETR and Geometric Tools as references. We thank the authors for their excellent work.

No packages published