Data-driven Structural Priors for Shape Completion

Minhyuk Sung, Vladimir G. Kim, Roland Angst, and Leonidas Guibas
Siggraph Asia 2015



@article{Sung:2015,
  author = {Sung, Minhyuk and Kim, Vladimir G. and Angst, Roland and Guibas, Leonidas},
  title = {Data-driven Structural Priors for Shape Completion},
  journal = {ACM Transactions on Graphics (Proc. of SIGGRAPH Asia)},
  year = {2015}
}


See 'lib/' for required libraries.

Prepare a new dataset

  1. Prepare mesh files and label files (*.off, *.gt)
    Make sure that every mesh is normalized to UNIT LENGTH: the distance from the bounding box center to the farthest vertex must be exactly 1.
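The unit-length requirement can be checked or enforced with a small script. The following is a minimal sketch (a hypothetical helper, not part of the released code) that normalizes a raw vertex list; parsing the .off file itself is omitted:

```python
def normalize_unit_length(vertices):
    """Translate vertices so the bounding-box center sits at the origin,
    then scale so the farthest vertex lies at distance exactly 1."""
    xs, ys, zs = zip(*vertices)
    center = [(min(c) + max(c)) / 2.0 for c in (xs, ys, zs)]
    # Shift every vertex so the bounding-box center becomes the origin.
    shifted = [[v[i] - center[i] for i in range(3)] for v in vertices]
    # Scale by the distance to the farthest vertex.
    radius = max((x * x + y * y + z * z) ** 0.5 for x, y, z in shifted)
    return [[c / radius for c in v] for v in shifted]
```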

  2. Create a mesh and label file directory for the new dataset
    This directory should have both off and gt directories, which are mesh(.off) and label(.gt) file directories, respectively.

  3. Create an information file directory for the new dataset
    This directory should have the following files:

    • regions.txt: This file is used by both the shape2pose code and the cuboid-prediction code.
      Each line has the form: part (part_name) pnts 1
      The part on the first line corresponds to label number 0, the part on the next line to label number 1, and so on.

    • regions_symmetry.txt: This file is used by both the shape2pose code and the cuboid-prediction code.
      Each line lists one set of mutually symmetric parts: (part_name_1) (part_name_2) ... (part_name_k)
      All part names must appear in the regions.txt file.
      A part with no symmetric counterpart must still be written on a line of its own.
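A minimal parser sketch for these two files, assuming the whitespace-separated formats described above (hypothetical helper, not part of the released code):

```python
def load_regions(regions_text, symmetry_text):
    """Parse regions.txt ('part <name> pnts 1' per line; line order gives
    label numbers 0, 1, ...) and regions_symmetry.txt (one set of mutually
    symmetric part names per line)."""
    labels = {}
    for i, line in enumerate(regions_text.strip().splitlines()):
        tokens = line.split()
        assert tokens[0] == "part" and tokens[2:] == ["pnts", "1"]
        labels[tokens[1]] = i  # line order defines the label number
    groups = [line.split() for line in symmetry_text.strip().splitlines()]
    for group in groups:  # every name must be declared in regions.txt
        assert all(name in labels for name in group)
    return labels, groups
```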

    • symmetry_groups.txt: This file is used only by the cuboid-prediction code.
      In contrast to the regions_symmetry.txt file, which lists symmetric parts for learning local classifiers,
      the symmetry_groups.txt file describes symmetric parts in terms of the part structure.
      For example, all legs of a chair are considered symmetric to each other when training local classifiers, but the front legs and the rear legs form separate symmetry groups in the part structure.

      Each symmetry group should be recorded in the following format:

      symmetry_group (rotation/reflection) (axis_index:[0,1,2])
      single_label_indices (label_number_0 label_number_1 ... label_number_k)
      pair_label_indices (label_number_pair_a_0 label_number_pair_b_0 ... label_number_pair_a_k label_number_pair_b_k)

      A symmetry group is either a rotation group or a reflection group.
      The axis index is the part-local axis index ([x, y, z] → [0, 1, 2]) of the reflection plane normal or the rotation axis.
      Single label indices list single parts that are themselves symmetric with respect to the symmetry axis.
      Pair label indices list pairs of parts that are symmetric to each other with respect to the symmetry axis.
      Each pair of successive label numbers label_number_pair_a_i label_number_pair_b_i forms the i-th pair.
      A rotation symmetry group MUST NOT have pair label indices (currently not supported),
      and a reflection symmetry group MAY omit pair label indices (they are optional).


      symmetry_group reflection 0
      single_label_indices 0
      pair_label_indices 1 2 3 4

      Let [+, +, +] denote the cuboid corner of a part located at
      (center_x + 0.5 * size_x, center_y + 0.5 * size_y, center_z + 0.5 * size_z).
      In this example, the [-, +, +] corner of the part with label 1 is symmetric to the [+, +, +] corner of the part with label 2.
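A parser sketch for this record format (hypothetical helper; the dictionary field names are my own, not the released code's):

```python
def parse_symmetry_groups(text):
    """Parse symmetry_groups.txt into a list of dicts. Each record is a
    'symmetry_group' line followed by 'single_label_indices' and an
    optional 'pair_label_indices' line (reflection groups only)."""
    groups, current = [], None
    for line in text.strip().splitlines():
        tokens = line.split()
        if tokens[0] == "symmetry_group":
            current = {"type": tokens[1], "axis": int(tokens[2]),
                       "single": [], "pairs": []}
            groups.append(current)
        elif tokens[0] == "single_label_indices":
            current["single"] = [int(t) for t in tokens[1:]]
        elif tokens[0] == "pair_label_indices":
            # Rotation groups must not carry pair indices (not supported).
            assert current["type"] == "reflection"
            ids = [int(t) for t in tokens[1:]]
            # Successive numbers a_i, b_i form the i-th symmetric pair.
            current["pairs"] = list(zip(ids[0::2], ids[1::2]))
    return groups
```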

Train/test local point classifiers

Make sure that all files are prepared as mentioned above.

  1. Copy the regions.txt and regions_symmetry.txt files from ($shape2pose)/data/0_body/($dataset_name) to ($shape2pose)/data/0_body.
    Double-check these files.

  2. Make a ($dataset_name)_all.txt file in ($shape2pose)/script/examples.
    This file should list all mesh file names (without extensions).
    If you train on only a subset of the meshes, create the name list file for that subset.

  3. If you do cross-validation, copy the corresponding script files into ($shape2pose)/script/scriptlibs.
    If you run on a subset of meshes without cross-validation, copy the corresponding script files into ($shape2pose)/script/scriptlibs instead.

  4. For training, run the following command in ($shape2pose)/script:
    ./ ($dataset_name) exp1_($dataset_name) examples/($dataset_name)_all.txt

    The classifier files (train_(part_name).arff, weka_(part_name).model) are generated in
    If you do cross-validation, the classifier files are generated in each mesh name directory.
    Make sure that all mesh name directories have the same number of files (classifier files for all parts).
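The file-count check can be automated; a sketch (hypothetical helper, assuming the cross-validation layout of one directory per mesh name):

```python
import os

def check_classifier_counts(root):
    """Verify every per-mesh directory under `root` contains the same
    number of classifier files (train_*.arff / weka_*.model per part)."""
    counts = {d: len(os.listdir(os.path.join(root, d)))
              for d in sorted(os.listdir(root))
              if os.path.isdir(os.path.join(root, d))}
    if len(set(counts.values())) > 1:
        raise RuntimeError("mismatched classifier counts: %r" % counts)
    return counts
```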

  5. For testing, run the following command in ($shape2pose)/script:
    ./ ($dataset_name) exp1_($dataset_name) examples/($dataset_name)_all.txt

    The prediction files are generated in

Run experiments

  1. Compile the code
    In ../../build/OSMesaViewer/build, run make.

  2. Make an experiment directory
    This directory should have the following files:

    • arguments.txt: The following is an example of the arguments:


    Make sure that data_root_path is set correctly.

    • pose.txt: Camera pose file for rendering.
  3. Run experiments
    In ($cuboid-prediction)/python, run the following command:
    ./ ($exp_type) ($shape2pose)/data/1_input/($dataset_name)/off/ ../experiments/($dataset_name)/

    Run the command in the following ($exp_type) order:

    1. ground_truth_cuboids: Create ground truth cuboids in ../experiments/($dataset_name)/training
      After this, run the following command in ../experiments/($dataset_name) for generating part relation statistics files:
      ../../build/OSMesaViewer/build/Build/bin/OSMesaViewer --run_training --flagfile=arguments.txt

    2. prediction: Run our method. Files are generated in ../experiments/($dataset_name)/output

    3. part_assembly: Run part assembly. Files are generated in ../experiments/($dataset_name)/part_assembly

    4. symmetry_detection: Run symmetry detection. Files are generated in ../experiments/($dataset_name)/symmetry_detection
      BEFORE this, run the following command in ($cuboid-prediction)/python:
      ./ ($shape2pose)/data/1_input/($dataset_name)/off/ ../experiments/($dataset_name)/
      Make sure that binDir variable in ./ is correctly set.

    5. baseline: Compute baseline. Files are generated in ../experiments/($dataset_name)/baseline

    6. render_assembly: Render part assembly cuboids. Should be executed after part assembly.

    7. render_evaluation [optional]: Re-render all experimental result images (including part assembly and symmetry detection). Used when rendering with new parameters.

    8. extract_symmetry_info [optional]: Used when extracting symmetry axes information of our method results.

  4. Generate HTML result pages
    In ($cuboid-prediction)/python, run the following command:
    ./ ../experiments/($dataset_name)/

The resulting files are created in the following directories:

For generating paper figures, run the following command in `($cuboid-prediction)/python/figures`:
`./ fig_N.txt`
Select examples and record them in the `fig_N.txt` files.
Tex files and related image files are generated in `($cuboid-prediction)/report` and `($cuboid-prediction)/report/images`.


The following are notable parameters (set by adding them to the arguments.txt file):

  • occlusion_pose_filename:
    [IMPORTANT] If this is set to "", a random occlusion pose is generated based on random_view_seed.
  • random_view_seed:
    Seed number of random occlusion view generation.
  • param_min_num_symmetric_point_pairs:
    If the number of symmetric point pairs is less than this value, the symmetric point pairs are not considered in optimization.
  • param_min_sample_point_confidence:
    In the initial step, only points with confidence greater than this value are clustered. A lower value can be better when there is noise in the input points.
  • param_sparse_neighbor_distance:
    Point neighbor distance (in most cases).
  • param_cuboid_split_neighbor_distance:
    Point neighbor distance for splitting initial cuboids.
  • param_occlusion_test_neighbor_distance:
    Point neighbor distance for occlusion test.
  • param_fusion_grid_size:
    Voxel size for fusion.
  • param_min_cuboid_overall_visibility:
    If the overall visibility of the missing cuboid is greater than this value, it is considered to have been created in the visible area and is ignored.
  • param_fusion_visibility_smoothing_prior:
    MRF smoothing prior value for fusion.
  • param_eval_min_neighbor_distance:
    Minimum error value for accuracy/completeness rendering.
    Run render_evaluation for rendering with new parameter values.
  • param_eval_max_neighbor_distance:
    Maximum error value for accuracy/completeness rendering.
    Run render_evaluation for rendering with new parameter values.
  • use_view_plane_mask:
    [IMPORTANT] Set to true to use a view-plane 2D occlusion mask.
  • param_view_plane_mask_proportion:
    The view plane 2D occlusion mask is created so that this proportion of points are occluded more AFTER self-occlusion.
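Since the viewer binary is invoked with `--flagfile=arguments.txt`, these parameters go into arguments.txt one flag per line. A fragment might look like the following (the flag names come from the list above, but every value here is a placeholder, not a recommended setting):

```
--data_root_path=($shape2pose)/data/
--occlusion_pose_filename=
--random_view_seed=1234
--use_view_plane_mask=true
--param_view_plane_mask_proportion=0.3
```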

Experiment result files

($shape2pose)/data/0_body/, ($shape2pose)/data/1_input/

coseg_chairs_all_N_train.txt, coseg_chairs_all_N_test.txt
('N' is the proportion of training mesh files).
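Split files like these can be produced with a short script; a sketch (hypothetical helper; how 'N' is encoded in the file name is up to your naming convention):

```python
import random

def make_split(mesh_names, train_proportion, seed=0):
    """Deterministically shuffle mesh names, then split them into a
    training list and a test list; `train_proportion` is the fraction
    of meshes used for training."""
    names = sorted(mesh_names)           # canonical order before shuffling
    random.Random(seed).shuffle(names)   # seeded, so splits are repeatable
    cut = int(len(names) * train_proportion)
    return names[:cut], names[cut:]
```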

($shape2pose)/data/3_trained/classifier, ($shape2pose)/data/4_experiments


Results with the view-plane mask at 30% 2D occlusion proportion: