
Initial commit

walsvid committed Aug 18, 2019
0 parents commit 532b8f88f95151ee7e607399ee19eb886b2f2f46
Showing with 60,492 additions and 0 deletions.
  1. +6 −0 .gitignore
  2. +29 −0 LICENSE
  3. +141 −0 README.md
  4. +68 −0 cd_distance.py
  5. +22 −0 cfgs/mvp2m.yaml
  6. +27 −0 cfgs/p2mpp.yaml
  7. +52 −0 data/README.md
  8. +3 −0 data/demo/cameras.txt
  9. BIN data/demo/plane1.png
  10. BIN data/demo/plane2.png
  11. BIN data/demo/plane3.png
  12. +7,394 −0 data/demo/predict.obj
  13. +4,928 −0 data/face3.obj
  14. BIN data/figure/coarse.gif
  15. BIN data/figure/final.gif
  16. BIN data/iccv_p2mpp.dat
  17. +8,750 −0 data/test_list.txt
  18. +35,010 −0 data/train_list.txt
  19. +115 −0 demo.py
  20. +29 −0 external/Makefile
  21. +253 −0 external/approxmatch.cpp
  22. +183 −0 external/approxmatch.cu
  23. +329 −0 external/tf_approxmatch.cpp
  24. +112 −0 external/tf_approxmatch.py
  25. +296 −0 external/tf_approxmatch_g.cu
  26. BIN external/tf_approxmatch_g.cu.o
  27. BIN external/tf_approxmatch_so.so
  28. +254 −0 external/tf_nndistance.cpp
  29. +86 −0 external/tf_nndistance.py
  30. +159 −0 external/tf_nndistance_g.cu
  31. BIN external/tf_nndistance_g.cu.o
  32. BIN external/tf_nndistance_so.so
  33. +89 −0 f_score.py
  34. +119 −0 generate_mvp2m_intermediate.py
  35. 0 modules/__init__.py
  36. +84 −0 modules/chamfer.py
  37. +89 −0 modules/config.py
  38. +30 −0 modules/inits.py
  39. +439 −0 modules/layers.py
  40. +131 −0 modules/losses.py
  41. +255 −0 modules/models_mvp2m.py
  42. +214 −0 modules/models_p2mpp.py
  43. 0 results/coarse_mvp2m/.gitkeep
  44. 0 results/refine_p2mpp/.gitkeep
  45. +117 −0 test_mvp2m.py
  46. +119 −0 test_p2mpp.py
  47. +137 −0 train_mvp2m.py
  48. +143 −0 train_p2mpp.py
  49. 0 utils/__init__.py
  50. +94 −0 utils/dataloader.py
  51. +128 −0 utils/tools.py
  52. +38 −0 utils/visualize.py
  53. +20 −0 utils/xyz2obj.py
6 .gitignore
@@ -0,0 +1,6 @@
.vscode
.idea
logs/*
ckpt/*
**__pycache__
tmp
29 LICENSE
@@ -0,0 +1,29 @@
BSD 3-Clause License

Copyright (c) 2019, Chao Wen, Yinda Zhang, Zhuwen Li, Yanwei Fu
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
141 README.md
@@ -0,0 +1,141 @@
# Pixel2Mesh++

This is an implementation of the ICCV'19 paper "Pixel2Mesh++: Multi-View 3D Mesh Generation via Deformation".

Our method takes multi-view images as input and outputs a refined 3D mesh model via deformation.

Please check our [paper](https://arxiv.org/abs/1908.01491) and the [project webpage](https://walsvid.github.io/Pixel2MeshPlusPlus) for more details.

If you have any questions, please contact Chao Wen (cwen18 at fudan dot edu dot cn).

#### Citation

If you use this code for any purpose, please consider citing:

```
@inProceedings{wen2019pixel2mesh,
  title={Pixel2Mesh++: Multi-View 3D Mesh Generation via Deformation},
  author={Chao Wen and Yinda Zhang and Zhuwen Li and Yanwei Fu},
  booktitle={ICCV},
  year={2019}
}
```

## Dependencies

Requirements:

- Python 3.6
- numpy
- tensorflow==1.12.0
- tflearn==0.3.2
- opencv-python

Our code has been tested with Python 3.6, TensorFlow 1.12.0, CUDA 9.0 on Ubuntu 16.04.

## Compile CUDA-op

If you use Chamfer distance for training or evaluation, the CUDA implementation of [Fan et al.](https://github.com/fanhqme/PointSetGeneration) is included in `external/`.

To compile the CUDA code, we recommend following TensorFlow's [official tutorial](https://www.tensorflow.org/guide/extend/op#gpu_support) on adding GPU-supported ops; a `Makefile` is also provided in `external/`.
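
Once the op is compiled, a quick sanity check from Python can confirm it loads and runs. The sketch below mirrors the usage in `cd_distance.py`, with random point sets standing in for real predictions and ground truth:

```python
# Minimal sanity check for the compiled Chamfer distance op; mirrors the
# usage in cd_distance.py. Random point sets stand in for real data.
import numpy as np
import tensorflow as tf
from modules.chamfer import nn_distance

a = tf.constant(np.random.rand(128, 3), dtype=tf.float32)
b = tf.constant(np.random.rand(128, 3), dtype=tf.float32)
dist1, idx1, dist2, idx2 = nn_distance(a, b)  # NN distances in both directions
with tf.Session() as sess:
    d1, d2 = sess.run([dist1, dist2])
    print('chamfer distance:', d1.mean() + d2.mean())
```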


## Dataset

We used the [ShapeNet](https://www.shapenet.org/) dataset for 3D models, and rendered views from [3D-R2N2](https://github.com/chrischoy/3D-R2N2). When using the provided data, make sure to respect the ShapeNet [license](https://shapenet.org/terms).

The training/testing split can be found in `data/train_list.txt` and `data/test_list.txt`.

If you are interested in using our data, please check [`./data`](./data) for terms of usage.

## Pre-trained Model
We provide pre-trained models on ShapeNet datasets. Please check [`./data`](./data) for download links.

## Quick Demo

First, please refer to the documentation in [`./data`](./data) to download the pre-trained model.

Then execute the script below. The input images for the demo are provided in `data/demo/`, and the final mesh will be written to `data/demo/predict.obj`:

```
python demo.py
```

#### Input images, coarse shape, and shape generated by Pixel2Mesh++

![](data/demo/plane1.png) ![](data/demo/plane2.png) ![](data/demo/plane3.png) ![](data/figure/coarse.gif) ![](data/figure/final.gif)

## Training

Our released code consists of a coarse shape generation network and a refinement block (the multi-view deformation network).

For training, you should first train the coarse shape generation network, then generate intermediate results, and finally train the multi-view deformation network.

#### Step 1
To train coarse shape generation, set your own configuration in `cfgs/mvp2m.yaml`. The main settings are described below; for more details, please refer to `modules/config.py`.

- `train_file_path`: path to your train split file, which lists the name of each training instance
- `train_image_path`: input image path
- `train_data_path`: ground-truth model path
- `coarse_result_***`: configuration items for the coarse intermediate meshes; they should match the training data settings

Then execute the script:
```
python train_mvp2m.py -f cfgs/mvp2m.yaml
```

#### Step 2
Before training the multi-view deformation network, you should generate the coarse intermediate meshes:

```
python generate_mvp2m_intermediate.py -f cfgs/mvp2m.yaml
```

#### Step 3
To train the multi-view deformation network, set your own configuration in `cfgs/p2mpp.yaml`.

The configuration items are similar to Step 1. In particular, `train_mesh_root` should be set to the output path of the intermediate coarse shapes from Step 2.
Then execute the script:

```
python train_p2mpp.py -f cfgs/p2mpp.yaml
```

## Evaluation

First, download the pre-trained model from the link in [`./data`](./data).

Then the model can produce predicted meshes as follows.

#### Step 1
To generate coarse shapes, set your own configuration in `cfgs/mvp2m.yaml` as mentioned previously, then execute the script:
```
python test_mvp2m.py -f cfgs/mvp2m.yaml
```

#### Step 2
Set `test_mesh_root` in `cfgs/p2mpp.yaml` to the output folder from Step 1, and set `test_image_path` and `test_file_path` as described in the Training section.

Then execute the script:
```
python test_p2mpp.py -f cfgs/p2mpp.yaml
```

To evaluate F-score and Chamfer distance, execute the scripts below; the evaluation results are printed and stored in `results/refine_p2mpp/logs`:
```
python f_score.py -f cfgs/p2mpp.yaml
python cd_distance.py -f cfgs/p2mpp.yaml
```
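
For reference, the sketch below shows a common definition of the point-cloud F-score (precision and recall of points within a distance threshold). It assumes `scipy` is available and is only illustrative; the repository's exact protocol lives in `f_score.py` and may differ:

```python
# Illustrative point-cloud F-score, not the repository's exact protocol
# (see f_score.py). Assumes scipy; tau is a squared-distance threshold.
import numpy as np
from scipy.spatial import cKDTree

def f_score(pred, gt, tau=1e-4):
    d_pred = cKDTree(gt).query(pred)[0] ** 2  # squared NN distance, pred -> gt
    d_gt = cKDTree(pred).query(gt)[0] ** 2    # squared NN distance, gt -> pred
    precision = (d_pred < tau).mean()
    recall = (d_gt < tau).mean()
    return 2 * precision * recall / (precision + recall + 1e-8)

print(f_score(np.random.rand(1000, 3), np.random.rand(1000, 3)))
```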

Please check that you have configured the correct ground-truth path, image path, and test split file path in the YAML config file.

Due to the stochastic nature of training, the released pre-trained model achieves slightly better results (F-score 67.23, CD 0.381) than those reported in the paper (F-score 66.48, CD 0.486).

## Statement

This software is for research purposes only.
Please contact us about licensing for commercial use. All rights reserved.

## License

BSD 3-Clause License
68 cd_distance.py
@@ -0,0 +1,68 @@
# Copyright (C) 2019 Chao Wen, Yinda Zhang, Zhuwen Li, Yanwei Fu
# All rights reserved.
# This code is licensed under BSD 3-Clause License.
import glob
import os
import pprint

import numpy as np
import tensorflow as tf

from modules.chamfer import nn_distance
from modules.config import execute

if __name__ == '__main__':
    print('=> set config')
    args = execute()
    pprint.pprint(vars(args))
    os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu_id)

    # Build the Chamfer distance graph once; point clouds are fed at run time.
    xyz1 = tf.placeholder(tf.float32, shape=(None, 3))
    xyz2 = tf.placeholder(tf.float32, shape=(None, 3))
    dist1, idx1, dist2, idx2 = nn_distance(xyz1, xyz2)
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    config.allow_soft_placement = True
    sess = tf.Session(config=config)

    pred_file_list = os.path.join(args.save_path, args.name, 'predict', str(args.test_epoch), '*_predict.xyz')
    xyz_list_path = glob.glob(pred_file_list)

    log_dir = os.path.join(args.save_path, args.name, 'logs')
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)
    log_path = os.path.join(log_dir, '{}_cd.log'.format(args.test_epoch))

    # ShapeNet synset id -> human-readable class name.
    name = {'02828884': 'bench', '03001627': 'chair', '03636649': 'lamp', '03691459': 'speaker',
            '04090263': 'firearm', '04379243': 'table', '04530566': 'watercraft', '02691156': 'plane',
            '02933112': 'cabinet', '02958343': 'car', '03211117': 'monitor', '04256520': 'couch',
            '04401088': 'cellphone'}
    length = {class_id: 0.0 for class_id in name}    # per-class sample counts
    sum_pred = {class_id: 0.0 for class_id in name}  # per-class CD sums

    index = 0
    total_num = len(xyz_list_path)
    for pred_path in xyz_list_path:
        # The matching ground-truth file sits next to the prediction,
        # with '_ground' in place of '_predict'.
        lab_path = pred_path.replace('_predict', '_ground')
        ground = np.loadtxt(lab_path)[:, :3]
        predict = np.loadtxt(pred_path)

        class_id = os.path.basename(pred_path).split('_')[0]
        length[class_id] += 1.0

        d1, i1, d2, i2 = sess.run([dist1, idx1, dist2, idx2],
                                  feed_dict={xyz1: predict, xyz2: ground})
        # Chamfer distance: mean NN distance in both directions.
        cd_distance = np.mean(d1) + np.mean(d2)
        sum_pred[class_id] += cd_distance

        index += 1
        print('processed number', index, total_num)

    print(log_path)
    with open(log_path, 'a') as log:
        for item in length:
            number = length[item] + 1e-6  # avoid division by zero for empty classes
            score = (sum_pred[item] / number) * 10000  # per-class mean CD, scaled by 1e4
            print(item, name[item], int(length[item]), score)
            print(item, name[item], int(length[item]), score, file=log)
    sess.close()
22 cfgs/mvp2m.yaml
@@ -0,0 +1,22 @@
lr: 1e-5
init_epoch: 0
epochs: 50
test_epoch: 50
restore: false
gpu_id: 0
is_debug: no
feat_dim: 2883
name: 'coarse_mvp2m'
save_path: 'results'
# train
train_file_path: 'data/train_list.txt'
train_data_path: '/home/your_user_name/data/ShapeNetModels/train'
train_image_path: '/home/your_user_name/data/ShapeNetImages/ShapeNetRendering'
# test
test_file_path: 'data/test_list.txt'
test_data_path: '/home/your_user_name/data/ShapeNetModels/test'
test_image_path: '/home/your_user_name/data/ShapeNetImages/ShapeNetRendering'
# coarse result
coarse_result_file_path: 'data/train_list.txt'
coarse_result_data_path: '/home/your_user_name/data/ShapeNetModels/train'
coarse_result_image_path: '/home/your_user_name/data/ShapeNetImages/ShapeNetRendering'
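
For orientation, the `-f` flag passes a YAML file like the one above to the option parser in `modules/config.py`. Below is a hedged sketch of what such a loader might look like; the actual implementation in `modules/config.py` may differ, and PyYAML is an assumed dependency:

```python
# Hypothetical illustration of loading a cfgs/*.yaml file; the real logic
# lives in modules/config.py and may differ from this sketch.
import argparse
import yaml

parser = argparse.ArgumentParser()
parser.add_argument('-f', '--config', default='cfgs/mvp2m.yaml')
cli = parser.parse_args()

with open(cli.config) as fp:
    options = yaml.safe_load(fp)  # plain dict of the settings above
print(options['name'], options['save_path'])
```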
27 cfgs/p2mpp.yaml
@@ -0,0 +1,27 @@
is_debug: no
# 3 + 3*(512+256+128+64)
feat_dim: 2883
# 3 + 3*(16+32+64)
stage2_feat_dim: 339
name: 'refine_p2mpp'
save_path: 'results'
# about cnn
load_cnn: yes
pre_trained_cnn_path: 'results/coarse_mvp2m/models'
cnn_step: 50
# train
lr: 1e-5
epochs: 10
init_epoch: 50
gpu_id: 0
restore: false
train_file_path: 'data/train_list.txt'
train_data_path: '/home/your_user_name/data/ShapeNetModels/train'
train_image_path: '/home/your_user_name/data/ShapeNetImages/ShapeNetRendering'
train_mesh_root: 'results/coarse_mvp2m/coarse_intermediate/50'
# test
test_epoch: 10
test_file_path: 'data/test_list.txt'
test_data_path: '/home/your_user_name/data/ShapeNetModels/test'
test_image_path: '/home/your_user_name/data/ShapeNetImages/ShapeNetRendering'
test_mesh_root: 'results/coarse_mvp2m/predict/50'
52 data/README.md
@@ -0,0 +1,52 @@
# Data and Models

## Pre-trained Models

### Download link
Google Drive: [https://drive.google.com/drive/folders/1bLhqXNoBxHh5PTbjoyqMnMtBzHwflL-q?usp=sharing](https://drive.google.com/drive/folders/1bLhqXNoBxHh5PTbjoyqMnMtBzHwflL-q?usp=sharing)

Direct Link: [fudan](fudan)

### Usage
The downloaded pre-trained model zip file includes the two components of our model: the coarse shape generation network and the multi-view deformation network.

Please extract the models into the `coarse_mvp2m` and `refine_p2mpp` folders according to the corresponding names. The folder structure after unzipping should be as follows:

```
results
├── coarse_mvp2m
│   └── models
└── refine_p2mpp
    └── models
```

----

## Dataset
We use ShapeNet as our training and testing data.

### Images
For input images, we use the rendered images from [Choy et al.](https://github.com/chrischoy/3D-R2N2).

Download the image dataset and extract it into a folder:
```
mkdir ShapeNetImages && cd ShapeNetImages
wget http://cvgl.stanford.edu/data2/ShapeNetRendering.tgz
tar -xzf ShapeNetRendering.tgz
```
Please set `train_image_path`/`test_image_path` in the configuration files in `cfgs/` to your image path before training.

### Ground-truth model
For ground-truth models, we adopt the dataset provided by [Wang et al.](https://github.com/nywang16/Pixel2Mesh).
Specifically, our preprocessing samples a point cloud with vertex normals from the original ShapeNet 3D models.
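
As a sketch of the resulting format, assuming the Pixel2Mesh-style layout of one sampled point per line (xyz followed by its normal); the path below is a hypothetical example:

```python
# Reads one ground-truth sample, assuming one point per line laid out as
# x y z nx ny nz. The file path is a hypothetical example.
import numpy as np

gt = np.loadtxt('ShapeNetModels/train/example_ground.xyz')
points, normals = gt[:, :3], gt[:, 3:6]  # positions and vertex normals
print(points.shape, normals.shape)
```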

When using the provided data, make sure to respect the ShapeNet [license](https://shapenet.org/terms).

Download ground-truth models and place them in a folder:
```
mkdir ShapeNetModels && cd ShapeNetModels
wget fudan
```
We also provide a Google Drive [link](https://drive.google.com/drive/folders/1bLhqXNoBxHh5PTbjoyqMnMtBzHwflL-q?usp=sharing) for the ground-truth model data.

The zip file already splits the data into train/test sets. Please set `train_data_path`/`test_data_path` in the configuration files in `cfgs/` to your 3D model path before training.

3 data/demo/cameras.txt
@@ -0,0 +1,3 @@
250.481028534 29.4597137908 0 0.652313097662 25
8.36659250622 26.4750846762 0 0.713299532982 25
59.4904457837 28.9306513916 0 0.805797933096 25
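
Each row of `data/demo/cameras.txt` presumably stores the rendering metadata for one input view. The parsing sketch below assumes the 3D-R2N2 convention (azimuth, elevation, in-plane rotation, distance ratio, field of view); these field names are an assumption, not documented by this repository:

```python
# Hedged parsing of data/demo/cameras.txt; the five field names follow the
# 3D-R2N2 rendering-metadata convention and are an assumption.
import numpy as np

cams = np.loadtxt('data/demo/cameras.txt')  # shape (3, 5): one row per view
azimuth, elevation, inplane, distance, fov = cams.T
print(azimuth, elevation, fov)
```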
BIN +4.58 KB data/demo/plane1.png
Binary file not shown.
BIN +3.55 KB data/demo/plane2.png
Binary file not shown.
BIN +3.65 KB data/demo/plane3.png
Binary file not shown.
