AtlasNet

This repository contains the source code for the paper AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation. The network can synthesize a mesh (point cloud + connectivity) from a low-resolution point cloud, or from an image.


Citing this work

If you find this work useful in your research, please consider citing:

@inproceedings{groueix2018,
  title={{AtlasNet: A Papier-M\^ach\'e Approach to Learning 3D Surface Generation}},
  author={Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G. and Russell, Bryan and Aubry, Mathieu},
  booktitle={Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}

Project Page

The project page is available at http://imagine.enpc.fr/~groueixt/atlasnet/

Install

Clone the repo

## Download the repository
git clone git@github.com:ThibaultGROUEIX/AtlasNet.git
## Create python env with relevant packages
conda create --name pytorch-atlasnet --file aux/spec-file.txt
source activate pytorch-atlasnet
pip install pandas visdom tqdm
conda install pytorch=0.1.12 cuda80 -c soumith #Update cuda80 to cuda90 if relevant
conda install torchvision
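
As an optional sanity check of the environment, the following minimal Python snippet prints the installed PyTorch version and whether CUDA is visible (the exact version string depends on the build chosen above):

# Optional sanity check for the PyTorch install
import torch
print(torch.__version__)          # should report the version installed above (e.g. 0.1.12)
print(torch.cuda.is_available())  # True if the CUDA build can see a GPU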

This implementation uses PyTorch. Please note that the Chamfer distance code does not work on all versions of PyTorch because of an error related to the batch-norm layers. It has been tested on 0.1.12, 0.3.x, and the latest sources available at the time of writing.

Pytorch compatibility

Python / PyTorch | v0.1.12 | v0.2 | v0.3.1 | 0.4.0a0+ea02833 | 0.4.x (latest)
---------------- | ------- | ---- | ------ | --------------- | --------------
2.7              | ✔️ 👍 😃 | 🚫 👎 😞 | 🚫 👎 😞 | ✔️ 👍 😃 | 🚫 👎 😞
3.6              | ✔️ 👍 😃 | ?       | ?      | 🚫 👎 😞 | 🚫 👎 😞

Recommended: Python 2.7, PyTorch 0.1.12

If you need PyTorch 0.4 (0.4.0a0+ea02833):

git clone --recursive https://github.com/pytorch/pytorch
cd pytorch ; git reset --hard ea02833 #Go to this specific commit that works fine for the chamfer distance
# Then follow pytorch install instruction as usual

Developed in Python 2.7, so it might need a few adjustments for Python 3.6. Only train_AE_AtlasNet.py has been tested with Python 3.6.

Build chamfer distance

cd AtlasNet/nndistance/src
nvcc -c -o nnd_cuda.cu.o nnd_cuda.cu -x cu -Xcompiler -fPIC -arch=sm_52
cd ..
python build.py
python test.py
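
For reference, the symmetric Chamfer distance computed by this extension can also be written directly in PyTorch. The sketch below is a slow, brute-force reference (not the repo's CUDA kernel) that can be used to sanity-check the compiled module on small point clouds; it assumes a PyTorch version with broadcasting (0.2 or later), unlike the legacy build recommended above:

# Brute-force Chamfer distance, for reference only (not the compiled CUDA kernel)
import torch

def chamfer_reference(p1, p2):
    # p1: (B, N, 3), p2: (B, M, 3) batches of point clouds
    diff = p1.unsqueeze(2) - p2.unsqueeze(1)   # (B, N, M, 3)
    dist = (diff ** 2).sum(3)                  # pairwise squared distances, (B, N, M)
    d1 = dist.min(2)[0]                        # nearest neighbour in p2 for each point of p1
    d2 = dist.min(1)[0]                        # nearest neighbour in p1 for each point of p2
    return d1.mean(1) + d2.mean(1)             # symmetric Chamfer distance per batch element

a = torch.rand(2, 1000, 3)
b = torch.rand(2, 1500, 3)
print(chamfer_reference(a, b))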

Data and Trained models

We used the ShapeNet dataset for the 3D models, and rendered views from 3D-R2N2.

When using the provided data, make sure to respect the ShapeNet license.

The trained models and some corresponding results are also available online:

Demo

Requires 3 GB of GPU memory and about 5 seconds to run. Pass --cuda 0 to run without a GPU (about 9 seconds).

python inference/demo.py --cuda 1


This script takes a 137 × 137 image (from ShapeNet) as input, runs it through a trained ResNet encoder, decodes it with a trained AtlasNet using 25 learned parameterizations, and saves the output to output.ply.
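
To quickly check that the demo produced a valid mesh, the PLY header of output.ply can be inspected with a few lines of Python (a minimal sketch; it only reads the ASCII header, so it works whether the PLY body is ASCII or binary):

# Print the element counts declared in the header of the generated mesh
with open("output.ply", "rb") as f:
    for raw in f:
        line = raw.decode("ascii", errors="ignore").strip()
        if line.startswith("element"):
            print(line)            # e.g. "element vertex ..." and "element face ..."
        if line == "end_header":
            break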

Train

  • First, launch a Visdom server:
python -m visdom.server -p 8888
  • Launch the training. Check out all the options in ./training/train_AE_AtlasNet.py. A minimal sketch of the Visdom logging pattern follows the command block below.
export CUDA_VISIBLE_DEVICES=0 #whichever you want
source activate pytorch-atlasnet
git pull
env=AE_AtlasNet
nb_primitives=25
python ./training/train_AE_AtlasNet.py --env $env --nb_primitives $nb_primitives |& tee ${env}.txt
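
For reference, the training script plots its curves to the Visdom server started above; a minimal, self-contained sketch of that logging pattern (illustrative values, not the script's actual variables) looks like this:

# Minimal Visdom logging sketch (illustrative, not taken from train_AE_AtlasNet.py)
import numpy as np
import visdom

vis = visdom.Visdom(port=8888)          # same port as `python -m visdom.server -p 8888`
curve = []
for epoch, loss in enumerate([0.010, 0.005, 0.003]):   # dummy loss values
    curve.append(loss)
    vis.line(X=np.arange(len(curve)), Y=np.array(curve),
             win='chamfer_loss', opts=dict(title='Chamfer loss'))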


  • Compute some results with your trained model

    python ./inference/run_AE_AtlasNet.py

    The trained models made available above achieve the following performance, slightly better than the numbers reported in the paper. The reported metric is the Chamfer distance (see the formula after the tables).

    Autoencoder: 25 learned parameterizations

Category   | Chamfer distance
---------- | ----------------
val_loss   | 0.0014795344685297
watercraft | 0.00127737027906
monitor    | 0.0016588120616
car        | 0.00152693425022
couch      | 0.00171516126198
cabinet    | 0.00168296881168
lamp       | 0.00232362473947
plane      | 0.000833268054194
speaker    | 0.0025417242402
table      | 0.00149979386376
chair      | 0.00156113364435
bench      | 0.00120812499892
firearm    | 0.000626943988977
cellphone  | 0.0012117530635

Single View Reconstruction: 25 learned parameterizations

Category   | Chamfer distance
---------- | ----------------
val_loss   | 0.00400863720389
watercraft | 0.00336707355723
monitor    | 0.00456469316226
car        | 0.00306795421868
couch      | 0.00404269965806
cabinet    | 0.00355917039209
lamp       | 0.0114094304694
plane      | 0.00192791500002
speaker    | 0.00780984506137
table      | 0.00368373458016
chair      | 0.00407004468516
bench      | 0.0030023689528
firearm    | 0.00192803189235
cellphone  | 0.00293665724291
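
For reference, the Chamfer distance reported in both tables is the symmetric sum of nearest-neighbour squared distances between the generated point set S_1 and the ground-truth point set S_2 (up to the exact averaging used by the evaluation script):

d_{CD}(S_1, S_2) = \frac{1}{|S_1|} \sum_{x \in S_1} \min_{y \in S_2} \lVert x - y \rVert_2^2 + \frac{1}{|S_2|} \sum_{y \in S_2} \min_{x \in S_1} \lVert x - y \rVert_2^2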

Visualisation

The generated 3D models' surfaces are not oriented. As a consequence, some areas will appear dark if you visualize the results directly in MeshLab. You need to use a custom fragment shader in MeshLab that flips the normals when they are hit by a ray from the wrong side. An example is given for the Phong BRDF.

sudo mv /usr/share/meshlab/shaders/phong.frag /usr/share/meshlab/shaders/phong.frag.bak
sudo mv aux/phong.frag /usr/share/meshlab/shaders/phong.frag #restart Meshlab
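
Alternatively, the face winding of the output mesh can be unified offline before viewing. A minimal sketch using the trimesh package (a separate dependency, not bundled with this repo) is shown below; for the open, non-watertight surfaces AtlasNet generates this only enforces consistent winding and may not fully fix the shading, which is why the shader trick above is the route used here:

# Optional: unify face winding before viewing (requires `pip install trimesh`)
import trimesh

mesh = trimesh.load("output.ply")    # mesh produced by the demo / inference scripts
mesh.fix_normals()                   # make face winding (and thus normals) consistent
mesh.export("output_fixed.ply")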

Acknowledgement

The code for the Chamfer loss was taken from Fei Xia's repo: PointGAN. Many thanks to him!

This work was funded by Adobe Systems and École Doctorale MSTIC.

License

MIT


