
The TensorFlow code for the paper "Learning Implicit Fields for Generative Shape Modeling", Zhiqin Chen and Hao (Richard) Zhang.

Project page | Paper

Improved TensorFlow 1 implementation

Improved PyTorch implementation


We have an improved implementation here, where we trained one model on the 13 ShapeNet categories.

We have a PyTorch implementation here.


We advocate the use of implicit fields for learning generative models of shapes and introduce an implicit field decoder, called IM-NET, for shape generation, aimed at improving the visual quality of the generated shapes. An implicit field assigns a value to each point in 3D space, so that a shape can be extracted as an iso-surface. IM-NET is trained to perform this assignment as a binary classifier: it takes a point coordinate, along with a feature vector encoding a shape, and outputs a value indicating whether the point lies inside or outside the shape. By replacing conventional decoders with our implicit decoder for representation learning (via IM-AE) and shape generation (via IM-GAN), we demonstrate superior results for tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.
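For intuition, such a decoder can be sketched as an MLP that concatenates each query point with the shape feature vector and predicts an inside/outside probability. The layer widths and connectivity below are illustrative only, not the exact released architecture; see the model code for details.

import tensorflow as tf  # TensorFlow 1.x, matching the tested environment

def implicit_decoder(points, z, reuse=False):
    # points: [batch, n_points, 3] query coordinates
    # z:      [batch, z_dim] feature vector encoding one shape
    # returns [batch, n_points, 1] values in [0, 1]; above 0.5 means inside
    with tf.variable_scope("implicit_decoder", reuse=reuse):
        n_points = tf.shape(points)[1]
        z_tiled = tf.tile(tf.expand_dims(z, 1), [1, n_points, 1])
        h = tf.concat([points, z_tiled], axis=2)  # per-point input: (x, y, z) + code
        for width in [1024, 512, 256, 128]:       # illustrative widths
            h = tf.layers.dense(h, width, activation=tf.nn.leaky_relu)
        return tf.layers.dense(h, 1, activation=tf.nn.sigmoid)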


If you find our work useful in your research, please consider citing:

@article{chen2019learning,
  title={Learning Implicit Fields for Generative Shape Modeling},
  author={Chen, Zhiqin and Zhang, Hao},
  journal={Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019}
}



Our code has been tested with Python 3.5, TensorFlow 1.8.0, CUDA 9.1 and cuDNN 7.0 on Ubuntu 16.04 and Windows 10.

Datasets and Pre-trained weights

The original voxel models and rendered views are from HSP. Since our network takes point-value pairs, the voxel models require further sampling. The sampling method can be found on our project page.
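As a rough illustration of what a point-value pair is, the sketch below draws points from a binary voxel grid with uniform sampling. The actual sampling method described on the project page differs from this uniform simplification:

import numpy as np

def sample_point_value_pairs(voxels, n_points, rng=np.random):
    # voxels: [D, D, D] binary occupancy grid
    # returns n_points coordinates in [-0.5, 0.5)^3 with their occupancy values
    D = voxels.shape[0]
    idx = rng.randint(0, D, size=(n_points, 3))        # uniform voxel indices
    values = voxels[idx[:, 0], idx[:, 1], idx[:, 2]].astype(np.float32)
    points = (idx.astype(np.float32) + 0.5) / D - 0.5  # voxel centers
    return points, values.reshape(-1, 1)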

We provide ready-to-use datasets in HDF5 format, together with our pre-trained network weights. The weights for IM-GAN are the ones we used in our demo video. The weights for IM-SVR are the ones we used in the experiments in our paper.

Backup links:


For data preparation, please see directory point_sampling.

To train an autoencoder, go to IMGAN and use the following commands for progressive training. You may want to copy the commands into a .bat or .sh file.

python main.py --ae --train --epoch 50 --real_size 16 --batch_size_input 4096
python main.py --ae --train --epoch 100 --real_size 32 --batch_size_input 8192
python main.py --ae --train --epoch 200 --real_size 64 --batch_size_input 32768

The above commands will train the AE model for 50 epochs at 16³ resolution (each shape has 4096 sampled points), then another 50 epochs at 32³ (8192 points), and finally another 100 epochs at 64³ (32768 points); the --epoch flag specifies the cumulative epoch count, so each stage resumes from the previous one.

To train a latent-GAN, after training the autoencoder, use the following command to extract the latent codes:

python main.py --ae

Then train the latent-GAN and get some samples:

python main.py --train --epoch 10000
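For reference, the latent-GAN idea is to train a small GAN directly on the latent codes extracted above, rather than on voxels; generated codes are then decoded by the implicit decoder. A minimal sketch with illustrative sizes (the released model's depth and loss may differ):

import tensorflow as tf

def generator(noise, z_dim, reuse=False):
    # maps Gaussian noise to a code in the autoencoder's latent space
    with tf.variable_scope("gen", reuse=reuse):
        h = tf.layers.dense(noise, 2048, activation=tf.nn.leaky_relu)
        return tf.layers.dense(h, z_dim)

def discriminator(z, reuse=False):
    # scores whether a latent code resembles one from a real shape
    with tf.variable_scope("disc", reuse=reuse):
        h = tf.layers.dense(z, 2048, activation=tf.nn.leaky_relu)
        return tf.layers.dense(h, 1)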

You can change some lines in main.py to adjust the number of samples and the sampling resolution.
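Sampling a shape means evaluating the decoder on a dense grid of points and extracting the iso-surface, which is where the sampling resolution enters. A sketch using PyMCubes (an assumption; any marching-cubes implementation works, and decode_fn stands in for a session run of the trained decoder):

import numpy as np
import mcubes  # PyMCubes

def extract_mesh(decode_fn, z, resolution=64, threshold=0.5):
    # decode_fn(points, z): [N, 3] coordinates -> [N] occupancy values in [0, 1]
    coords = np.linspace(-0.5, 0.5, resolution, dtype=np.float32)
    grid = np.stack(np.meshgrid(coords, coords, coords, indexing="ij"), axis=-1)
    values = decode_fn(grid.reshape(-1, 3), z).reshape((resolution,) * 3)
    vertices, triangles = mcubes.marching_cubes(values, threshold)
    vertices = vertices / (resolution - 1) - 0.5  # map back to the unit cube
    return vertices, triangles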

To train the network for single-view reconstruction, after training the autoencoder, copy the weights and latent codes to the corresponding folders in IMSVR. Go to IMSVR and use the following command to train IM-SVR and get some samples:

python main.py --train --epoch 1000
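For context, IM-SVR keeps the implicit decoder and trains an image encoder to regress the latent codes produced by the autoencoder, which is why the weights and codes are copied over. A minimal sketch with illustrative layer sizes:

import tensorflow as tf

def image_encoder(views, z_dim, reuse=False):
    # views: [batch, H, W, 1] rendered views; regresses the shape's latent code
    with tf.variable_scope("image_encoder", reuse=reuse):
        h = views
        for filters in [32, 64, 128, 256]:  # illustrative CNN widths
            h = tf.layers.conv2d(h, filters, 4, strides=2, padding="same",
                                 activation=tf.nn.leaky_relu)
        h = tf.layers.flatten(h)
        return tf.layers.dense(h, z_dim)

# training objective (sketch): mean squared error against the extracted codes
# loss = tf.reduce_mean(tf.square(image_encoder(views, z_dim) - target_codes))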


This project is licensed under the terms of the MIT license (see LICENSE for details).

