Unsupervised Deep Shape Descriptor with Point Distribution Learning

   This repository contains sampling code for 'Unsupervised Deep Shape Descriptor with Point Distribution Learning'. The code has been reordered and separated from another project. I hope this repo is helpful for those who want to run the method with customized settings. Feel free to contact me about any issue related to the code.

Useful Links

Overview

   This work focuses on unsupervised 3D point cloud descriptor/feature computation. The proposed learning-based approach treats each point as a Gaussian and introduces an operation called 'Gaussian Sampling', which applies multiple 'disruptions' to each point. An encoder-free model is then used to carry out a maximum likelihood estimation process in which the parameters of each point's Gaussian are predicted (i.e., guessing the mean, which is also the original location of the point); through this, the geometric information of the shape is learned. Alternatively, the process can be viewed as a non-rigid self-registration.
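As a rough illustration, a minimal sketch of what the 'Gaussian Sampling' disruption could look like in PyTorch follows; the function name, sample count, and noise scale are assumptions for illustration, not the repo's actual API.

    import torch

    def gaussian_sampling(points, n_samples=8, sigma=0.05):
        # Treat each point in an (N, 3) cloud as the mean of an isotropic
        # Gaussian and draw n_samples disrupted copies of it.
        means = points.repeat_interleave(n_samples, dim=0)  # (N * n_samples, 3)
        noise = torch.randn_like(means) * sigma             # per-copy disruption
        return means + noise                                # disrupted point set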

Data and Experiment

    In contrast to training the model on the full ShapeNet55 (all 55 categories, about 57,000 shapes from ShapeNet), we follow the setting of 3DGAN, in which only seven ShapeNet categories are used for training. Evaluation is done on the ModelNet40 benchmark. We provide processed partial data at the following links:

Training: A subset consists of 7 categories from ShapeNet.
Evaluation: ModelNet40 Aligned

In the ablation studies (reconstruction, multiscale, rotation & noise invariance), the model is trained on the 7 major ShapeNet categories, and the experiments are conducted on our ShapeNet evaluation set plus 9 extra held-out categories. The classification evaluation uses the official train/test split. The classifier applied to the computed descriptors in the final evaluation is an MLP.

Training and testing Details

    Given the nature of our encoder-free decoder structure, both the decoder network and the descriptors are obtained through optimization. The entire pipeline therefore involves two phases: decoder model training and descriptor computation.

During model training, we use 7 major categories from ShapeNet. Note that the learning rate for the descriptors should be higher than that of the decoder, so that the descriptors are forced to capture shape information rather than the decoder overfitting; see the sketch below.
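One way to wire up the two learning rates is with PyTorch parameter groups. This is a minimal sketch; the descriptor dimension, network widths, and learning rates are illustrative assumptions, not the paper's values.

    import torch

    # Hypothetical setup: one learnable descriptor per training shape,
    # plus a small decoder network (sizes are illustrative).
    num_shapes, desc_dim = 1000, 128
    descriptors = torch.nn.Parameter(torch.randn(num_shapes, desc_dim) * 0.01)
    decoder = torch.nn.Sequential(
        torch.nn.Linear(desc_dim + 3, 256), torch.nn.ReLU(),
        torch.nn.Linear(256, 3),
    )

    # Two parameter groups: descriptors get a higher learning rate than the
    # decoder, pushing them to absorb shape information instead of letting
    # the decoder overfit.
    optimizer = torch.optim.Adam([
        {"params": [descriptors], "lr": 1e-3},
        {"params": decoder.parameters(), "lr": 1e-4},
    ])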

During descriptor computation, the descriptors obtained during model training are discarded. The learning rate of the descriptor is set higher than in the previous stage for fast convergence, while the model parameters remain fixed.
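Continuing the sketch above, the second phase might look as follows: freeze the trained decoder and optimize a fresh descriptor per test shape. The sizes, learning rate, and squared-error loss here are illustrative assumptions.

    import torch

    points = torch.rand(1024, 3)                           # one test shape
    disrupted = points + torch.randn_like(points) * 0.05   # Gaussian-sampled input

    for p in decoder.parameters():                         # decoder from the sketch above
        p.requires_grad_(False)

    descriptor = torch.nn.Parameter(torch.zeros(128))
    optimizer = torch.optim.Adam([descriptor], lr=1e-2)    # higher lr for fast convergence

    for step in range(300):
        optimizer.zero_grad()
        inp = torch.cat([descriptor.expand(len(disrupted), -1), disrupted], dim=1)
        pred = decoder(inp)                                # predict each Gaussian mean
        loss = ((pred - points) ** 2).mean()               # recover original locations
        loss.backward()
        optimizer.step()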

For each dataset involved, the hyperparameters should be tuned for optimal performance. The evaluation of the generated descriptors is performed with the default settings provided by sklearn.
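For reference, evaluation along those lines might use scikit-learn's MLPClassifier with default settings (the paper names an MLP classifier); the arrays and split below are placeholders, not the actual data.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Placeholder descriptors and labels standing in for the computed
    # ModelNet40 descriptors; in practice use the official train/test split.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 128))
    y = rng.integers(0, 40, size=200)

    clf = MLPClassifier(max_iter=500)        # sklearn's default MLP architecture
    clf.fit(X[:160], y[:160])
    print("test accuracy:", clf.score(X[160:], y[160:]))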

Dependencies

We used PyTorch 1.3 for our model implementation.

- pytorch
- matplotlib
- numpy
- sklearn
- open3d
- tqdm

Reference

@InProceedings{Xu_Shi_2020_CVPR,
   author = {Shi, Yi and Xu, Mengchen and Yuan, Shuaihang and Fang, Yi},
   title = {Unsupervised Deep Shape Descriptor With Point Distribution Learning},
   booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
   month = {June},
   year = {2020}
}
