
Spatial Transformer for 3D Point Clouds

[Project] [Paper]

Quick Start

To quickly add the spatial transformer to your own network, see the network architecture file for how the transformer is inserted, and offset_deform for the transformer implementation.
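At a high level, the transformer deforms the input point cloud with learned per-point offsets and builds neighborhood graphs on the deformed coordinates rather than the original ones. The sketch below illustrates that idea in plain NumPy; the function name `deform_and_knn` and its interface are hypothetical illustrations, not this repository's API, and a real model would predict the offsets with a small sub-network.

```python
import numpy as np

def deform_and_knn(points, offsets, k=4):
    """Illustrative sketch (not the repo's implementation): shift each
    point by a learned offset, then build a k-NN graph on the deformed
    coordinates.

    points:  (N, 3) input point cloud
    offsets: (N, 3) per-point offsets (learned in the real model)
    """
    deformed = points + offsets                      # deform the cloud
    # pairwise squared distances between deformed points
    diff = deformed[:, None, :] - deformed[None, :, :]
    dist = (diff ** 2).sum(-1)
    np.fill_diagonal(dist, np.inf)                   # exclude self-edges
    knn_idx = np.argsort(dist, axis=1)[:, :k]        # k nearest neighbors
    return deformed, knn_idx

# Toy usage: with zero offsets this reduces to a plain k-NN graph.
rng = np.random.default_rng(0)
pts = rng.standard_normal((8, 3))
deformed, idx = deform_and_knn(pts, np.zeros((8, 3)), k=3)
print(idx.shape)  # (8, 3): 3 neighbor indices per point
```

With nonzero offsets, the graph topology can differ from the Euclidean k-NN graph of the raw input, which is the degree of freedom the transformer learns to exploit.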

Overview

This is the authors' re-implementation of
"Spatial Transformer for 3D Point Clouds" by
Jiayun Wang, Rudrasis Chakraborty, and Stella X. Yu (UC Berkeley / ICSI), published in IEEE Transactions on Pattern Analysis and Machine Intelligence.

For further information, please contact Jiayun Wang.

Update notifications

  • 03/09/2019: Uploaded point-based methods for ShapeNet part segmentation.
  • 10/07/2019: Uploaded sampling-based methods for ShapeNet part segmentation.

Requirements

  • TensorFlow (for the point-based method; version >= 1.13.1)
  • Caffe (for the sampling-based method; please use our version, as we have rewritten some of the source code)
  • NCCL (for multi-GPU training in the sampling-based method)

Point-based Methods

Please navigate to the specific folder first.

cd point_based

Install Tensorflow and h5py

Install TensorFlow. You may also need to install h5py.

To install h5py for Python:

sudo apt-get install libhdf5-dev
pip install h5py
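The part-segmentation data ships as HDF5 files, which h5py reads as array-like datasets. The round-trip below is only a sanity check that the installation works; the dataset names `data` and `label` and the shapes are illustrative assumptions modeled on common point-cloud HDF5 layouts, not taken from this repository's files.

```python
import h5py
import numpy as np

# Write a toy point-cloud batch to HDF5.
# NOTE: the keys 'data'/'label' and shapes are assumptions for illustration.
with h5py.File('toy_pointclouds.h5', 'w') as f:
    f.create_dataset('data', data=np.random.rand(2, 1024, 3).astype('float32'))
    f.create_dataset('label', data=np.zeros((2, 1024), dtype='int64'))

# Read it back the way a typical loader would.
with h5py.File('toy_pointclouds.h5', 'r') as f:
    data = f['data'][:]    # (2, 1024, 3) xyz coordinates
    label = f['label'][:]  # (2, 1024) per-point part labels
print(data.shape, label.shape)
```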

Data Preparation

Download the data for part segmentation.

sh +x download_data.sh

Running Examples

Train

Train the deformable spatial transformer. Specify the number of GPUs with `--num_gpu`, and the number of graphs and the number of feature dimensions with `--graphnum` and `--featnum`, respectively.

cd part_seg
python train_deform.py --graphnum 2 --featnum 64

Model parameters are saved every 10 epochs in `train_results/trained_models/`.

Evaluation

To evaluate the model saved after epoch n, run:

python test.py --model_path train_results/trained_models/epoch_n.ckpt  --graphnum 2 --featnum 64
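ShapeNet part segmentation is conventionally scored by per-shape mean IoU: IoU is computed for each part belonging to the shape's category and then averaged, with parts absent from both prediction and ground truth counted as IoU 1. The sketch below shows that standard metric in NumPy; it is an illustration of the usual convention, not necessarily the exact formula in `test.py`.

```python
import numpy as np

def shape_mean_iou(pred, gt, parts):
    """Per-shape part IoU (standard ShapeNet convention, shown for
    illustration): average IoU over the category's parts; a part with
    empty union counts as IoU 1."""
    ious = []
    for p in parts:
        inter = np.sum((pred == p) & (gt == p))
        union = np.sum((pred == p) | (gt == p))
        ious.append(1.0 if union == 0 else inter / union)
    return float(np.mean(ious))

# Toy usage: 6 points, a category with parts {0, 1}.
gt   = np.array([0, 0, 1, 1, 1, 0])
pred = np.array([0, 0, 1, 1, 0, 0])
miou = shape_mean_iou(pred, gt, parts=[0, 1])
print(round(miou, 4))  # part 0: 3/4, part 1: 2/3, mean ≈ 0.7083
```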

Sampling-based Methods

Install Caffe

Please use our version of Caffe, as it provides the implementation of spatial transformers for BilateralNN described in the paper. A guide to Caffe installation can be found here.

Data Preparation

Please navigate to the specific folder first.

cd sampling-based

See instructions in data/README.md.

Running Examples

* ShapeNet part segmentation
    * train and evaluate
        ```bash
        cd sampling-based/exp/shapenet3d
        ./train_test.sh
        ```
    * test trained model
        ```bash
        cd sampling-based/exp/shapenet3d
        ./test_only.sh
        ```
        Predictions are under `pred/`, with evaluation results in `test.log`.

Benchmarks and Model Zoo

Please refer to Section 4 of the paper.

Additional Notes

The code builds on Dynamic Graph CNN, BilateralNN, and SplatNet.

License and Citation

The use of this software is RESTRICTED to non-commercial research and educational purposes.

@article{spn3dpointclouds,
  author    = {Jiayun Wang and
               Rudrasis Chakraborty and
               Stella X. Yu},
  title     = {Spatial Transformer for 3D Points},
  journal   = {CoRR},
  volume    = {abs/1906.10887},
  year      = {2019},
  url       = {http://arxiv.org/abs/1906.10887},
}
