
FlowNet3D: Learning Scene Flow in 3D Point Clouds

Created by Xingyu Liu, Charles R. Qi and Leonidas J. Guibas from Stanford University and Facebook AI Research (FAIR).


If you find our work useful in your research, please cite:

```
@inproceedings{liu2019flownet3d,
  title={FlowNet3D: Learning Scene Flow in 3D Point Clouds},
  author={Liu, Xingyu and Qi, Charles R and Guibas, Leonidas J},
  booktitle={CVPR},
  year={2019}
}
```

Many applications in robotics and human-computer interaction can benefit from understanding 3D motion of points in a dynamic environment, widely noted as scene flow. While most previous methods focus on stereo and RGB-D images as input, few try to estimate scene flow directly from point clouds. In this work, we propose a novel deep neural network named FlowNet3D that learns scene flow from point clouds in an end-to-end fashion. Our network simultaneously learns deep hierarchical features of point clouds and flow embeddings that represent point motions, supported by two newly proposed learning layers for point sets. We evaluate the network on both challenging synthetic data from FlyingThings3D and real Lidar scans from KITTI. Trained on synthetic data only, our network successfully generalizes to real scans, outperforming various baselines and showing competitive results to the prior art. We also demonstrate two applications of our scene flow output (scan registration and motion segmentation) to show its potential wide use cases.
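To make the flow embedding idea concrete, here is a minimal NumPy sketch (not the paper's TF implementation): for each point in frame 1, it gathers frame-2 points within a radius, concatenates the two points' features with their displacement, passes the result through a placeholder one-layer MLP, and max-pools over the neighbors. The MLP weights are random stand-ins just to show the data flow.

```python
import numpy as np

def flow_embedding(p1, f1, p2, f2, radius=1.0, out_dim=8, rng=None):
    """Sketch of a flow embedding layer.

    p1: (N, 3) points in frame 1, f1: (N, C) their features,
    p2: (M, 3) points in frame 2, f2: (M, C) their features.
    Returns an (N, out_dim) embedding per frame-1 point.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    C = f1.shape[1]
    W = rng.standard_normal((2 * C + 3, out_dim))  # placeholder MLP weights
    out = np.zeros((len(p1), out_dim))
    for i, (p, f) in enumerate(zip(p1, f1)):
        d = p2 - p                                  # displacements to frame-2 points
        mask = np.linalg.norm(d, axis=1) < radius   # radius neighborhood in frame 2
        if not mask.any():
            continue
        # Concatenate [f_i, g_j, q_j - p_i] for each neighbor j, apply the
        # MLP with ReLU, then max-pool over neighbors.
        h = np.concatenate(
            [np.tile(f, (mask.sum(), 1)), f2[mask], d[mask]], axis=1)
        out[i] = np.maximum(h @ W, 0.0).max(axis=0)
    return out
```

The real layer learns the MLP weights end-to-end and runs on sampled-and-grouped point sets inside the network; this sketch only illustrates the neighborhood aggregation that mixes the two frames.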


Installation

Install TensorFlow. The code is tested under TF 1.9.0 (GPU version), g++ 5.4.0, CUDA 9.0 and Python 3.5 on Ubuntu 16.04. The code also depends on a few Python libraries for data processing and visualization, such as cv2. Access to GPUs is highly recommended.
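A hypothetical dependency pin matching the tested configuration above; the exact package names (tensorflow-gpu, opencv-python) are assumptions beyond the README's mention of TensorFlow and cv2:

```
tensorflow-gpu==1.9.0
opencv-python
numpy
```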

Compile Customized TF Operators

The TF operators are included under tf_ops. You need to compile them first by running make under each ops subfolder (check the Makefile). If necessary, update arch in the Makefiles to the CUDA compute capability that suits your GPU.
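The compile step above can be sketched as follows. The subfolder names are assumptions (typical PointNet++-style ops); check the actual tf_ops/ directory for the real ones. The arch value maps your GPU's CUDA compute capability to an nvcc -gencode flag, e.g. compute capability 6.1 (GTX 1080) becomes sm_61.

```shell
# Pick the compute capability for your GPU (assumption: 6.1, e.g. GTX 1080).
CAP=61
GENCODE="-gencode arch=compute_${CAP},code=sm_${CAP}"

# Subfolder names below are assumptions; in a real checkout this would be
# `make -C tf_ops/<op>` after setting the arch in each Makefile.
for op in sampling grouping 3d_interpolation; do
    echo "make -C tf_ops/$op  # with $GENCODE in its Makefile"
done
```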


FlyingThings3D Data Preparation

The data preprocessing scripts are included in data_preprocessing. To process the raw data, first download the FlyingThings3D dataset; the files flyingthings3d__disparity.tar.bz2, flyingthings3d__disparity_change.tar.bz2, flyingthings3d__optical_flow.tar.bz2 and flyingthings3d__frames_finalpass.tar are needed. Then extract the files into /path/to/flyingthings3d so that the directory looks like:


Then cd into the directory data_preprocessing and execute the following command to generate .npz files of processed data:

```shell
python --input_dir /path/to/flyingthings3d --output_dir data_processed_maxcut_35_20k_2k_8192
```

The processed data is also provided here for download (total size ~11GB).
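Once the .npz files are generated (or downloaded), they can be inspected with NumPy. This is a hypothetical loader: the key names stored inside each archive are not stated in the README, so inspect one file (or the data_preprocessing scripts) to learn the actual keys.

```python
import numpy as np

def load_npz_sample(path):
    """Load one processed sample into a plain dict of arrays.

    The archive's key names are whatever the preprocessing scripts wrote;
    this loader makes no assumption about them and returns them all.
    """
    with np.load(path) as data:
        return {key: data[key] for key in data.files}
```

For example, `load_npz_sample(...)` on one file, followed by printing each key and array shape, is a quick way to verify the preprocessing ran correctly.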

Training and Evaluation

To train the model, simply execute the training shell script. Batch size, learning rate, etc. are adjustable in the script.
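A hypothetical launch command for the training step: the README elides the actual shell script name, and `train.py` plus the flag names below are assumptions, so check the repository's scripts for the real interface.

```shell
# Hyperparameters the README says are adjustable (values are examples).
BATCH_SIZE=16
LEARNING_RATE=0.001

# Hypothetical command; 'train.py' and the flag names are assumptions.
CMD="python train.py --batch_size ${BATCH_SIZE} --learning_rate ${LEARNING_RATE}"
echo "$CMD"  # run from the repo root once the custom ops are compiled
```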


To evaluate the model, simply execute the evaluation shell script.


KITTI Experiment

To be released. Stay tuned.


License

Our code is released under the MIT License (see the LICENSE file for details).

Related Projects
