# Learning Scene Flow in Point Clouds through Voxel Grids
This work was done as part of my Guided Research at the Visual Computing Lab at TUM under the supervision of Prof. Matthias Niessner. For more info on the project, check my report.
Author: Pablo Rodriguez Palafox
Supervisor: Prof. Matthias Niessner
Visual Computing Group, Technical University of Munich (TUM)
## Requirements

This code was tested with PyTorch 1.0.0, CUDA 10.0, and Ubuntu 16.04. You can set up your own Python environment and install the required dependencies using `environment.yaml`.
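Assuming `environment.yaml` is a conda environment file (an assumption; adapt these commands if you manage dependencies differently), the setup could look like:

```shell
# Create the environment from the provided file
# (requires conda; the environment name is defined by the
#  `name:` field inside environment.yaml)
conda env create -f environment.yaml
conda activate <env-name-from-yaml>
```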
## Data Preprocessing

The data preprocessing tools are in `generate_dataset`. First, download the raw FlyingThings3D dataset. We need `flyingthings3d__disparity.tar.bz2`, `flyingthings3d__disparity_change.tar.bz2`, `flyingthings3d__optical_flow.tar.bz2` and `flyingthings3d__frames_finalpass.tar`. Then extract the files into `/path/to/flyingthings3d` and make sure that the directory looks like this:

```
/path/to/flyingthings3d
├── disparity/
├── disparity_change/
├── optical_flow/
└── frames_finalpass/
```
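A quick way to sanity-check the extracted layout is a few lines of Python (the subdirectory names are the ones listed above; `check_flyingthings_layout` is just a helper written here, not part of the repo):

```python
import os

# The four subdirectories expected under the FlyingThings3D root,
# as listed in the tree above.
EXPECTED_SUBDIRS = ["disparity", "disparity_change",
                    "optical_flow", "frames_finalpass"]

def check_flyingthings_layout(root):
    """Return the list of expected subdirectories missing under `root`."""
    return [d for d in EXPECTED_SUBDIRS
            if not os.path.isdir(os.path.join(root, d))]

if __name__ == "__main__":
    missing = check_flyingthings_layout("/path/to/flyingthings3d")
    if missing:
        print("Missing subdirectories:", missing)
```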
Then `cd` into `generate_dataset` and execute the following command:

```shell
python generate_Flying.py
```
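Under the hood, generating point clouds from FlyingThings3D means back-projecting the disparity maps into 3D. A sketch of the standard stereo back-projection is below; the intrinsics (focal length 1050 px, baseline 1.0, principal point at the image center of a 960×540 frame) are what the dataset documentation commonly states, but verify them yourself — they are an assumption here, and this is not the repo's actual `generate_Flying.py`:

```python
import numpy as np

# FlyingThings3D camera parameters (assumed; verify against the
# official dataset documentation).
FOCAL = 1050.0          # focal length in pixels
BASELINE = 1.0          # stereo baseline
CX, CY = 479.5, 269.5   # principal point for a 960x540 image

def disparity_to_points(disparity):
    """Back-project an (H, W) disparity map to an (H*W, 3) point cloud."""
    h, w = disparity.shape
    depth = FOCAL * BASELINE / disparity          # z = f * B / d
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    x = (xs - CX) * depth / FOCAL                 # pinhole model
    y = (ys - CY) * depth / FOCAL
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```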
## Training

Make sure that the `do_train` flag is set to `True` in `config.py`. Also configure the number of `epochs`, the `batch_sizes`, the path to the processed dataset, and the directory where the model should be saved. By setting `OVERFIT` to `True` you can overfit to a few examples, which can be specified in `sequences_to_train`.
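The relevant part of `config.py` might look something like the following sketch. Only the flag names (`do_train`, `OVERFIT`, `sequences_to_train`, `epochs`, `batch_sizes`, `model_dir_to_use_at_eval`) come from this README; the values and the remaining fields are placeholders, and the real file may be structured differently:

```python
# Hypothetical sketch of the flags mentioned above -- the actual
# config.py in this repo may differ.
do_train = True                  # True: train, False: evaluate
OVERFIT = False                  # overfit to a few sequences for debugging
sequences_to_train = []          # sequences used when OVERFIT is True
epochs = 100                     # placeholder value
batch_sizes = {"train": 8, "val": 8}                 # placeholder values
dataset_dir = "/path/to/processed/flyingthings3d"    # processed dataset
model_dir = "/path/to/models"    # where checkpoints are saved
model_dir_to_use_at_eval = ""    # set this when do_train is False
```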
```shell
python main.py
```
## Evaluation

In `config.py`, set the `do_train` flag to `False` and set `model_dir_to_use_at_eval` to the name of the model you want to evaluate.
```shell
python main.py
```
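Scene flow evaluations commonly report the end-point error (EPE): the mean Euclidean distance between predicted and ground-truth flow vectors. Whether this repo's evaluation prints exactly this metric is an assumption, but as a reference implementation it is simply:

```python
import numpy as np

def end_point_error(pred_flow, gt_flow):
    """Mean Euclidean distance between predicted and ground-truth
    (N, 3) scene flow vectors -- the standard EPE metric."""
    return float(np.linalg.norm(pred_flow - gt_flow, axis=1).mean())
```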
## License

Our code is released under the MIT License (see the LICENSE file for details).