
DA-RNN: Semantic Mapping with Data Associated Recurrent Neural Networks

Created by Yu Xiang and Tanner Schmidt at the RSE-Lab at the University of Washington.

Introduction

We introduce Data Associated Recurrent Neural Networks (DA-RNNs), a novel framework for joint 3D scene mapping and semantic labeling. DA-RNNs use a new recurrent neural network architecture for semantic labeling on RGB-D videos. The output of the network is integrated with mapping techniques such as KinectFusion in order to inject semantic information into the reconstructed 3D scene. arXiv, Video


License

DA-RNN is released under the MIT License (refer to the LICENSE file for details).

Citation

If you find DA-RNN useful in your research, please consider citing:

@inproceedings{xiang2017darnn,
    Author = {Yu Xiang and Dieter Fox},
    Title = {DA-RNN: Semantic Mapping with Data Associated Recurrent Neural Networks},
    Booktitle = {Robotics: Science and Systems (RSS)},
    Year = {2017}
}

Installation

DA-RNN consists of a recurrent neural network for semantic labeling on RGB-D videos and a KinectFusion module for 3D reconstruction. The RNN and KinectFusion communicate via a Python interface.
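
At a high level, the recurrent network labels each incoming RGB-D frame and KinectFusion fuses the resulting per-pixel class probabilities into the 3D reconstruction. The sketch below only illustrates this loop; the names semantic_mapping, rnn_forward, and fusion_integrate are hypothetical placeholders, not the actual Python/Cython interface built under $ROOT/lib.

    # Illustrative sketch of the DA-RNN processing loop. The callables passed
    # in here are hypothetical placeholders, not the repository's actual API.
    from typing import Callable, Iterable, Tuple
    import numpy as np

    Frame = Tuple[np.ndarray, np.ndarray]  # (H x W x 3 color image, H x W depth map)

    def semantic_mapping(frames: Iterable[Frame],
                         rnn_forward: Callable[[np.ndarray, np.ndarray], np.ndarray],
                         fusion_integrate: Callable[[np.ndarray, np.ndarray], None]) -> None:
        """Label each RGB-D frame and fuse the labels into the 3D reconstruction."""
        for color, depth in frames:
            # The RNN predicts per-pixel class probabilities, carrying its hidden
            # state between frames via data association.
            label_probs = rnn_forward(color, depth)
            # KinectFusion integrates the depth map into the volume and attaches
            # the semantic probabilities to the corresponding surface voxels.
            fusion_integrate(depth, label_probs)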

  1. Install TensorFlow. I suggest using the Virtualenv installation.

  2. Compile the new layers under $ROOT/lib that we introduce in DA-RNN.

    cd $ROOT/lib
    sh make.sh
  3. Compile KinectFusion with cmake. Unfortunately, this step requires some effort.

    Install the dependencies of KinectFusion, then build it:

    cd $ROOT/lib/kinect_fusion
    mkdir build
    cd build
    cmake ..
    make
  4. Compile the Cython interface for the RNN and KinectFusion

    cd $ROOT/lib
    python setup.py build_ext --inplace
  5. Add the KinectFusion library path

    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ROOT/lib/kinect_fusion/build
  6. Download the VGG16 weights from here (57M). Put the weight file vgg16_convs.npy into $ROOT/data/imagenet_models. A quick sanity check is sketched below.
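
As a minimal sanity check of the installation (run from $ROOT, and assuming the .npy weight file stores a pickled dict of layer parameters, as converted Caffe weights typically do), you can try:

    # Minimal installation check; the dict layout of the weight file is an
    # assumption and has not been verified against the repository.
    import os
    import numpy as np
    import tensorflow as tf

    print(tf.__version__)  # the tested environment uses 1.2.0

    weight_file = 'data/imagenet_models/vgg16_convs.npy'
    assert os.path.isfile(weight_file), 'download vgg16_convs.npy first'

    # Newer NumPy releases require allow_pickle=True to load pickled objects.
    weights = np.load(weight_file, allow_pickle=True).item()
    print(sorted(weights.keys()))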

Tested environment

  • Ubuntu 16.04
  • TensorFlow 1.2.0
  • CUDA 8.0

Running on the RGB-D Scene dataset

  1. Download the RGB-D Scene dataset from here (5.5G).

  2. Create a symlink for the RGB-D Scene dataset

    cd $ROOT/data/RGBDScene
    ln -s $RGBD_scene_data data  # $RGBD_scene_data is the path to the downloaded dataset
  3. Training and testing on the RGB-D Scene dataset (an example invocation is shown after the commands below)

    cd $ROOT
    
    # train and test RNN with different input (color, depth, normal and rgbd)
    ./experiments/scripts/rgbd_scene_multi_*.sh $GPU_ID
    
    # train and test FCN with different input (color, depth, normal and rgbd)
    ./experiments/scripts/rgbd_scene_single_*.sh $GPU_ID
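
For example, to train and test the recurrent network with RGB-D input on GPU 0 (assuming the wildcard above expands to a script named rgbd_scene_multi_rgbd.sh, matching the test script listed under "Using Our Trained Models"):

    ./experiments/scripts/rgbd_scene_multi_rgbd.sh 0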
    

Running on the ShapeNet Scene dataset

  1. Download the ShapeNet Scene dataset from here (2.3G).

  2. Create a symlink for the ShapeNet Scene dataset

    cd $ROOT/data/ShapeNetScene
    ln -s $ShapeNet_scene_data data
  3. Training and testing on the ShapeNet Scene dataset (an example invocation is shown after the commands below)

    cd $ROOT
    
    # train and test RNN with different input (color, depth, normal and rgbd)
    ./experiments/scripts/shapenet_scene_multi_*.sh $GPU_ID
    
    # train and test FCN with different input (color, depth, normal and rgbd)
    ./experiments/scripts/shapenet_scene_single_*.sh $GPU_ID
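
Likewise, to train and test the single-frame FCN baseline with color input on GPU 0 (assuming the wildcard above expands to a script named shapenet_scene_single_color.sh; check $ROOT/experiments/scripts for the exact names):

    ./experiments/scripts/shapenet_scene_single_color.sh 0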
    

Using Our Trained Models

  1. You can download all of our trained TensorFlow models on the RGB-D Scene dataset and the ShapeNet Scene dataset from here (3.1G).

    # an example of testing the trained model
    ./experiments/scripts/rgbd_scene_multi_rgbd_test.sh $GPU_ID
    
