Combining Events and Frames using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction

This is the code for the paper Combining Events and Frames using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction by Daniel Gehrig*, Michelle Rüegg*, Mathias Gehrig, Javier Hidalgo-Carrió, and Davide Scaramuzza.
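
To make the idea concrete, here is a minimal, hypothetical sketch of asynchronous multimodal fusion: each modality has its own encoder, and whichever input arrives next updates a shared recurrent state from which depth can be decoded at any time. All class and variable names are illustrative; this is not the RAM-Net architecture itself.

import torch
import torch.nn as nn

class AsyncFusionSketch(nn.Module):
    # Conceptual sketch only: two modality-specific encoders feed a shared
    # per-pixel GRU state; depth is decoded from that state after any update.
    def __init__(self, hidden=32):
        super().__init__()
        self.image_enc = nn.Conv2d(3, hidden, 3, padding=1)  # frame encoder
        self.event_enc = nn.Conv2d(5, hidden, 3, padding=1)  # event-tensor encoder (5 time bins)
        self.gru = nn.GRUCell(hidden, hidden)                # shared recurrent state
        self.decoder = nn.Conv2d(hidden, 1, 1)               # depth prediction head

    def step(self, x, modality, state=None):
        enc = self.image_enc if modality == "image" else self.event_enc
        feat = enc(x)                                        # B x C x H x W
        b, c, h, w = feat.shape
        flat = feat.permute(0, 2, 3, 1).reshape(-1, c)       # one GRU update per pixel
        state = self.gru(flat, state)
        depth = self.decoder(state.reshape(b, h, w, c).permute(0, 3, 1, 2))
        return state, depth

net = AsyncFusionSketch()
state, depth = net.step(torch.rand(1, 5, 16, 16), "events")        # events arrive first
state, depth = net.step(torch.rand(1, 3, 16, 16), "image", state)  # a later frame updates the same state
print(depth.shape)  # torch.Size([1, 1, 16, 16])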

You can find a PDF of the paper here and the project homepage here. If you use this work in an academic context, please cite the following publication:

@Article{RAL21Gehrig,
  author        = {Daniel Gehrig and Michelle Rüegg and Mathias Gehrig and Javier Hidalgo-Carrio and Davide Scaramuzza},
  title         = {Combining Events and Frames using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction},
  journal       = {{IEEE} Robotics and Automation Letters (RA-L)},
  url           = {http://rpg.ifi.uzh.ch/docs/RAL21_Gehrig.pdf},
  year          = 2021
}

If you use the event-camera plugin for CARLA, please cite the following publication:

@Article{Hidalgo20threedv,
  author        = {Javier Hidalgo-Carrio and Daniel Gehrig and Davide Scaramuzza},
  title         = {Learning Monocular Dense Depth from Events},
  journal       = {{IEEE} International Conference on 3D Vision (3DV)},
  url           = {http://rpg.ifi.uzh.ch/docs/3DV20_Hidalgo.pdf},
  year          = 2020
}

Install with Anaconda

The installation requires Anaconda3. You can create a new Anaconda environment with the required dependencies as follows (make sure to adapt the CUDA toolkit version according to your setup):

conda create --name RAMNET python=3.7
conda activate RAMNET
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
pip install tb-nightly kornia scikit-learn scikit-image opencv-python
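
After installation, a quick sanity check (not part of the repository) confirms that PyTorch imports correctly and can see the GPU before you launch any experiments:

# Sanity check for the new environment (illustrative, not from the repo).
import torch
print(torch.__version__)          # PyTorch version installed above
print(torch.cuda.is_available())  # should print True if CUDA 10.2 matches your driver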

Branches

To run experiments on EventScape, please switch to the main branch:

git checkout main

To run experiments on real data from MVSEC, switch to the asynchronous_irregular_real_data branch:

git checkout asynchronous_irregular_real_data

Checkpoints

The checkpoints for RAM-Net can be found here.
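
Once downloaded, a checkpoint can be inspected with plain PyTorch before wiring it into the code; the file name below is a placeholder:

# Hypothetical example: peek inside a downloaded checkpoint.
import torch
checkpoint = torch.load("ramnet_checkpoint.pth.tar", map_location="cpu")  # placeholder path
# Training checkpoints are commonly dicts with keys such as "state_dict";
# a bare state dict simply maps parameter names to tensors.
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys()))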

EventScape

This work uses the EventScape dataset, which can be downloaded here.

[Video: EventScape dataset preview]
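
Networks in this line of work typically consume events as a spatio-temporal voxel grid rather than as raw tuples. The sketch below shows one common way to build such a grid from (x, y, t, polarity) events with bilinear weighting along time; it is a generic illustration, and the repository's exact preprocessing may differ.

import torch

def events_to_voxel_grid(x, y, t, p, num_bins, height, width):
    # Accumulate events into a (num_bins, H, W) grid, splitting each event
    # bilinearly between its two neighbouring time bins.
    grid = torch.zeros(num_bins * height * width)
    duration = max(float(t.max() - t.min()), 1e-9)
    t_norm = (t - t.min()) / duration * (num_bins - 1)
    pol = 2.0 * p.float() - 1.0                  # map {0, 1} polarity to {-1, +1}
    left = t_norm.floor().long()
    right_weight = t_norm - left.float()
    lin = y.long() * width + x.long()            # linear pixel index
    grid.index_add_(0, left * height * width + lin, pol * (1.0 - right_weight))
    valid = left + 1 < num_bins
    grid.index_add_(0, (left[valid] + 1) * height * width + lin[valid],
                    pol[valid] * right_weight[valid])
    return grid.view(num_bins, height, width)

events = torch.rand(1000, 4)                     # synthetic events: x, y, t, polarity
vox = events_to_voxel_grid((events[:, 0] * 63).long(), (events[:, 1] * 63).long(),
                           events[:, 2], (events[:, 3] > 0.5).long(),
                           num_bins=5, height=64, width=64)
print(vox.shape)  # torch.Size([5, 64, 64])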

Qualitative results on MVSEC

Here, qualitative results of RAM-Net are shown against state-of-the-art methods. The video shows MegaDepth, E2Depth, and RAM-Net in the upper row, and the image and event inputs together with the ground-truth depth in the lower row.

[Video: qualitative comparison on MVSEC]

Using RAM-Net

A detailed description of how to run the code can be found in the README in the /RAM_Net folder. Another README in /RAM_Net/configs describes the meaning of the different parameters in the config files.
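
As a quick way to explore those parameters, the snippet below loads a config and prints its top-level entries. The file name and the JSON format are assumptions; the README in /RAM_Net/configs remains the authoritative reference.

# Hypothetical helper: list the top-level entries of a training config.
# The path and JSON format are assumptions; see /RAM_Net/configs for details.
import json

with open("RAM_Net/configs/example_config.json") as f:  # placeholder file name
    config = json.load(f)

for key, value in config.items():
    print(f"{key}: {type(value).__name__}")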
