
Code for the paper "Learning to Super Resolve Intensity Images from Events" (CVPR-2020-Oral).


Events to Super-Resolved Images (E2SRI)

This is the code repository for Learning to Super Resolve Intensity Images from Events (CVPR 2020, Oral)
Mohammad Mostafavi, Jonghyun Choi and Kuk-Jin Yoon (corresponding author)

E2SRI

Our extended and upgraded version, E2SRI: Learning to Super-Resolve Intensity Images from Events (TPAMI 2021), produces highly consistent videos and includes further details and experiments.

If you use any of this code, please cite both of the following publications:

@article{mostafavi2021e2sri,
  title={E2SRI: Learning to Super-Resolve Intensity Images from Events},
  author={Mostafaviisfahani, Sayed Mohammad and Nam, Yeongwoo and Choi, Jonghyun and Yoon, Kuk-Jin},
  journal={IEEE Transactions on Pattern Analysis \& Machine Intelligence},
  number={01},
  pages={1--1},
  year={2021},
  publisher={IEEE Computer Society}
}
@inproceedings{mostafavi2020e2sri,
  author    = {Mostafavi I., S. Mohammad and Choi, Jonghyun and Yoon, Kuk-Jin},
  title     = {Learning to Super Resolve Intensity Images from Events},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  month     = {June},
  year      = {2020},
  pages     = {2768--2786}
}

Maintainer

Set-up

  • Make your own environment
python -m venv ./e2sri
source e2sri/bin/activate
  • Install the requirements
cd e2sri
pip install -r requirements.txt
  • Unzip pyflow
cd src
unzip pyflow.zip
cd pyflow
python3 setup.py build_ext -i

Preliminary

  • Download the linked material below

    • Sample pretrained weight (2x_7s.pth) for 2x scale (2x width and 2x height) and 7S sequences of stacks.
    • Sample dataset for training and testing (datasets.zip).
  • Unzip the dataset.zip file and put the .pth weight file in the main folder

unzip dataset.zip
cd src

Inference

  • Run inference:
python test.py --data_dir ../dataset/slider_depth --checkpoint_path ../save_dir/2x_7s.pth --save_dir ../save_dir

Note that our code with the given weights (7S) consumes ~4753 MiB of GPU memory at inference.

From this sample event stack, you should obtain a (resized) result similar to the example image shown in the repository.

Training

  • Run training:
python3 train.py --config_path ./configs/2x_3.yaml --data_dir ../dataset/Gray_5K_7s_tiny --save_dir ../save_dir

Event Stacking

We provide a sample sequence (slider_depth.zip) made from the rosbags of the Event Camera Dataset and Simulator. A rosbag (bag) is a file format in ROS (Robot Operating System) for storing ROS message data. You can make other sequences using the provided MATLAB script (/e2sri/stacking/make_stacks.m). The MATLAB code depends on matlab_rosbag, which is included in the stacking folder and needs to be unzipped.

Note: The output image quality depends on "events_per_stack" and "stack_shift". We used "events_per_stack"=5000; however, we did not rely on "stack_shift", since we synchronized with APS frames instead. The APS-synchronized stacking code, which keeps this 5000-event setting, will be released together with the training code.
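The count-based stacking described above can be sketched in Python. This is only an illustration of the idea, not the repository's implementation (which is the MATLAB script make_stacks.m); the (t, x, y, polarity) tuple layout and the function names here are assumptions.

```python
# Illustrative sketch of fixed-count event stacking ("events_per_stack"=5000
# in the paper). The (t, x, y, polarity) event layout and these function
# names are assumptions; make_stacks.m defines the actual stack format.

def make_stacks(events, events_per_stack=5000):
    """Group a time-ordered event list into consecutive fixed-size stacks.

    A trailing partial stack is dropped, since count-based stacking
    requires exactly `events_per_stack` events per stack.
    """
    return [
        events[i:i + events_per_stack]
        for i in range(0, len(events) - events_per_stack + 1, events_per_stack)
    ]


def stack_to_frame(stack, height, width):
    """Accumulate one stack into a signed per-pixel polarity histogram."""
    frame = [[0] * width for _ in range(height)]
    for _t, x, y, polarity in stack:
        frame[y][x] += 1 if polarity > 0 else -1
    return frame
```

With APS synchronization, the start index of each stack would instead be chosen so that stacks align with APS frame timestamps, rather than advancing by a fixed "stack_shift".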

Datasets

A list of publicly available event datasets for testing:

Related publications

License

MIT license.
