Seeing Motion in the Dark
Code and Data
Required Python (version 2.7) libraries: TensorFlow (1.8.0) + SciPy + NumPy + Rawpy + OpenCV (4.1.0).
Tested on Ubuntu 16.04 with an Nvidia Tesla V100 (32 GB), CUDA (>=9.0), and CuDNN (>=7.1). CPU mode should also work with minor changes, but it has not been tested.
Testing and training the models
To retrain a new model, run:
python download_VGG_models.py
python train.py
To generate the 5th frame of each video, run:
To generate the videos, run:
By default, the code reads input data from the ./DRV/ folder and writes results to the output folder.
Original sensor raw data
If you use our code and dataset for research, please cite our paper:
Chen Chen, Qifeng Chen, Minh N. Do, and Vladlen Koltun, "Seeing Motion in the Dark", in ICCV, 2019.
- Can I test my own data using the provided model?
The proposed method is designed for sensor raw data. The pretrained model will probably not work for data from another camera sensor, and we do not provide support for other cameras' data. It also does not work on images that have passed through the camera ISP, i.e., JPG or PNG data.
- Will this be in any product?
This is a research project and a prototype to prove a concept.
- How can I train the model using my own raw data?
Generally, you will need to pre-process your data in a similar way: black level subtraction, packing, applying the target gain, and running some pre-defined temporal filters. The test data should be pre-processed in the same way.
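The steps above can be sketched as follows. This is an illustrative NumPy sketch, not the project's actual pre-processing code: the function names (pack_raw, temporal_mean), the black/white levels, the gain value, and the RGGB pattern are assumptions; substitute the values that match your own sensor and exposure setting.

```python
import numpy as np

def pack_raw(bayer, black_level=512, white_level=16383, gain=100.0):
    """Pack an H x W Bayer mosaic into an (H/2, W/2, 4) tensor.

    black_level, white_level, and gain are placeholder values for
    illustration; use the ones that match your own camera sensor.
    """
    # Black level subtraction and normalization to [0, 1].
    im = (bayer.astype(np.float32) - black_level) / (white_level - black_level)
    im = np.maximum(im, 0.0)

    # Pack the 2x2 Bayer pattern into 4 channels (R, G, G, B for RGGB).
    packed = np.stack([im[0::2, 0::2],   # R
                       im[0::2, 1::2],   # G
                       im[1::2, 0::2],   # G
                       im[1::2, 1::2]],  # B
                      axis=-1)

    # Apply the target amplification gain.
    return packed * gain

def temporal_mean(frames):
    """A trivial stand-in for a pre-defined temporal filter:
    average a list of packed frames along the time axis."""
    return np.mean(np.stack(frames, axis=0), axis=0)
```

A simple mean is used here only as a stand-in; the actual pre-defined temporal filters used by the method are defined in the training code.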
- What if my GPU memory is too small to train model?
We provide pretrain_on_small.py for GPUs with small memory. After training at small resolution, you will need to finetune the model on the CPU using train.py (modify the epoch count and learning rate so that training continues).
If you have additional questions after reading the FAQ, please email email@example.com.