
Learning Blind Motion Deblurring

TensorFlow implementation of multi-frame blind deconvolution:

Learning Blind Motion Deblurring
Patrick Wieschollek, Michael Hirsch, Bernhard Schölkopf, Hendrik P.A. Lensch
ICCV 2017

Download the results from the paper. We suggest using the saccade-viewer to compare the images qualitatively.



1. Get YouTube videos

The first step is to gather videos from arbitrary sources. We use YouTube to obtain videos with diverse content, recorded with a variety of equipment. To download these videos, we use the Python tool youtube-dl.

pip install youtube-dl --user

Some examples are given in the repository. Note that you can use any mp4 video for this task; in fact, this re-implementation uses some other videos, which also work well.
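For scripted downloads, the youtube-dl invocation can be assembled in Python. The helper below is a sketch, not part of the repository; the `-f mp4` format selection and the output filename template are assumptions you may want to adjust:

```python
import subprocess

def youtube_dl_cmd(url, output="%(id)s.mp4"):
    # -f mp4 requests an mp4 stream; -o sets the output filename
    # template (here: the video id -- a hypothetical default).
    return ["youtube-dl", "-f", "mp4", "-o", output, url]

def download(url):
    # Runs the actual download; requires youtube-dl on the PATH.
    subprocess.check_call(youtube_dl_cmd(url))
```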

2. Generate Synthetic Motion Blur

Now, we use optical flow to synthetically add motion blur. We used the simplest optical-flow method, which provides reasonable results (we average frames anyway):

cd synthblur
mkdir build && cd build
cmake ..
make all

To convert a video input.mp4 into a blurry version, run

./synthblur/build/convert "input.mp4"

This gives you multiple outputs:

  • 'input.mp4_blurry.mp4'
  • 'input.mp4_sharp.mp4'
  • 'input.mp4_flow.mp4'
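Conceptually, the blur synthesis averages several consecutive, flow-interpolated frames. A minimal NumPy sketch of just the averaging step (synthblur additionally warps intermediate frames along the optical flow, which this sketch omits):

```python
import numpy as np

def average_frames(frames):
    """Average consecutive frames to approximate motion blur.

    frames: list of HxWx3 uint8 arrays (consecutive video frames).
    The real pipeline first interpolates sub-frames along the
    optical flow; here we only demonstrate the averaging.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)
```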

Blur from synthetic camera shake is added on-the-fly.

3. Building a Database

For performance reasons, we randomly sample frames from all videos beforehand and store 5+5 consecutive frames (sharp + blurry) in LMDB files for training, validation, and testing.

I use

for i in `seq 1 30`; do
    python --pattern '/graphics/scratch/wieschol/YouTubeDataset/train/*_blurry.mp4' --lmdb /graphics/scratch/wieschol/YouTubeDataset/train$i.lmdb --num 5000
done

for i in `seq 1 10`; do
    python --pattern '/graphics/scratch/wieschol/YouTubeDataset/val/*_blurry.mp4' --lmdb /graphics/scratch/wieschol/YouTubeDataset/val$i.lmdb --num 5000
done

To visualize the training examples just run

python --lmdb /graphics/scratch/wieschol/YouTubeDataset/train1.lmdb --show --num 5000
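The random sampling behind these scripts can be sketched as follows. `sample_windows` is a hypothetical helper, not code from the repository: it picks start indices such that 5 consecutive frames fit in the video; each index later yields the 5 sharp + 5 blurry frames written to the LMDB:

```python
import random

def sample_windows(num_frames, num_samples, window=5, seed=0):
    # Pick random start indices so that `window` consecutive
    # frames starting there still lie inside the video.
    rng = random.Random(seed)
    last_start = num_frames - window
    return [rng.randint(0, last_start) for _ in range(num_samples)]
```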


4. Training

This re-implementation uses TensorPack instead of the custom library used for the paper. Start training with

python --gpu 0,1 --data path/to/lmdb-files/


See the release section for full-resolution images produced by our approach.

Further experiments

We further tried a convLSTM/convGRU and a multi-scale approach (instead of the simple approach from the paper). These scripts are available in additional_scripts.


Some months ago, I re-trained a slightly larger model in TensorPack just to test the TensorPack library. It seems to have similar performance, although it is not compatible with this GitHub project. Find the inference code/weights here.

Please note that TensorFlow introduces some changes over time. This setup was tested with:

  • Python 2.7
  • tensorflow-gpu v1.9.0
  • CUDA 9.0
  • Tensorpack 0.1.6 (from 17 Feb 2017)