
Bayesian FlowNetS in Tensorflow

Tensorflow implementation of FlowNetS, the optical flow prediction network by Alexey Dosovitskiy et al.

The network can be equipped with dropout layers to produce confidence images via MC dropout after training, as introduced here. The positions of the dropout layers closely follow other encoder-decoder architectures such as Bayesian SegNet or Deep Depth From Focus.
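The idea behind MC dropout is to keep dropout active at test time, run several stochastic forward passes through "different" thinned networks, and read the predictive variance as (inverse) confidence. A toy NumPy stand-in for this procedure (the repository applies it to FlowNetS in TensorFlow; the linear "network" below is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, w, p_drop=0.5):
    """One forward pass of a toy linear 'network' with dropout on the weights.
    A stand-in for a FlowNetS pass with its dropout layers kept active."""
    mask = rng.random(w.shape) >= p_drop
    return x @ (w * mask) / (1.0 - p_drop)

x = rng.standard_normal((1, 16))   # one flattened toy input
w = rng.standard_normal((16, 2))   # maps to a 2-channel (u, v) prediction

# MC dropout: T stochastic passes through independently thinned networks.
samples = np.stack([stochastic_forward(x, w) for _ in range(100)])

mean_pred = samples.mean(axis=0)        # averaged flow estimate
var_pred = samples.var(axis=0)          # per-output predictive variance
confidence = 1.0 / (var_pred + 1e-8)    # high variance -> low confidence
```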

The confidence images are then used to improve results (with limited success) by post-processing with the Fast Bilateral Solver.

Training

The architecture is trained on the FlyingChairs dataset. A Tensorflow reader for the dataset's original .ppm images is still missing (contributions welcome); to enable fast reading here, the images were first converted to .jpg.
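The one-off .ppm-to-.jpg conversion could be sketched as follows, assuming Pillow is installed. This helper is not part of the repository; the directory layout and JPEG quality are assumptions:

```python
from pathlib import Path

from PIL import Image


def ppm_dir_to_jpg(src_dir, dst_dir, quality=95):
    """Convert every .ppm image in src_dir to a .jpg of the same stem in
    dst_dir.  Hypothetical helper; the repository does not ship this script."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for ppm in sorted(src.glob("*.ppm")):
        Image.open(ppm).save(dst / (ppm.stem + ".jpg"), quality=quality)
```

The .flo ground-truth files are left untouched; only the image pairs need converting.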

To get similar results as reported below, simply start training with

python train.py --datadir /path/to/FlyingChairs/ 

where the folder FlyingChairs/ simply contains the ~27k numbered -img1.jpg, -img2.jpg and -.flo training files (note the .jpg extension). To incorporate dropout layers, run

python train.py --datadir /path/to/FlyingChairs/ --dropout True

Check the standard hyperparameters in train.py; note that the results are sensitive to the amount of data augmentation used. The training loss looks something like this:

Data Augmentation

Heavy data augmentation is used to improve generalization and performance.
Check flownet.py for

  • chromatic augmentation
  • geometric augmentation (rotation + translation)

Please note that when we flip, crop, rotate or scale the images, the flow components (u, v) must be transformed consistently with the change of pixel coordinates (x, y).
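For example, a horizontal flip mirrors the pixel grid and negates the u component, and a rotation by an angle theta must rotate the (u, v) vectors by the same angle. A minimal NumPy sketch of these two corrections (illustrative, not the repository's augmentation code):

```python
import numpy as np


def flip_flow_horizontal(flow):
    """Horizontally flip a (H, W, 2) flow field: mirror the pixels and
    negate u, since x-displacements reverse direction under the flip."""
    flipped = flow[:, ::-1].copy()
    flipped[..., 0] *= -1.0
    return flipped


def rotate_flow_vectors(flow, theta):
    """Rotate the (u, v) vectors of a flow field by theta radians.
    (The pixel grid itself is rotated separately by the image warp.)"""
    c, s = np.cos(theta), np.sin(theta)
    u, v = flow[..., 0], flow[..., 1]
    return np.stack([c * u - s * v, s * u + c * v], axis=-1)
```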

Loss

The L1 loss is calculated multiple times while decoding, so the original ground-truth flow must be "downsampled" to each scale. The original caffe version does this through a weighted average; here, simple bilinear interpolation is used, which could have negative effects on performance.
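The multi-scale loss can be sketched as follows. The block-averaging downsampler is a stand-in for the bilinear resize actually used here; note that the displacement values are also divided by the factor so they stay in units of the downsampled grid (whether the repository applies this rescaling is an assumption). The equal per-scale weighting is illustrative, not the repository's exact schedule:

```python
import numpy as np


def downsample_flow(flow, factor):
    """Downsample a (H, W, 2) flow field by an integer factor via block
    averaging, dividing the displacements by the same factor."""
    h, w, c = flow.shape
    pooled = flow.reshape(h // factor, factor,
                          w // factor, factor, c).mean(axis=(1, 3))
    return pooled / factor


def multiscale_l1(preds, gt_flow):
    """Sum of per-scale L1 losses.  `preds` maps a downsampling factor to
    the flow predicted at that scale (hypothetical helper)."""
    return sum(np.abs(p - downsample_flow(gt_flow, f)).mean()
               for f, p in preds.items())
```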

Evaluation

Evaluation scripts are provided for the FlyingChairs, Sintel (clean / final) and Kitti datasets, e.g.

python eval_var_flownet_s.py --dropout True/False

They evaluate either by scaling the weights to fixed magnitudes after dropout training, with the parameters

--dropout True / --is_training False

or by loading a single test example, building a minibatch (of size = FLAGS.batchsize) of the same image, and averaging the results over the minibatch, with the parameters

--dropout True / --is_training True

Note that is_training is misleadingly named, for simplicity. Because the minibatch runs the same image through "different" thinned models, the variance of the minibatch results yields confidence images. Evaluation throughout training on the FlyingChairs test set (pink) as well as the Sintel Clean (orange), Sintel Final (gray) and Kitti (blue) training sets:
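The minibatch trick above can be sketched in NumPy: tile one image into a batch, run a single forward pass with dropout left active so each batch element sees an independently thinned model, then average for the estimate and take the per-pixel variance for the confidence image. The noisy "network" below is a toy stand-in, not FlowNetS:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_predict(batch, p_drop=0.5):
    """Toy stand-in for a dropout network in training mode: each batch
    element passes through an independently 'thinned' model."""
    mask = rng.random(batch.shape) >= p_drop
    return batch * mask / (1.0 - p_drop)

image = rng.random((8, 8, 2))                  # one test example (u, v)
batch = np.repeat(image[None], 32, axis=0)     # minibatch of the same image

preds = noisy_predict(batch)                   # one pass, dropout active
flow = preds.mean(axis=0)                      # averaged flow estimate
confidence = 1.0 / (preds.var(axis=0) + 1e-8)  # per-pixel confidence image
```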

Examples

Training images together with groundtruth, flow estimation, confidence and error images. Two examples from the FlyingChairs dataset:

Groundtruth images:

Predicted flow images:

Confidence images:

Error images (note the similarities and differences to the confidence images):
