A collection of TensorFlow (Tensorpack) implementations of recent deep learning approaches including pretrained models.
Latest commit 33962bb Aug 24, 2018


TensorFlow-Recipes (Tensorpack-Recipes)

Several TensorFlow implementations of recent papers based on the tensorpack framework.

Unfortunately, there is a difference between re-implementing a deep-learning paper and re-obtaining the published performance. The latter usually requires tedious hyper-parameter optimization, among other things such as very long training times. Hence, there is no guarantee that the following implementations reach the published performance. However, you can judge this yourself using our pretrained models.

  • PWC (Sun et al., CVPR 2018) [pdf] [model PWC] PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume
    • there is some color bleeding in the predicted flow
    • just the inference part of PWC
  • Learning To See in the Dark (Chen et al., CVPR 2018) [pdf] [pretrained model] Learning to See in the Dark
    • the toughest part seems to be the data pre-processing
    • there are some over-exposed pixels in the prediction
  • ProgressiveGrowingGan (Karras et al., ICLR 2018) [pdf] Progressive Growing of GANs for Improved Quality, Stability, and Variation
    • seems to produce visually good results at smaller resolutions; larger resolutions were not feasible due to hardware constraints
    • uses RMSprop and no gradient clipping (forgot to activate it)
  • EnhanceNet (Sajjadi et al., ICCV 2017) [pdf] [pretrained model] EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis
    • visually similar performance; seems to produce fewer artifacts than the authors' implementation
  • FlowNet2 (Ilg et al., CVPR 2017) [pdf] [model FlowNet2-S] [model FlowNet2-C] [model FlowNet2] FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks
    • just the inference part of FlowNet2-S, FlowNet2-C, FlowNet2
    • please respect the license of the pre-trained weights
    • this TensorFlow version gets an AEE(train) of 2.10, while the authors report 2.03
  • LetThereBeColor (Iizuka et al., SIGGRAPH 2016) [pdf] [pretrained model] Let there be Color!: Joint End-to-end Learning of Global and Local Image Priors for Automatic Image Colorization with Simultaneous Classification
    • slightly worse performance, probably due to shorter training time (the authors report 3 weeks; we trained only a few days)
  • DeepVideoDeblurring (Su et al., CVPR 2017) [pdf] Deep Video Deblurring
    • similar performance when trained on our dataset
  • SplitBrainAutoEncoder (Zhang et al., CVPR 2017) [pdf] Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction
    • not finished yet
  • PointNet (Qi et al., CVPR 2017) [pdf] [pretrained model] PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
    • reproduces the accuracy from the paper
    • uses the dataset provided by the authors
  • SubPixelSuperResolution (Shi et al., CVPR 2016) [pdf] Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
    • not reproduced yet; the gap might be caused by the resizing method (PIL vs. OpenCV vs. TensorFlow)
  • ImageRestorationSymmetricSkip (Mao et al., NIPS 2016) [pdf] Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections
    • slightly worse performance
  • AlphaGo (Silver et al., Nature 2016) [pdf]
    • just the Policy Network (SL) from AlphaGo
    • validation accuracy is ~51% (the paper reports 54%)
  • DynamicFilterNetwork (Brabandere et al., NIPS 2016) [pdf] Dynamic Filter Network
    • reproduces the steering filter example
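The "Warping" in the PWC-Net title (and the "W" in FlowNet2's refinement stages) refers to backward-warping one frame toward the other using the current flow estimate. The sketch below is not from this repository; it is a minimal NumPy illustration of the idea, using nearest-neighbour sampling for brevity (the networks themselves use differentiable bilinear sampling), and the function name is my own:

```python
import numpy as np

def backward_warp(img, flow):
    """Sample img at positions displaced by flow.

    img:  (H, W, C) image to warp.
    flow: (H, W, 2) per-pixel (dx, dy) displacements.
    Nearest-neighbour sampling with border clamping, for illustration only.
    """
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Query coordinates: where each output pixel reads from in img.
    xq = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    yq = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[yq, xq]
```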

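The AEE number quoted for FlowNet2 above is the average endpoint error: the mean Euclidean distance between predicted and ground-truth flow vectors. A minimal NumPy sketch (function name is my own, not from this repository):

```python
import numpy as np

def average_endpoint_error(flow_pred, flow_gt):
    """Average endpoint error (AEE) between two flow fields.

    Both arrays have shape (H, W, 2) holding per-pixel (u, v) displacements.
    """
    diff = flow_pred - flow_gt
    epe = np.sqrt(np.sum(diff ** 2, axis=-1))  # per-pixel endpoint error
    return float(epe.mean())
```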
I do not judge the papers and methods. Reproducing deep-learning papers with meaningful performance is difficult, so there may be some tricks I have missed. There is not always the motivation/time to make every implementation work perfectly.
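As a concrete example of one of the listed building blocks: the sub-pixel layer from Shi et al. rearranges a (H, W, C·r²) feature map into an upscaled (H·r, W·r, C) image, which in TensorFlow corresponds to `tf.nn.depth_to_space` in NHWC layout. A NumPy sketch of that rearrangement, assuming channels-last layout (function name is my own):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (H, W, C*r*r) array into (H*r, W*r, C).

    Mirrors tf.nn.depth_to_space with NHWC layout (batch dim omitted).
    """
    h, w, c = x.shape
    assert c % (r * r) == 0
    out_c = c // (r * r)
    # Split channels into an r-by-r block per output channel...
    x = x.reshape(h, w, r, r, out_c)
    # ...and interleave the blocks spatially.
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(h * r, w * r, out_c)
```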

  • model means a pre-trained model provided by the authors and ported to TensorFlow
  • pre-trained model means a model trained with the provided script above