| Name | Latest commit message | Commit date |
| --- | --- | --- |
| A3C-Gym | fix mistake in readme (fix #903) | Sep 21, 2018 |
| CTC-TIMIT | make dataflow idiomatic python container objects (fix #869) (#872) | Aug 31, 2018 |
| CaffeModels | rewrite VGG16 as well (#835) | Jul 20, 2018 |
| Char-RNN | make dataflow idiomatic python container objects (fix #869) (#872) | Aug 31, 2018 |
| ConvolutionalPoseMachines | Move Caffe models together. | Mar 8, 2018 |
| DeepQNetwork | DQN supports gym as well. | Sep 17, 2018 |
| DisturbLabel | make dataflow idiomatic python container objects (fix #869) (#872) | Aug 31, 2018 |
| DoReFa-Net | Use clip(0, 1) in svhn-dorefa to match alexnet-dorefa (fix #920) | Oct 5, 2018 |
| DynamicFilterNetwork | update docs | Sep 17, 2018 |
| FasterRCNN | update docs | Nov 15, 2018 |
| GAN | Fix reference leak in call_only_once, use memoized_method for methods. ( | Nov 6, 2018 |
| HED | make dataflow idiomatic python container objects (fix #869) (#872) | Aug 31, 2018 |
| ImageNetModels | update docs | Nov 2, 2018 |
| OpticalFlow | tower_func option in InferenceRunner | Sep 27, 2018 |
| PennTreebank | fix horovod trainer broadcast stage again | Aug 30, 2018 |
| ResNet | update docs | Nov 2, 2018 |
| Saliency | suppress some prctl warnings | Sep 19, 2018 |
| ShuffleNet | Move ImageNet models together | Mar 8, 2018 |
| SimilarityLearning | make dataflow idiomatic python container objects (fix #869) (#872) | Aug 31, 2018 |
| SpatialTransformer | make dataflow idiomatic python container objects (fix #869) (#872) | Aug 31, 2018 |
| SuperResolution | update docs; fix #975 | Nov 7, 2018 |
| basics | Check global_step in MinSaver (fix #966) | Nov 5, 2018 |
| keras | Pre/Post processing in ImageNetModel | Sep 25, 2018 |
| README.md | add flownet2 inference examples (#853) | Aug 24, 2018 |
| boilerplate.py | make dataflow idiomatic python container objects (fix #869) (#872) | Aug 31, 2018 |
| tox.ini | update docs | Oct 30, 2018 |


# Tensorpack Examples

Training examples with reproducible performance.

The word "reproduce" should always mean reproduce performance. With the magic of SGD, wrong deep learning code often appears to work, especially if you try it on toy datasets. Github is full of such deep learning code that "implements" but does not "reproduce" methods. See Unawareness of Deep Learning Mistakes.

We refuse toy examples. Instead of showing you 10 arbitrary networks trained on toy datasets with random final performance, tensorpack examples faithfully replicate the experiments and performance reported in the paper, so you can be confident that they are correct.

## Getting Started

These are all the toy examples in tensorpack; they are meant to be simple demos. A minimal tensorpack training script looks roughly like the sketch below.
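
This sketch is not code from any particular example: it is loosely patterned after boilerplate.py and the basics/ demos, assumes the TF 1.x API and tensorpack's 2018-era interface, and its `Model` class, tiny network, and hyperparameters are illustrative placeholders.

```python
# Minimal tensorpack training sketch (illustrative; see boilerplate.py and
# the basics/ demos for real, maintained versions of this pattern).
import tensorflow as tf
from tensorpack import (ModelDesc, TrainConfig, SimpleTrainer,
                        ModelSaver, launch_train_with_config)
from tensorpack.dataflow import BatchData, dataset


class Model(ModelDesc):
    def inputs(self):
        # Placeholders for a batch of 28x28 grayscale images and their labels.
        return [tf.placeholder(tf.float32, (None, 28, 28), 'input'),
                tf.placeholder(tf.int32, (None,), 'label')]

    def build_graph(self, image, label):
        # Deliberately tiny network: flatten -> linear classifier.
        logits = tf.layers.dense(tf.layers.flatten(image), 10)
        tf.nn.softmax(logits, name='prob')  # named output, usable for inference later
        cost = tf.losses.sparse_softmax_cross_entropy(label, logits)
        return tf.identity(cost, name='total_cost')  # the trainer minimizes the returned cost

    def optimizer(self):
        return tf.train.AdamOptimizer(1e-3)


if __name__ == '__main__':
    # MNIST DataFlow, batched; tensorpack downloads the dataset on first use.
    df = BatchData(dataset.Mnist('train'), 128)
    config = TrainConfig(
        model=Model(),
        dataflow=df,
        callbacks=[ModelSaver()],
        max_epoch=10,
    )
    launch_train_with_config(config, SimpleTrainer())
```

Most of the examples below follow this same structure, swapping in a real model, DataFlow, callbacks, and trainer.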

## Vision

| Name | Performance |
| --- | --- |
| Train ResNet, ShuffleNet and other models on ImageNet | reproduce paper |
| Train Faster-RCNN / Mask-RCNN on COCO | reproduce paper |
| Generative Adversarial Network (GAN) variants, including DCGAN, InfoGAN, Conditional GAN, WGAN, BEGAN, DiscoGAN, Image to Image, CycleGAN | visually reproduce |
| DoReFa-Net: training binary / low-bitwidth CNN on ImageNet | reproduce paper |
| Fully-convolutional Network for Holistically-Nested Edge Detection (HED) | visually reproduce |
| Spatial Transformer Networks on MNIST addition | reproduce paper |
| Visualize CNN saliency maps | visually reproduce |
| Similarity learning on MNIST | |
| Single-image super-resolution using EnhanceNet | |
| Learn steering filters with Dynamic Filter Networks | visually reproduce |
| Load a pre-trained AlexNet, VGG, or Convolutional Pose Machines (see the inference sketch below this table) | |
| Load a pre-trained FlowNet2-S, FlowNet2-C, FlowNet2 | |
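
For the "load a pre-trained ..." entries, inference looks roughly like the hedged sketch below. This is not code from those examples: `Model` is assumed to be a `ModelDesc` like the one sketched above, and the weight file and tensor names are placeholders.

```python
# Illustrative inference sketch (placeholder model, path, and tensor names).
import numpy as np
from tensorpack import PredictConfig, OfflinePredictor
from tensorpack.tfutils.sessinit import get_model_loader

pred = OfflinePredictor(PredictConfig(
    model=Model(),                               # a ModelDesc, e.g. the training sketch above
    session_init=get_model_loader('model.npz'),  # placeholder path to saved weights
    input_names=['input'],                       # names given to the placeholders in Model.inputs()
    output_names=['prob'],                       # any named tensor in the graph
))
outputs = pred(np.zeros((1, 28, 28), dtype=np.float32))  # returns a list of output arrays
```

`get_model_loader` handles both TF checkpoints and `.npz`/`.npy` weight dicts, which is how the pre-trained models for these examples are typically distributed.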

## Reinforcement Learning

| Name | Performance |
| --- | --- |
| Deep Q-Network (DQN) variants on Atari games, including DQN, DoubleDQN, DuelingDQN | reproduce paper |
| Asynchronous Advantage Actor-Critic (A3C) on Atari games | reproduce paper |

## Speech / NLP

| Name | Performance |
| --- | --- |
| LSTM-CTC for speech recognition | reproduce paper |
| char-rnn for fun | fun |
| LSTM language model on PennTreebank | reproduce reference code |