
CondenseNet TensorFlow

TensorFlow implementation of CondenseNet: An Efficient DenseNet using Learned Group Convolutions. The code is tested on CIFAR-10; the inference phase has not been implemented yet.

Model architecture

Official PyTorch implementation by @ShichenLiu here.

Prerequisites

Data

Preparation

  • Go to the data/ folder and run python2 generate_cifar10_tfrecords.py --data-dir=./cifar-10-data. This script is borrowed directly from the official TensorFlow repo and has to be run with Python 2.7+ (see the parsing sketch below).
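
A minimal sketch of how the resulting TFRecords can be read back with tf.data, assuming the record layout written by the official script (flat CHW uint8 image bytes under the key 'image', an int64 class id under 'label') and its default train.tfrecords file name; the actual input pipeline lives in cifar10.py and may differ:

```python
import tensorflow as tf

def parse_record(serialized):
    # Feature keys follow the official generate_cifar10_tfrecords.py layout.
    features = tf.parse_single_example(
        serialized,
        features={
            'image': tf.FixedLenFeature([], tf.string),
            'label': tf.FixedLenFeature([], tf.int64),
        })
    # Each image is stored as 3 * 32 * 32 raw uint8 bytes in CHW order.
    image = tf.decode_raw(features['image'], tf.uint8)
    image = tf.reshape(image, [3, 32, 32])
    image = tf.transpose(image, [1, 2, 0])  # CHW -> HWC
    image = tf.cast(image, tf.float32) / 255.0
    label = tf.cast(features['label'], tf.int32)
    return image, label

dataset = (tf.data.TFRecordDataset('./cifar-10-data/train.tfrecords')
           .map(parse_record)
           .shuffle(buffer_size=10000)
           .batch(64))
```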

Train

Use default parameters:

python main.py

Check out tunable hyper-parameters:

python main.py --help

Other parameters, including stages, groups, condense factor, and growth rate, are set in experiment.py.
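
For illustration, a hypothetical sketch of those settings (the variable names below are made up; the real definitions and defaults live in experiment.py, and the numbers shown are the CIFAR-10 configuration from the paper):

```python
# Hypothetical names; see experiment.py for the real definitions.
stages = [14, 14, 14]    # dense layers per stage (CondenseNet-86 on CIFAR)
growth = [8, 16, 32]     # growth rate per stage, doubling as in the paper
group = 4                # groups in the learned 1x1 group convolutions
condense_factor = 4      # each group keeps 1/condense_factor of its inputs
```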

Notes

  • Training for 300 epochs with the default settings reaches a test accuracy of 93.389% (the paper reports 94.94%). There might be some details I missed; feel free to point them out.
  • All the default parameter settings follow the paper/official PyTorch implementation.
  • The current implementations of standard group convolution and learned group convolution are very inefficient (a pile of reshape, transpose, and concat ops; see the sketch after this list); help building a much more efficient graph is welcome.
  • The evaluation phase (index select) has not been implemented yet; help there is welcome as well :D
  • Issues are welcome!
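
To make the efficiency note concrete, here is a minimal sketch of the split/convolve/concat style of standard group convolution referred to above (illustrative only, not copied from model.py):

```python
import tensorflow as tf

def group_conv2d(x, out_channels, groups, kernel_size=1, name='group_conv'):
    """Naive group convolution: split channels, convolve each slice, concat."""
    in_channels = x.get_shape().as_list()[-1]
    assert in_channels % groups == 0 and out_channels % groups == 0
    with tf.variable_scope(name):
        outputs = []
        for g, x_g in enumerate(tf.split(x, groups, axis=3)):
            # Each group sees only its own 1/groups slice of the input channels.
            outputs.append(tf.layers.conv2d(
                x_g, out_channels // groups, kernel_size,
                padding='same', use_bias=False, name='g%d' % g))
        # Stitch the per-group outputs back together along the channel axis.
        return tf.concat(outputs, axis=3)
```

Every group adds its own conv op plus the surrounding split/concat, so the graph grows with the group count; a single fused grouped-convolution op is essentially what the note above is asking for.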

Resources
