Dual Path Networks on CIFAR-10 and Fashion-MNIST datasets



We construct a dual path network with WideResNet28-10 as the backbone network. The growth rates of the DenseNet-style structure in the three convolutional stages are 16, 32, and 64, respectively. For details of dual path networks and wide residual networks, please refer to [1] and [2]. We call this model DualPathNet28-10; it has 47.75M parameters (WideResNet28-10 has 37.5M parameters).
The implementation details follow [2]. We did not fine-tune the hyperparameters, so you might get better results after fine-tuning.
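To illustrate the idea, here is a minimal sketch of a dual path block: one shared convolution feeds both a residual path (element-wise addition, as in wide ResNet) and a dense path (channel concatenation with a fixed growth rate, as in DenseNet). This is a simplified illustration of the structure described in [1], not the actual code in models/; the class name, channel sizes, and layer layout are assumptions for the example.

```python
import torch
import torch.nn as nn

class DualPathBlock(nn.Module):
    """Simplified dual path block (illustrative, not the repo's model):
    a single conv produces res_channels + growth_rate feature maps; the
    first res_channels are added to the residual path, the remaining
    growth_rate channels are concatenated onto the dense path."""

    def __init__(self, in_channels, res_channels, growth_rate):
        super().__init__()
        self.res_channels = res_channels
        self.conv = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, res_channels + growth_rate,
                      kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        res, dense = x  # (residual path, dense path)
        out = self.conv(torch.cat([res, dense], dim=1))
        # split the shared features between the two paths
        res_out = res + out[:, :self.res_channels]
        dense_out = torch.cat([dense, out[:, self.res_channels:]], dim=1)
        return res_out, dense_out

# growth rate 16, as in the first convolutional stage described above
block = DualPathBlock(in_channels=64 + 16, res_channels=64, growth_rate=16)
res = torch.randn(2, 64, 32, 32)
dense = torch.randn(2, 16, 32, 32)
r, d = block((res, dense))
# the residual path keeps 64 channels; the dense path grows 16 -> 32
```

Stacking such blocks keeps the residual width constant while the dense path widens by the growth rate per block, which is where the extra parameters over WideResNet28-10 come from.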

Results on CIFAR-10:

Accuracy: 96.35% vs. 96.10% (WideResNet28-10)

Results on Fashion-MNIST:

Accuracy: 95.73%


TODO: more dual path networks.
Contributions are welcome!


Dependencies:

pytorch: http://pytorch.org/
tensorboard: https://www.tensorflow.org/get_started/summaries_and_tensorboard
tensorboard-pytorch: https://github.com/lanpa/tensorboard-pytorch

How to Run:

# cd to the scripts folder.
cd /path-to-this-repository/scripts
# run the training script.
sh dualpath28-10.sh


The code in fashion_mnist.py is based on https://github.com/kefth/fashion-mnist/blob/master/fashionmnist.py
All rights belong to the original author.


[1] Chen, Yunpeng, Jianan Li, Huaxin Xiao, Xiaojie Jin, Shuicheng Yan, and Jiashi Feng. "Dual Path Networks." arXiv preprint arXiv:1707.01629 (2017).
[2] Zagoruyko, Sergey, and Nikos Komodakis. "Wide residual networks." arXiv preprint arXiv:1605.07146 (2016).