MNIST-comparison

Comparison of different toy network implementations for MNIST classification with TensorFlow (28x28 grayscale images)

  • Linear Classifier (`1_mnist_LIN.py`): Linearly maps the 784-dimensional (28×28) input directly to the 10 output classes (7,850 parameters, ~92.3% test accuracy).

  • Simple Feed-Forward Neural Network (`2_mnist_NN.py`): Neural network with one hidden layer of 200 units (159,010 parameters, ~97.8% test accuracy).

  • Simple Convolutional Neural Network (`3_mnist_CNN.py`): Neural network with one convolutional layer of 32 5×5 filters and one average pooling layer (46,922 parameters, ~98.6% test accuracy).

  • Advanced Convolutional Neural Network (`4_mnist_CNN2.py`): Neural network with three convolutional layers (32, 64, and 64 filters of size 3×3), two max pooling layers in between, and one dense layer with 64 units before the output layer (93,322 parameters, ~99.1% test accuracy).
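
The parameter counts above can be reproduced by hand from the layer descriptions. A minimal sketch, assuming "valid" convolution padding and 2×2 pooling with stride 2 (an assumption, but one that is consistent with the stated totals):

```python
# Hand-computed parameter counts for the four models described above.
# Assumption: convolutions use "valid" padding and pooling is 2x2, stride 2.

# 1) Linear classifier: 784 inputs -> 10 outputs, plus biases
linear = 28 * 28 * 10 + 10                      # 7,850

# 2) Feed-forward net: 784 -> 200 hidden units -> 10 outputs
ffnn = (28 * 28 * 200 + 200) + (200 * 10 + 10)  # 159,010

# 3) Simple CNN: 32 5x5 filters (valid) -> 24x24x32,
#    2x2 average pooling -> 12x12x32 = 4,608 features -> 10 outputs
conv = 5 * 5 * 1 * 32 + 32                      # 832
cnn = conv + (12 * 12 * 32 * 10 + 10)           # 46,922

# 4) Advanced CNN: three 3x3 conv layers (32, 64, 64 filters), two 2x2
#    max-pool layers in between, then dense 64 and dense 10.
#    Spatial sizes: 28 -> 26 -> 13 -> 11 -> 5 -> 3, so flatten = 3*3*64 = 576
c1 = 3 * 3 * 1 * 32 + 32                        # 320
c2 = 3 * 3 * 32 * 64 + 64                       # 18,496
c3 = 3 * 3 * 64 * 64 + 64                       # 36,928
d1 = 3 * 3 * 64 * 64 + 64                       # 576 -> 64 units: 36,928
d2 = 64 * 10 + 10                               # 650
cnn2 = c1 + c2 + c3 + d1 + d2                   # 93,322

print(linear, ffnn, cnn, cnn2)                  # -> 7850 159010 46922 93322
```

That the arithmetic lands exactly on the quoted totals is a useful sanity check when reimplementing the models or porting them to another framework.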

For a comprehensive list of results on MNIST classification, see "The MNIST database of handwritten digits" by Yann LeCun.
