
Added note to mini-batch training, thanks to @Myasuka.

davidstutz committed Jul 25, 2015
1 parent 141b44e commit 153b0c4143458e4351ecc04524003bd0031c650b

In the course of a seminar on “Selected Topics in Human Language Technology and Pattern Recognition”, I wrote a seminar paper on neural networks: "Introduction to Neural Networks". The seminar paper and the slides of the corresponding talk can be found in my blog article: [Seminar Paper “Introduction to Neural Networks”](). Background on neural networks and the two-layer perceptron can be found in my seminar paper.

**Update:** The code can be adapted to allow mini-batch training, as done in [this fork]().
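The idea behind the mini-batch adaptation can be sketched as follows: instead of updating the weights after every single training example, the gradient is averaged over a small batch of examples before each update. The sketch below (in Python rather than the repository's MATLAB, and with illustrative names such as `batch_size` and `lr` that do not come from the repository) demonstrates this on a simple linear model with squared error:

```python
import numpy as np

# Toy data: 60 samples with 4 features, targets generated from a known
# weight vector so convergence is easy to check. All names here are
# illustrative; this is not the repository's code.
rng = np.random.default_rng(1)
X = rng.random((60, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true

w = np.zeros(4)
lr, batch_size = 0.1, 10

for epoch in range(500):
    for start in range(0, len(X), batch_size):
        Xb = X[start:start + batch_size]
        yb = y[start:start + batch_size]
        # Gradient of the squared error, averaged over the mini-batch.
        grad = Xb.T @ (Xb @ w - yb) / batch_size
        w -= lr * grad

print(np.round(w, 2))  # recovers approximately [ 1.  -2.   0.5  3. ]
```

The same pattern carries over to the two-layer perceptron: the per-example gradients of both weight matrices are accumulated over the batch and applied once per batch, which typically smooths the updates compared with purely stochastic training.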

## MNIST Dataset

The [MNIST dataset]() provides a training set of 60,000 handwritten digits and a validation set of 10,000 handwritten digits. The images have size 28 x 28 pixels. Therefore, when using a two-layer perceptron, we need 28 x 28 = 784 input units and 10 output units (representing the 10 different digits).
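The resulting layer sizes can be illustrated with a minimal forward pass. This is a hedged sketch in Python (the repository itself is MATLAB): the hidden-layer size of 100 and the logistic/softmax activation choices here are assumptions for illustration, not taken from the repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# MNIST-shaped two-layer perceptron: 28 x 28 = 784 inputs, 10 outputs.
# The hidden size (100) is an illustrative assumption.
n_input, n_hidden, n_output = 28 * 28, 100, 10
W1 = rng.normal(0.0, 0.01, (n_hidden, n_input))   # input  -> hidden weights
W2 = rng.normal(0.0, 0.01, (n_output, n_hidden))  # hidden -> output weights

def forward(x):
    """Forward pass: logistic hidden layer, softmax output layer."""
    h = 1.0 / (1.0 + np.exp(-W1 @ x))  # hidden activations
    z = W2 @ h                         # output pre-activations
    e = np.exp(z - z.max())            # numerically stable softmax
    return e / e.sum()

x = rng.random(n_input)  # stand-in for one flattened 28 x 28 image
y_out = forward(x)
print(y_out.shape)  # (10,): one probability per digit class
```

The ten softmax outputs sum to one, so the largest entry can be read directly as the predicted digit.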
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

See <>.
