🤖 Bare-bones implementation of a neural network from scratch: backprop, momentum, dropout, and other bells & whistles
neuralnets

Implementations of and experiments with neural networks. All of this code will be ported to Python 3 shortly. The main file of interest is NeuralNetwork.py, which contains a from-scratch implementation of a neural network with a single hidden layer, using no external libraries other than NumPy.
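
For orientation, here is a minimal NumPy sketch of the same idea: one hidden layer trained with backprop on a softmax/cross-entropy loss. The class name `TinyNet`, the sigmoid activation, and the method signatures are illustrative assumptions, not the actual NeuralNetwork.py API:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyNet:
    """Illustrative single-hidden-layer net: sigmoid hidden units, softmax output."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)              # hidden activations
        scores = self.h @ self.W2 + self.b2
        exp = np.exp(scores - scores.max(axis=1, keepdims=True))
        self.probs = exp / exp.sum(axis=1, keepdims=True)    # softmax
        return self.probs

    def backward(self, X, Y, lr=0.1):
        """One gradient-descent step; Y is one-hot, loss is cross-entropy.
        Assumes forward(X) was just called on the same batch."""
        n = X.shape[0]
        d_scores = (self.probs - Y) / n                      # dL/dscores
        dW2 = self.h.T @ d_scores
        db2 = d_scores.sum(axis=0)
        d_h = d_scores @ self.W2.T * self.h * (1 - self.h)   # sigmoid derivative
        dW1 = X.T @ d_h
        db1 = d_h.sum(axis=0)
        for p, g in ((self.W1, dW1), (self.b1, db1),
                     (self.W2, dW2), (self.b2, db2)):
            p -= lr * g
```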

The network achieves about 96.5% accuracy on MNIST if the hyperparameters are tuned.

The implementation includes several of the bells and whistles that help neural networks train better:

  • An option for stochastic gradient descent with minibatch learning
  • Learning rate decay during training
  • The momentum method, for better convergence and fewer oscillations (see the sketch after this list)
  • Nesterov momentum
  • L2 regularization to encourage smaller weights
  • Dropout, which randomly discards hidden-layer activations to prevent overfitting (also sketched below)
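
To make the momentum and dropout items concrete, here is a minimal sketch of both, assuming the common "inverted dropout" formulation and a standard reformulation of Nesterov momentum; the function names and signatures here are illustrative, not the repo's API:

```python
import numpy as np

def momentum_step(w, grad, v, lr=0.1, mu=0.9, nesterov=False):
    """Classical or Nesterov momentum update for one parameter array.
    Uses the reformulation v' = mu*v - lr*g; w' = w + mu*v' - lr*g for Nesterov."""
    v_new = mu * v - lr * grad
    if nesterov:
        w_new = w + mu * v_new - lr * grad   # "look-ahead" step
    else:
        w_new = w + v_new
    return w_new, v_new

def dropout(h, p_drop=0.5, training=True, rng=None):
    """Inverted dropout: zero activations with probability p_drop and
    rescale the survivors by 1/(1 - p_drop), so no rescaling is needed at test time."""
    if not training:
        return h
    if rng is None:
        rng = np.random.default_rng()
    mask = (rng.random(h.shape) >= p_drop) / (1.0 - p_drop)
    return h * mask
```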

Another interesting file is utils/utils.py, which contains several machine learning utilities: one-hot encoding, k-fold cross-validation, dataset splitting, and hyperparameter tuning.
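
As an illustration of the kind of helpers that file provides, here is a minimal NumPy sketch of one-hot encoding and k-fold index generation; the names one_hot and k_fold_indices are hypothetical and may not match utils/utils.py:

```python
import numpy as np

def one_hot(labels, n_classes):
    """Encode integer labels as one-hot row vectors."""
    out = np.zeros((len(labels), n_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

def k_fold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold, assume_unique=True)
        yield train, fold
```

A typical use would be `for train_idx, val_idx in k_fold_indices(len(X)):` to train on `X[train_idx]` and score on `X[val_idx]` for each fold.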