Implementations of Neural Network models and training algorithms from scratch [WIP]

Models and Algorithms

  • Feedforward network with arbitrary layer sizes and activation functions

  • Backprop with arbitrary cost function

  • Feedback-alignment-based training for FNNs [1] (see the first sketch below)

  • Recurrent network with arbitrary size and activation functions

  • Backprop through time [WIP]

  • RNN training with feedback alignment [2] (see the second sketch below)
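
A minimal sketch of feedback-alignment training for a two-layer feedforward network, in the spirit of [1]. The layer sizes, sigmoid activation, and toy regression data are illustrative assumptions, not this repo's actual API; the key idea is that the backward pass routes the error through a fixed random matrix B instead of the transposed forward weights.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 16, 1
lr = 0.1

W1 = rng.normal(0, 1 / np.sqrt(n_in),  (n_hid, n_in))
W2 = rng.normal(0, 1 / np.sqrt(n_hid), (n_out, n_hid))
B  = rng.normal(0, 1 / np.sqrt(n_out), (n_hid, n_out))  # fixed random feedback, stands in for W2.T

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# toy regression data (illustrative only)
X = rng.normal(size=(64, n_in))
y = np.sin(X.sum(axis=1, keepdims=True))

for epoch in range(500):
    # forward pass
    h = sigmoid(X @ W1.T)            # hidden activations, (64, n_hid)
    y_hat = h @ W2.T                 # linear readout, (64, n_out)
    e = y_hat - y                    # error under squared loss
    # backward pass: feedback alignment replaces W2.T with the fixed random B
    delta_h = (e @ B.T) * h * (1 - h)
    W2 -= lr * e.T @ h / len(X)
    W1 -= lr * delta_h.T @ X / len(X)
```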

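A second sketch, loosely following the local online rule of [2]: the recurrent weights are updated from an eligibility trace combined with an error signal fed back through a fixed random matrix, so no gradients are propagated backward through time. Network sizes, the tanh nonlinearity, the time constant, and the sine-wave target are all assumptions made for the demo, and only the recurrent and readout weights are updated, for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_rec, n_out = 2, 30, 1
tau = 5.0                      # time constant of the rate units
lr = 0.05

W_in  = rng.normal(0, 1 / np.sqrt(n_in),  (n_rec, n_in))
W_rec = rng.normal(0, 1 / np.sqrt(n_rec), (n_rec, n_rec))
W_out = rng.normal(0, 1 / np.sqrt(n_rec), (n_out, n_rec))
B = rng.normal(0, 1 / np.sqrt(n_out), (n_rec, n_out))  # fixed random feedback

T = 200
xs = rng.normal(size=(T, n_in))
ys = np.sin(np.linspace(0, 4 * np.pi, T))[:, None]     # toy target

h = np.zeros(n_rec)
trace = np.zeros((n_rec, n_rec))   # eligibility trace for W_rec

for t in range(T):
    u = W_rec @ h + W_in @ xs[t]
    h_new = (1 - 1 / tau) * h + (1 / tau) * np.tanh(u)
    # eligibility trace: low-pass filter of the outer product of the
    # postsynaptic derivative and the presynaptic rate
    dphi = 1 - np.tanh(u) ** 2
    trace = (1 - 1 / tau) * trace + (1 / tau) * np.outer(dphi, h)
    h = h_new
    err = ys[t] - W_out @ h
    # local online updates: random feedback B replaces W_out.T
    W_rec += lr * (B @ err)[:, None] * trace
    W_out += lr * np.outer(err, h)
```
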
Dependencies

NumPy and Matplotlib. JAX is planned for some of the gradient operations.
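
As an illustration of how JAX could take over the hand-written backward pass, here is a minimal gradient computation with jax.grad. The two-layer network, its shapes, and the dummy data are hypothetical, not part of this repo.

```python
import jax
import jax.numpy as jnp

# hypothetical two-layer network; shapes chosen only for the demo
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = (jax.random.normal(k1, (4, 16)) * 0.5,   # W1
          jax.random.normal(k2, (16, 1)) * 0.25)  # W2

x = jax.random.normal(k3, (8, 4))
y = jnp.ones((8, 1))

def loss(params, x, y):
    W1, W2 = params
    h = jnp.tanh(x @ W1)
    return jnp.mean((h @ W2 - y) ** 2)

# jax.grad differentiates the scalar loss w.r.t. the first argument,
# returning gradients with the same structure as params
grads = jax.grad(loss)(params, x, y)
```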

References

[1] Lillicrap, T. P., et al. (2016). Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications, 7, 13276.

[2] Murray, J. M. (2018). Local online learning in recurrent networks with random feedback. bioRxiv, 458570.
