This repository contains an exploratory implementation of learning in neural networks, along with some simple helper classes (automatic differentiation, statistics, timing). It covers the RProp algorithm and an experiment that extends the gradient descent idea by using the second derivative to automatically select a good step size.
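The RProp update rule mentioned above can be sketched as follows. This is an illustrative implementation of the general RProp- variant for a single weight, not code from this repository; the object and parameter names (`Rprop`, `etaPlus`, `deltaMin`, ...) are assumptions.

```scala
// Sketch of the RProp (resilient backpropagation) update rule for one weight.
// RProp adapts a per-weight step size using only the *sign* of the gradient,
// ignoring its magnitude.
object Rprop {
  val etaPlus  = 1.2   // factor to grow the step size
  val etaMinus = 0.5   // factor to shrink the step size
  val deltaMin = 1e-6  // lower bound on the step size
  val deltaMax = 50.0  // upper bound on the step size

  // One RProp step: given the current and previous gradient and the current
  // step size, return (weight change, updated step size).
  def step(grad: Double, prevGrad: Double, delta: Double): (Double, Double) = {
    val sign = grad * prevGrad
    if (sign > 0) {
      // Gradient kept its sign: we are moving in a consistent direction, accelerate.
      val d = math.min(delta * etaPlus, deltaMax)
      (-math.signum(grad) * d, d)
    } else if (sign < 0) {
      // Sign flipped: we overshot a minimum, so shrink the step and pause
      // (the RProp- variant applies no weight change on a sign flip).
      val d = math.max(delta * etaMinus, deltaMin)
      (0.0, d)
    } else {
      // First step, or a gradient of exactly zero: keep the current step size.
      (-math.signum(grad) * delta, delta)
    }
  }
}
```

For example, after a sign flip (`grad = 1.0`, `prevGrad = -1.0`) with step size `0.1`, the weight is left unchanged and the step size halves to `0.05`.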
Languages: Scala, Prolog
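The second-derivative experiment described in the summary amounts, in one dimension, to a Newton-style step: the step size is chosen from the local curvature as w' = w - f'(w) / f''(w). A minimal sketch of that idea, with all names (`NewtonStep`, `fallbackRate`) being illustrative assumptions rather than code from this repository:

```scala
// Sketch: pick the gradient-descent step size from the second derivative.
// For a locally quadratic function this single step jumps straight to the
// minimum of the quadratic model.
object NewtonStep {
  // Returns the weight change for one step. When the curvature is not
  // positive, the quadratic model has no minimum, so we fall back to a
  // plain fixed-rate gradient step.
  def step(grad: Double, secondDeriv: Double, fallbackRate: Double = 0.01): Double =
    if (secondDeriv > 1e-12) -grad / secondDeriv
    else -fallbackRate * grad
}
```

For f(w) = (w - 3)^2 at w = 0 we have f'(0) = -6 and f''(0) = 2, so a single step of +3 lands exactly at the minimum w = 3.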
Repository contents:
- src
- .gitignore
- TODO.txt
- build.sbt
- nnDerivation.pl