# TinyNN.jl

This is a simple Julia code set up to optimize a fully connected (i.e., dense) neural network. The inner layers are all activated with the [ReLU](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) function and the output layer is activated with the Sigmoid function. This type of network is therefore well suited to simple classification problems, for example, the MNIST database (see example).
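
For reference, here is a minimal sketch of the forward pass such a network computes. The `Layer` struct and `forward` function are illustrative names for this example only, not the actual API of the module.

```julia
# Minimal sketch of the forward pass for a dense ReLU/Sigmoid network.
# The names (Layer, forward) are illustrative, not the actual module API.

relu(x) = max.(x, 0.0)
sigmoid(x) = 1.0 ./ (1.0 .+ exp.(-x))

struct Layer
    W::Matrix{Float64}   # weights, size (n_out, n_in)
    b::Vector{Float64}   # biases, size n_out
end

Layer(n_in::Int, n_out::Int) = Layer(0.01 .* randn(n_out, n_in), zeros(n_out))

# Hidden layers use ReLU; the final layer uses Sigmoid for classification.
function forward(layers::Vector{Layer}, x::Vector{Float64})
    a = x
    for (i, l) in enumerate(layers)
        z = l.W * a .+ l.b
        a = i == length(layers) ? sigmoid(z) : relu(z)
    end
    return a
end

# Example: a 784 -> 32 -> 10 network, as one might use for MNIST digits.
net = [Layer(784, 32), Layer(32, 10)]
ŷ = forward(net, rand(784))
```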

The code is very similar to that posted by Xabier G. Andrade, since that is where I got the idea to learn about neural networks by implementing one myself.

## Backwards Propagation

In order to optimize the weights/parameters of the network, gradient descent needs to be performed. For gradient descent we need the derivative of the cost/loss function with respect to the weights/parameters. This requires invoking the chain rule from calculus and providing routines to do so. At first my approach was going to be to use the automatic differentiation packages ForwardDiff.jl or ReverseDiff.jl, but I couldn't quite figure out how to do this. I'm sure it's a simple misunderstanding on my part, so I will get around to doing this eventually. For now, the backward propagation has been hardcoded in the module BackProp.jl.
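
As a rough illustration of what hand-coded backpropagation looks like, the sketch below applies the chain rule for one ReLU hidden layer and a Sigmoid output, assuming a binary cross-entropy loss. The function names and the choice of loss are assumptions for the example, not the actual contents of BackProp.jl.

```julia
# Hand-rolled backpropagation sketch for one hidden (ReLU) layer and a
# Sigmoid output with binary cross-entropy. This mirrors the idea behind
# BackProp.jl but is not the actual module code; all names are illustrative.

relu(x) = max.(x, 0.0)
drelu(x) = Float64.(x .> 0.0)          # derivative of ReLU w.r.t. its input
sigmoid(x) = 1.0 ./ (1.0 .+ exp.(-x))

function gradients(W1, b1, W2, b2, x, y)
    # Forward pass, keeping the intermediates needed by the chain rule.
    z1 = W1 * x .+ b1
    a1 = relu(z1)
    z2 = W2 * a1 .+ b2
    ŷ  = sigmoid(z2)

    # Backward pass (chain rule). For Sigmoid + cross-entropy the
    # output-layer error simplifies to (ŷ - y).
    δ2  = ŷ .- y
    dW2 = δ2 * a1'
    db2 = δ2
    δ1  = (W2' * δ2) .* drelu(z1)
    dW1 = δ1 * x'
    db1 = δ1
    return dW1, db1, dW2, db2
end

# One gradient-descent step on a toy problem (4 inputs, 3 hidden, 1 output).
W1, b1 = 0.1 .* randn(3, 4), zeros(3)
W2, b2 = 0.1 .* randn(1, 3), zeros(1)
x, y = rand(4), [1.0]
η = 0.1                                 # learning rate
dW1, db1, dW2, db2 = gradients(W1, b1, W2, b2, x, y)
W1 -= η .* dW1; b1 -= η .* db1
W2 -= η .* dW2; b2 -= η .* db2
```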
