
# Simplest artificial neural network

This is the simplest artificial neural network possible explained and demonstrated.

## Theory

### Mimicking neurons

Artificial neural networks are inspired by the brain: interconnected artificial neurons store patterns and communicate with each other. The simplest form of an artificial neuron has one or more inputs, each with an associated weight, and a single output.

At the simplest level, the output is the sum of its inputs times its weights.
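That weighted sum is a one-line loop in code. Here is a minimal sketch in Go (the function and variable names are illustrative, not taken from the included source):

```go
package main

import "fmt"

// output computes the neuron's output: the sum of each input
// multiplied by its corresponding weight.
func output(inputs, weights []float64) float64 {
	sum := 0.0
	for i := range inputs {
		sum += inputs[i] * weights[i]
	}
	return sum
}

func main() {
	inputs := []float64{1.0, 0.5}
	weights := []float64{0.5, 0.5}
	fmt.Println(output(inputs, weights)) // 1.0*0.5 + 0.5*0.5 = 0.75
}
```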

### A simple example

Say we have a network with two inputs $x_1$ and $x_2$ and two weights $w_1$ and $w_2$.

The idea is to adjust the weights in such a way that the given inputs produce the desired output.

Weights are normally initialized randomly, since we can't know their optimal values ahead of time. For simplicity, however, we will initialize them both to the same value.

Then the output will be

$y = x_1 w_1 + x_2 w_2$

### The error

If the output doesn't match the expected result, then we have an error.
For example, if the expected output is $y_{expected}$, then the difference between it and the actual output is $y_{expected} - y$.

The most common way to measure the error is to use the squared difference:

$E = (y_{expected} - y)^2$

If we had multiple associations of inputs and expected outputs, then the error becomes the sum over each association:

$E = \sum_{i} (y_{expected_i} - y_i)^2$
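As a sketch, the total error over a set of associations can be computed like this (again with illustrative names):

```go
package main

import "fmt"

// squaredError returns (expected - actual)^2 for one association.
func squaredError(expected, actual float64) float64 {
	diff := expected - actual
	return diff * diff
}

// totalError sums the squared error of every association in the dataset.
func totalError(expected, actual []float64) float64 {
	total := 0.0
	for i := range expected {
		total += squaredError(expected[i], actual[i])
	}
	return total
}

func main() {
	expected := []float64{0, 1}
	actual := []float64{0.25, 0.75}
	fmt.Println(totalError(expected, actual)) // 0.0625 + 0.0625 = 0.125
}
```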

To rectify the error, we need to adjust the weights so that the actual output matches the expected output. In our example, slightly lowering one of the weights would do the trick.

However, in order to adjust the weights of our neural networks for many different inputs and expected outputs, we need a learning algorithm.

The idea is to use the error in order to adjust each weight so that the error is minimized.

#### Gradient descent

The gradient is essentially a vector pointing in the direction of the steepest ascent of a function. It is denoted with $\nabla$ and is simply the partial derivative of each variable of a function, expressed as a vector.

Example for a two-variable function:

$\nabla f(x, y) = \begin{bmatrix} \frac{\partial f}{\partial x} \\[4pt] \frac{\partial f}{\partial y} \end{bmatrix}$

The descent part simply means using the gradient to find the direction of steepest ascent of our function, and then going in the opposite direction by a small amount, many times, until we find the function's minimum.

We use a constant called the learning rate, denoted with $\alpha$, to define how small of a step to take in that direction.

If the learning rate is too large, we risk overshooting the minimum; if it's too small, the network takes longer to learn and we risk getting stuck in a local minimum.
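To make the descent idea concrete, here is a one-variable sketch: minimizing $f(x) = (x - 3)^2$ by repeatedly stepping against its derivative $f'(x) = 2(x - 3)$. The function and the learning rate are chosen purely for illustration:

```go
package main

import "fmt"

func main() {
	x := 0.0            // starting point
	learningRate := 0.1 // size of each step

	for i := 0; i < 100; i++ {
		gradient := 2 * (x - 3)      // derivative of (x-3)^2 at x
		x -= learningRate * gradient // step opposite the gradient
	}

	fmt.Printf("%.4f\n", x) // converges toward 3, the minimum
}
```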

##### Gradient descent applied to our example network

For our two weights $w_1$ and $w_2$, we need to find the gradient of the error function with respect to those weights.

For both $w_1$ and $w_2$, we can find the gradient by using the chain rule:

$\frac{\partial E}{\partial w_i} = \frac{\partial E}{\partial y} \frac{\partial y}{\partial w_i} = 2(y - y_{expected}) \, x_i$

From now on we will denote the $\frac{\partial E}{\partial y}$ term as $\delta$.

Once we have the gradient, we can update our weights:

$w_i \leftarrow w_i - \alpha \frac{\partial E}{\partial w_i}$

where $\alpha$ is the learning rate.

And we repeat this process until the error is minimized within a chosen threshold.

## Code example

The included example teaches the following dataset to a neural network with two inputs and one output using gradient descent:

| $x_1$ | $x_2$ | $y_{expected}$ |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0 | 1 | 1 |

Once learned, the network should output ~0 when given two 0s and ~1 when given a 0 and a 1.

### How to run

#### Docker

```
docker build -t simplest-network .
docker run --rm simplest-network
```

## References

1. Artificial intelligence engines by James V Stone (2019)
2. Complete guide on deep learning: http://neuralnetworksanddeeplearning.com/chap2.html