A simple implementation of a feed-forward neural network with minimal dependencies.
The package defines a standard neural network class, NN, and several modules:
- act: Activation functions and their derivatives. Defines the Activation class.
- loss: Error functions and their derivatives. Defines the Loss class.
- opt: Optimization functions. Defines the Optimizer class.
- layers: Densely connected n-dimensional layers. Defines the Layer class.
Currently implemented optimizers are:
- Momentum
- Standard Gradient Descent
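For reference, the two update rules work as follows. This is a generic numpy sketch of the textbook equations, not the package's Optimizer interface; the helper names sgd_step and momentum_step, and the default momentum coefficient, are illustrative only:

```python
import numpy as np

def sgd_step(w, grad, rate):
    """Standard gradient descent: step the weights against the gradient."""
    return w - rate * grad

def momentum_step(w, v, grad, rate, mu=0.9):
    """Momentum: accumulate a velocity term, then apply it to the weights."""
    v = mu * v - rate * grad   # mu (the momentum coefficient) is an assumed default
    return w + v, v
```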
Activations available:
- tanh
- sigmoid
- ReLU
- LeakyReLU
- Linear
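These are the textbook definitions of the listed activations (a plain numpy sketch; the package wraps each function, together with its derivative, in an Activation class, and the LeakyReLU slope of 0.01 is just a common default):

```python
import numpy as np

def tanh(x):       return np.tanh(x)                      # derivative: 1 - tanh(x)**2
def sigmoid(x):    return 1.0 / (1.0 + np.exp(-x))        # derivative: sigmoid(x) * (1 - sigmoid(x))
def relu(x):       return np.maximum(0.0, x)              # derivative: 1 where x > 0, else 0
def leaky_relu(x): return np.where(x > 0.0, x, 0.01 * x)  # derivative: 1 where x > 0, else 0.01 (slope assumed)
def linear(x):     return x                               # derivative: 1
```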
Error measures:
- Squared error
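The squared error and its derivative with respect to the prediction look like this (a generic sketch; whether the package's SquaredLoss sums or averages, and whether it includes the 1/2 factor, may differ):

```python
import numpy as np

def squared_error(y_pred, y_true):
    return 0.5 * np.sum((y_pred - y_true) ** 2)   # the 1/2 factor is a common convention

def squared_error_grad(y_pred, y_true):
    return y_pred - y_true                        # gradient w.r.t. the prediction
```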
All modules are extensively commented and provide function and class signatures to guide modifications.
The notebook Learning logic gates.ipynb contains examples of usage.
Implementing a simple XOR gate (2 inputs, 2 hidden units, 1 output):

```
Input      Hidden      Output
  []--------[]----\
       X           >----[]
  []--------[]----/
```
```python
import numpy as np
from poormansnn import NN, Layer, loss, act, opt
# Define hyperparameters and architecture
batchsize = 40
epochs = 500
# The keyword used to pass the activation is assumed here; check
# poormansnn.layers.Layer for the exact parameter name.
layers = [Layer(shape=(2,), prevshape=(2,), activation=act.Tanh()),
          Layer(shape=(1,), prevshape=(2,), activation=act.Tanh())]
error = loss.SquaredLoss()
rate = 1.2
optimizer = opt.Optimizer(rate)
# Construct the network
n = NN(layers, error=error, optimizer=optimizer)
# Specify training data and labels. Each instance has the same dimension as
# the network's input layer. Each label has the same dimension as the network's
# output layer.
X = np.array([[0, 0],
              [0, 1],
              [1, 0],
              [1, 1]])
Y = np.array([[0],
              [1],
              [1],
              [0]])
# Train the network
errors, _ = n.train(X, Y, batchsize, epochs, train=(Y, X))
print(np.round(n.predict(X)))
```
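If training converged, the rounded predictions should reproduce the XOR labels in Y. The per-epoch errors returned by train can also be plotted to check convergence; the sketch below assumes errors is a one-dimensional sequence with one value per epoch:

```python
import matplotlib.pyplot as plt

plt.plot(errors)                 # assumed: one training-error value per epoch
plt.xlabel('Epoch')
plt.ylabel('Training error')
plt.title('XOR learning curve')
plt.show()
```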
