Neural network library and bug fixes
Pencilcaseman committed Oct 19, 2020
1 parent b0e8e2f commit 62409d5
Showing 41 changed files with 734 additions and 64 deletions.
19 changes: 5 additions & 14 deletions README.md
@@ -15,20 +15,11 @@ Run ```pip install libpymath``` to download and install ```libpymath``` for your
---

## Features and usage
### Matrix Math
Easily create, manipulate and perform calculations with dense matrices. The matrices themselves are stored and manipulated by optimised C code, resulting in faster operations and more efficient calculations. To further increase the speed of the calculations, when libpymath is imported into a project for the first time, it runs some tests on the CPU to find the optimal number of threads to use for matrix calculations.

The matrix library contains many features, such as:
1. Elementwise addition, subtraction, multiplication and division with another matrix
2. Addition, subtraction, multiplication and division by a scalar
3. Matrix transpose
4. Matrix product
5. Filling a matrix with ascending values, descending values, random values or a single value
6. Getting or setting values
7. Creating a matrix from data
8. Mapping with sigmoid, tanh, ReLU and leaky ReLU, as well as their derivatives
9. Getting the matrix as a Python list
10. Supports pickling with the standard library ```pickle``` module
11. Formatting and printing a matrix
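
The elementwise mapping in point 8 can be sketched in pure Python (a minimal illustration of the idea only, independent of libpymath's optimised C implementation — `mapped`, `sigmoid` and `d_sigmoid` here are stand-in names, not the library's API):

```python
import math

def sigmoid(x):
    # Logistic sigmoid: squashes any real value into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def d_sigmoid(y):
    # Derivative of the sigmoid, written in terms of the sigmoid output y
    return y * (1.0 - y)

def mapped(matrix, fn):
    # Apply fn elementwise to a matrix stored as a list of rows
    return [[fn(v) for v in row] for row in matrix]

m = [[0.0, 1.0], [-1.0, 2.0]]
activated = mapped(m, sigmoid)     # activation applied to every element
gradient = mapped(activated, d_sigmoid)  # derivative of each activated value
```

libpymath performs the same operation in C across multiple threads, which is where the speed advantage over a plain Python loop comes from.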
### Progress Bars
Wrap a progress bar around any Python iterator and have a progress bar generated automatically. The progress bar adjusts its width to the width of the console and shows the current percentage, the elapsed and estimated remaining time, and the number of iterations per second. The rate at which the bar updates also adjusts dynamically, ensuring that it has a minimal impact on the speed of the loop while still updating frequently enough to provide relevant information.
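
The wrapping idea can be sketched with a plain generator (a simplified stand-in for libpymath's Progress class, which additionally handles console width and adaptive update rates — the `progress` function below is illustrative, not the library's API):

```python
import sys
import time

def progress(iterable, total=None):
    # Wrap any iterator; print a simple percentage and rate as items are consumed.
    items = list(iterable) if total is None else iterable
    total = total if total is not None else len(items)
    start = time.time()
    for i, item in enumerate(items, 1):
        yield item  # the wrapped loop body runs unchanged
        elapsed = time.time() - start
        rate = i / elapsed if elapsed > 0 else float("inf")
        sys.stderr.write("\r{:5.1f}% | {:.0f} it/s".format(100.0 * i / total, rate))
    sys.stderr.write("\n")

for _ in progress(range(100)):
    pass  # do work here
```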

\* Due to Clang on Mac OS the wheels do not support OpenMP, meaning some matrix operations may be slower than on other operating systems.
### Neural Networks
Create, train and evaluate a neural network in only a few lines of code, customising the size of the network, its learning rate, its activation functions (which can be configured on a per-layer basis) and the metrics it logs. The network library is built on the efficient Matrix library, meaning it can train a simple network in under a second. You can also plot a graph of any metric being logged, making it easy to evaluate the progress of the network.
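
The training loop boils down to repeated gradient-descent updates on randomly chosen samples. A minimal pure-Python sketch of that idea with a single weight (an illustration of the principle, not libpymath's API — `train_single_weight` is a hypothetical helper):

```python
import random

def train_single_weight(samples, lr=0.025, epochs=500):
    # Learn y = w * x by nudging w along the error gradient each epoch,
    # picking a random sample per step, as Network.fit does.
    w = random.uniform(-1.0, 1.0)
    for _ in range(epochs):
        x, target = random.choice(samples)
        error = target - w * x  # prediction error
        w += lr * error * x     # gradient-descent update
    return w

# Learn y = 2x from a few (input, target) pairs
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train_single_weight(samples, lr=0.05, epochs=2000)
```

The full library generalises this to matrices of weights, per-layer activation functions and backpropagated errors.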
3 changes: 1 addition & 2 deletions build/lib.win-amd64-3.8/libpymath/__init__.py
@@ -19,6 +19,5 @@
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""

from . import matrix
from . import progress
from . import matrix, progress, network
import libpymath.error
Binary file not shown.
6 changes: 6 additions & 0 deletions build/lib.win-amd64-3.8/libpymath/matrix/matrix.py
@@ -346,6 +346,12 @@ def __matmul__(self, other):

        return self.dot(other)

    def sum(self):
        return self.matrix.matrixSum()

    def mean(self):
        return self.matrix.matrixMean()

    def __add__(self, other):
        """
        Add a matrix to another matrix elementwise, or add a scalar to every value
1 change: 1 addition & 0 deletions build/lib.win-amd64-3.8/libpymath/network/__init__.py
@@ -0,0 +1 @@
from .network import *
254 changes: 254 additions & 0 deletions build/lib.win-amd64-3.8/libpymath/network/network.py
@@ -0,0 +1,254 @@
import libpymath as lpm
import random


# Matrix map options (shift 5 left for corresponding derivative)
SIGMOID = 1 << 5
TANH = 1 << 6
RELU = 1 << 7
LEAKY_RELU = 1 << 8

# Matrix map derivative options (shift 5 right for corresponding activation)
D_SIGMOID = 1 << 10
D_TANH = 1 << 11
D_RELU = 1 << 12
D_LEAKY_RELU = 1 << 13


class Network:
    def __init__(self, **kwargs):
        if "layers" in kwargs:
            if isinstance(kwargs["layers"], (list, tuple)):
                self._nodeCounts = kwargs["layers"]

                # Check everything is an integer
                for n in self._nodeCounts:
                    if not isinstance(n, int):
                        raise TypeError("\"layers\" must be a list or tuple of integers. Found {}".format(type(n)))
            else:
                raise TypeError("\"layers\" must be defined as a list or tuple of integers")
        else:
            raise TypeError("Missing required argument \"layers\"")

        self._activations = kwargs["activations"] if "activations" in kwargs else [lpm.matrix.RELU for _ in range(len(self._nodeCounts) - 1)]

        # Check activations
        for a in self._activations:
            if a not in [SIGMOID, TANH, RELU, LEAKY_RELU, lpm.matrix.SIGMOID, lpm.matrix.TANH, lpm.matrix.RELU, lpm.matrix.LEAKY_RELU]:
                raise NotImplementedError("The activation function {} is not implemented. If you would like this to be added please raise an issue on GitHub: https://github.com/Pencilcaseman/LibPyMath/issues".format(a))

        self._layers = []
        self._biases = []

        if "lr" in kwargs:
            if isinstance(kwargs["lr"], (int, float)):
                self._learningRate = float(kwargs["lr"])
            else:
                raise TypeError("\"lr\" must be an int or a float")
        else:
            self._learningRate = 0.025

        for i in range(len(self._nodeCounts) - 1):
            self._layers.append(lpm.matrix.Matrix(self._nodeCounts[i + 1], self._nodeCounts[i]))
            self._biases.append(lpm.matrix.Matrix(self._nodeCounts[i + 1]))

            self._layers[-1].fillRandom()
            self._biases[-1].fillRandom()

        self._metrics = {
            "loss": [[], False]
        }

        self._metricMod = {
            "loss": 1
        }

        self.backpropagateIndex = 0

    @property
    def layers(self):
        return self._layers.copy()

    @property
    def biases(self):
        return self._biases.copy()

    @property
    def learningRate(self):
        return self._learningRate

    @property
    def shape(self):
        return self._nodeCounts.copy()

    def parseData(self, dataIn, dataOut=None):
        res = []

        if isinstance(dataIn[0], (list, tuple)):
            # Check for a single list containing input + output
            if dataOut is None:
                for element in dataIn:
                    if len(element) == self._nodeCounts[0] + self._nodeCounts[-1]:
                        # Contains input and output data in a single list
                        res.append((self.__parseData(element[:self._nodeCounts[0]]), self.__parseData(element[self._nodeCounts[0]:], -1)))
                    else:
                        raise ValueError("Invalid input data. Expected input data of length {} but got length {}".format(self._nodeCounts[0] + self._nodeCounts[-1], len(element)))
            else:
                if len(dataIn) != len(dataOut):
                    raise ValueError("Input and output data lengths must be equal")

                for a, b in zip(dataIn, dataOut):
                    res.append((self.__parseData(a), self.__parseData(b, -1)))
        else:
            raise NotImplementedError("Cannot yet parse 1 dimensional training data")

        return res

    def __parseData(self, data, layer=0):
        # Check that data is valid and return a valid matrix if possible, else raise an error
        if isinstance(data, (int, float)):
            if self._nodeCounts[layer] == 1:
                # A single value, so create [[x]]
                return lpm.matrix.Matrix(data=[[data]]).T
            else:
                raise ValueError("Only one value was passed, though this layer requires {}".format(self._nodeCounts[layer]))
        elif isinstance(data, (lpm.matrix.Matrix, list, tuple)):
            # Check whether the data fits the layer as-is or transposed
            if isinstance(data, lpm.matrix.Matrix):
                tmp = data
            else:
                tmp = lpm.matrix.Matrix(data=data)

            if tmp.rows == self._nodeCounts[layer] and tmp.cols == 1:
                return tmp
            elif tmp.cols == self._nodeCounts[layer] and tmp.rows == 1:
                return tmp.T
            else:
                if tmp.rows == 1:
                    val = tmp.cols
                elif tmp.cols == 1:
                    val = tmp.rows
                else:
                    val = None

                raise ValueError("Input data is invalid for a network of this size. This layer requires {} values, received {}".format(self._nodeCounts[layer], val if val is not None else "{}x{}".format(tmp.rows, tmp.cols)))
        else:
            raise TypeError("Invalid type for neural network. Requires list, tuple{}".format(", int or float" if self._nodeCounts[layer] == 1 else ""))

    def feedForward(self, data, **kwargs):
        # For improved speed when one is sure that the data is correct
        if "noCheck" in kwargs:
            current = data.copy()
        else:
            current = self.__parseData(data)

        for i in range(len(self._nodeCounts) - 1):
            current = self._layers[i] @ current
            current += self._biases[i]
            current.map(self._activations[i])

        return current

    def backpropagate(self, inputData, targetData, **kwargs):
        # For improved speed when one is sure that the data is correct
        if "noCheck" in kwargs:
            inputs = inputData.copy()
            targets = targetData.copy()
        else:
            inputs = self.__parseData(inputData)
            targets = self.__parseData(targetData, layer=-1)

        layerData = []
        errors = [None for _ in range(len(self._nodeCounts) - 1)]

        current = inputs.copy()
        for i in range(len(self._nodeCounts) - 1):
            current = self._layers[i] @ current
            current += self._biases[i]
            current.map(self._activations[i])

            layerData.append(current.copy())

        errors[-1] = targets - layerData[-1]

        if self._metrics["loss"][1] and self.backpropagateIndex % self._metricMod["loss"] == 0:
            self._metrics["loss"][0].append(errors[-1].mean() ** 2)

        self.backpropagateIndex += 1

        for i in range(len(self._nodeCounts) - 2, -1, -1):
            gradient = layerData[i].mapped(self._activations[i] << 5)
            gradient *= errors[i]
            gradient *= self._learningRate

            if i > 0:
                transposed = layerData[i - 1].T
            else:
                transposed = inputs.T

            weight_deltas = gradient @ transposed

            self._layers[i] += weight_deltas
            self._biases[i] += gradient

            if i > 0:
                layerT = self._layers[i].T
                errors[i - 1] = layerT @ errors[i]

    def log(self, metric, mod=1):
        if metric == "loss":
            self._metrics["loss"][1] = True
            self._metricMod["loss"] = mod

    def metrics(self, metric):
        if metric not in self._metrics:
            raise KeyError("Metric {} does not exist".format(metric))
        if not self._metrics[metric][1]:
            raise KeyError("Metric {} is not being logged".format(metric))

        return self._metrics[metric][0]

    def plotMetric(self, metric):
        if metric not in self._metrics:
            raise KeyError("Metric {} does not exist".format(metric))
        if not self._metrics[metric][1]:
            raise KeyError("Metric {} is not being logged".format(metric))

        x = [i * self._metricMod[metric] for i in range(len(self.metrics(metric)))]
        y = self._metrics[metric][0]

        try:
            import matplotlib.pyplot as plt

            fig, axs = plt.figure(), plt.axes()
            fig.canvas.set_window_title("Libpymath Network {}".format(metric.title()))

            axs.plot(x, y)
            axs.set_title("{} vs Epoch".format(metric.title()))
            axs.set_xlabel("Epoch")
            axs.set_ylabel("{}".format(metric.title()))

            plt.show()
        except ImportError:
            raise ModuleNotFoundError("The module matplotlib.pyplot was not found, please install it via pip")

    def metricData(self, metric):
        if metric not in self._metrics:
            raise KeyError("Metric {} does not exist".format(metric))
        if not self._metrics[metric][1]:
            raise KeyError("Metric {} is not being logged".format(metric))

        return [i * self._metricMod[metric] for i in range(len(self.metrics(metric)))], self._metrics[metric][0]

    def fit(self, inputData, targetData=None, epochs=500, **kwargs):
        data = self.parseData(inputData, targetData)
        samples = len(data)

        if "progress" in kwargs and kwargs["progress"] not in [0, False]:
            iterator = lpm.progress.Progress(range(epochs))
        else:
            iterator = range(epochs)

        for _ in iterator:
            pos = random.randint(0, samples - 1)
            self.backpropagate(data[pos][0], data[pos][1], noCheck=True)
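
A quick pure-Python check of the flag layout defined at the top of network.py: shifting any activation constant left by five bits yields its derivative constant, which is exactly what `layerData[i].mapped(self._activations[i] << 5)` relies on during backpropagation.

```python
# Activation flags and their derivatives, as defined in network.py
SIGMOID, TANH, RELU, LEAKY_RELU = 1 << 5, 1 << 6, 1 << 7, 1 << 8
D_SIGMOID, D_TANH, D_RELU, D_LEAKY_RELU = 1 << 10, 1 << 11, 1 << 12, 1 << 13

for act, deriv in [(SIGMOID, D_SIGMOID), (TANH, D_TANH),
                   (RELU, D_RELU), (LEAKY_RELU, D_LEAKY_RELU)]:
    # Each activation flag, shifted left by 5, selects its derivative flag
    assert act << 5 == deriv
```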
Binary file modified build/temp.win-amd64-3.8/Release/libpymath/src/matrix/matrix.obj
Binary file not shown.
Binary file modified dist/libpymath-0.5.7.tar.gz
Binary file not shown.
Binary file added dist/libpymath-0.6.0-cp38-cp38-win_amd64.whl
Binary file not shown.
Binary file added dist/libpymath-0.6.0.tar.gz
Binary file not shown.
Binary file modified docs/build/doctrees/environment.pickle
Binary file not shown.
Binary file modified docs/build/doctrees/libpymath.matrix.doctree
Binary file not shown.
2 changes: 1 addition & 1 deletion docs/build/html/.buildinfo
@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 267665a288d6aeb494662dd9a6566510
config: 183218795b956c8dcc690089c2bd69c2
tags: 645f666f9bcd5a90fca523b33c5a78b7
2 changes: 1 addition & 1 deletion docs/build/html/_static/documentation_options.js
@@ -1,6 +1,6 @@
var DOCUMENTATION_OPTIONS = {
URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'),
VERSION: '0.5.0',
VERSION: '0.6.0',
LANGUAGE: 'None',
COLLAPSE_INDEX: false,
BUILDER: 'html',
8 changes: 7 additions & 1 deletion docs/build/html/genindex.html
@@ -7,7 +7,7 @@

<meta name="viewport" content="width=device-width, initial-scale=1.0">

<title>Index &mdash; libpymath 0.5.0 documentation</title>
<title>Index &mdash; libpymath 0.6.0 documentation</title>



@@ -288,6 +288,8 @@ <h2 id="M">M</h2>
<li><a href="libpymath.matrix.html#libpymath.matrix.matrix.Matrix.mapped">mapped() (libpymath.matrix.matrix.Matrix method)</a>
</li>
<li><a href="libpymath.matrix.html#libpymath.matrix.matrix.Matrix">Matrix (class in libpymath.matrix.matrix)</a>
</li>
<li><a href="libpymath.matrix.html#libpymath.matrix.matrix.Matrix.mean">mean() (libpymath.matrix.matrix.Matrix method)</a>
</li>
<li>
module
@@ -341,6 +343,10 @@ <h2 id="S">S</h2>
<table style="width: 100%" class="indextable genindextable"><tr>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="libpymath.matrix.html#libpymath.matrix.matrix.Matrix.shape">shape() (libpymath.matrix.matrix.Matrix property)</a>
</li>
</ul></td>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="libpymath.matrix.html#libpymath.matrix.matrix.Matrix.sum">sum() (libpymath.matrix.matrix.Matrix method)</a>
</li>
</ul></td>
</tr></table>
2 changes: 1 addition & 1 deletion docs/build/html/index.html
@@ -7,7 +7,7 @@

<meta name="viewport" content="width=device-width, initial-scale=1.0">

<title>Libpymath Documentation &mdash; libpymath 0.5.0 documentation</title>
<title>Libpymath Documentation &mdash; libpymath 0.6.0 documentation</title>



2 changes: 1 addition & 1 deletion docs/build/html/libpymath.html
@@ -7,7 +7,7 @@

<meta name="viewport" content="width=device-width, initial-scale=1.0">

<title>libpymath package &mdash; libpymath 0.5.0 documentation</title>
<title>libpymath package &mdash; libpymath 0.6.0 documentation</title>



