Autograd engine and tiny neural network library implemented from scratch, to understand how backpropagation and gradient-based learning work under the hood.
The core of the project is a scalar `Value` type that:
- stores a data value and its gradient,
- records the operation that produced it,
- builds a computation graph dynamically,
- supports reverse-mode automatic differentiation via `.backward()`.
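For intuition, here is a heavily simplified sketch of what such a node might hold. The real implementation lives in `model/__init__.py` and differs in its details, so treat the names below as illustrative only.

```python
class ValueSketch:
    """Illustrative only: one scalar node in a dynamically built computation graph."""

    def __init__(self, data, _children=(), _op="", label=""):
        self.data = data                # forward value
        self.grad = 0.0                 # d(output)/d(this node), filled in by backward()
        self._prev = set(_children)     # nodes that produced this one (graph edges)
        self._op = _op                  # operation that produced it, e.g. "*" or "+"
        self._backward = lambda: None   # local chain-rule step for this node
        self.label = label

    def __mul__(self, other):
        out = ValueSketch(self.data * other.data, (self, other), "*")

        def _backward():
            # chain rule: d(out)/d(self) = other.data, d(out)/d(other) = self.data
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad

        out._backward = _backward
        return out
```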
On top of this, a small MLP (multi-layer perceptron) is implemented using only these Value objects.
- Minimal scalar autograd engine (`Value`):
  - Supports `+`, `-`, `*`, `/`, power, `tanh`, `exp`
  - Builds a computation graph with parent links
  - Reverse-mode autodiff with topological sorting (see the sketch after this list)
- Tiny neural network library:
  - `Neuron`, `Layer`, `MultiLayerPerceptron`
  - Tanh activations
  - Parameter collection via `.parameters()`
- Simple training loop helpers:
  - Mean squared error loss: `loss_mean_square_error`
  - Accuracy metric for binary classification
- Jupyter notebooks to explore:
  - Computation graphs
  - Decision boundaries on a 2D "half-moon" dataset
  - Comparison against PyTorch
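The topological-sort step referenced above can be sketched as follows. This is one common way to drive reverse-mode backpropagation from the output node, not necessarily the exact code in `model/__init__.py`; it assumes each node exposes `.grad`, `._prev`, and a `._backward()` closure holding its local chain-rule step.

```python
def backward_sketch(output):
    """Illustrative reverse-mode pass from a scalar output node."""
    # 1. Topologically sort the graph so every node appears after its inputs.
    topo, visited = [], set()

    def build(node):
        if node not in visited:
            visited.add(node)
            for parent in node._prev:
                build(parent)
            topo.append(node)

    build(output)

    # 2. Seed d(output)/d(output) = 1 and apply each node's local rule in reverse order.
    output.grad = 1.0
    for node in reversed(topo):
        node._backward()
```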
Install from `requirements.txt`:

```bash
pip install -r requirements.txt
```

All of the main logic lives in `model/__init__.py`.
Scalar value with automatic differentiation support.

```python
from model import Value

a = Value(2.0, label="a")
b = Value(-3.0, label="b")
c = a * b + 5.0
c.backward()

print(c.data)  # forward value
print(a.grad)  # dc/da
print(b.grad)  # dc/db
```

Supported operations (with gradients implemented):
- `+`, `-`, unary `-`, `*`, `/`
- `**` (power with scalar exponent)
- `.tanh()`
- `.exp()`
- `.backward()` to run reverse-mode autodiff on the scalar output
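As a quick sanity check of the implemented gradients (assuming the same semantics as the snippet above, with gradients starting at zero on fresh nodes):

```python
from model import Value

x = Value(0.5)
y = x ** 2           # dy/dx = 2x
y.backward()
print(y.data)        # 0.25
print(x.grad)        # 1.0 (= 2 * 0.5)

t = Value(0.0)
u = t.tanh()         # du/dt = 1 - tanh(t)^2
u.backward()
print(t.grad)        # 1.0 (tanh'(0) = 1)
```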
Simple MLP built entirely from `Value` objects.

```python
from model import Value, Neuron, Layer, MultiLayerPerceptron

# 2-dim input, hidden layers [16, 16], 1-dim output
mlp = MultiLayerPerceptron(nin=2, nouts=[16, 16, 1])

# Forward pass on a single input
x1 = [Value(0.5), Value(-1.2)]
out = mlp(x1)  # Value
print(out.data)
```

You can collect all trainable parameters via:

```python
params = mlp.parameters()
```
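Putting the pieces together, a minimal training loop could look roughly like the sketch below. The exact signature of `loss_mean_square_error` is not shown in this README, so the mean squared error is written out by hand; the toy dataset, learning rate, and gradient-reset step are assumptions, and the loop assumes `Value` operations accept plain Python numbers (as in the `a * b + 5.0` example above). See `main.ipynb` for the library's actual training loop.

```python
from model import Value, MultiLayerPerceptron

# Hypothetical toy dataset: 2-D inputs with binary targets in {-1.0, 1.0}
xs = [[2.0, 3.0], [3.0, -1.0], [0.5, 1.0], [1.0, 1.0]]
ys = [1.0, -1.0, -1.0, 1.0]

mlp = MultiLayerPerceptron(nin=2, nouts=[16, 16, 1])
learning_rate = 0.05

for step in range(100):
    # Forward pass: predictions are Value objects
    preds = [mlp([Value(v) for v in x]) for x in xs]

    # Mean squared error over the dataset
    loss = (preds[0] - ys[0]) ** 2
    for pred, y in zip(preds[1:], ys[1:]):
        loss = loss + (pred - y) ** 2
    loss = loss / len(ys)

    # Backward pass: reset old gradients, then backpropagate from the loss
    for p in mlp.parameters():
        p.grad = 0.0
    loss.backward()

    # Plain gradient descent step on every parameter
    for p in mlp.parameters():
        p.data -= learning_rate * p.grad
```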
```text
├── main.ipynb
├── model
│   └── __init__.py
├── requirements.txt
├── use.ipynb
├── use-pytorch.ipynb
└── varying-neurons-layers.ipynb
```
Some useful files:
- `main.ipynb`: Main notebook walking through the autograd engine, network, and training.
- `use.ipynb`: Uses the custom MLP to classify a 2D half-moon distribution.
- `use-pytorch.ipynb`: Reimplements the same task using PyTorch layers for comparison.
- `varying-neurons-layers.ipynb`: Experiments with different depths/widths of the MLP.
Contributions are welcome! Feel free to open issues or submit pull requests.
This project is licensed under the MIT License. See the LICENSE file for details.
If you have any questions or feedback, feel free to connect with me on LinkedIn at linkedin.com/in/daniel-di-giovanni/ or send me an email at dannyjdigio@gmail.com.