A tiny scalar-valued automatic differentiation engine with a small neural network library built on top of it.
Micrograd demonstrates the core concepts of automatic differentiation and neural networks by building everything from scratch using only Python's standard library (plus PyTorch for testing). The implementation focuses on educational clarity rather than performance.
- Automatic Differentiation: Scalar-valued autograd engine with dynamic computation graph
- Neural Network Building Blocks: Neurons, layers, and multi-layer perceptrons (MLPs)
- Common Operations: Addition, multiplication, power, ReLU activation, exp, log, and more
- Backpropagation: Automatic gradient computation through the computational graph
- PyTorch Compatibility Testing: Verify correctness against PyTorch's autograd
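The feature list above rests on one idea: every operation records its inputs and a local-derivative rule, and `backward()` replays the graph in reverse, applying the chain rule. Here is a minimal, hypothetical sketch of that mechanism (much simpler than the project's actual `Value` class; it supports only `+` and `*`):

```python
class Value:
    """Minimal scalar autograd node (simplified sketch, not the project's full class)."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # local chain-rule step, set by each op
        self._prev = set(_children)

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(out)/d(self) = 1, d(out)/d(other) = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # d(out)/d(self) = other.data, d(out)/d(other) = self.data
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a, b = Value(2.0), Value(3.0)
c = a * b + a * a   # dc/da = b + 2a = 7, dc/db = a = 2
c.backward()
```

The topological sort ensures each node's gradient is fully accumulated before it propagates to its children, which is why `a.grad` correctly sums contributions from both the `a * b` and `a * a` branches.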
```
micrograd/
├── core.py      # Value class with automatic differentiation
├── total_nn.py  # Neural network components (Neuron, Layer, MLP)
├── test.py      # Test suite comparing against PyTorch
└── README.md    # This file
```
The heart of the project is a scalar value wrapper that tracks gradients:

```python
from core import Value

# Create values and build the computation graph
a = Value(2.0)
b = Value(3.0)
c = a * b + a.exp()
c.backward()  # Compute gradients

print(f"c = {c.data}")       # Forward pass result
print(f"dc/da = {a.grad}")   # Gradient of c with respect to a
```

Supported Operations:
- Arithmetic: `+`, `-`, `*`, `/`, `**`
- Activation: `relu()`
- Mathematical: `exp()`, `log()`
- Automatic gradient computation via `backward()`
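The gradients in the example above can be derived by hand with the product rule: for `c = a*b + exp(a)`, we get `dc/da = b + e^a` and `dc/db = a`. A quick check in plain Python (no autograd involved):

```python
import math

# Hand-derived gradients for c = a*b + exp(a) at a=2, b=3
a, b = 2.0, 3.0
c = a * b + math.exp(a)    # forward value: 6 + e^2 ≈ 13.389
dc_da = b + math.exp(a)    # d/da (a*b) = b, and d/da exp(a) = exp(a)
dc_db = a                  # d/db (a*b) = a; exp(a) does not depend on b
print(dc_da, dc_db)
```

These are exactly the values `a.grad` and `b.grad` should hold after `c.backward()`.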
Simple neural network components built on top of the Value class:

```python
from total_nn import MLP

# Create a 3-layer MLP: 3 inputs → 4 neurons → 4 neurons → 1 output
model = MLP(3, [4, 4, 1])

# Forward pass
x = [2.0, 3.0, -1.0]
pred = model(x)
print(f"Prediction: {pred.data}")

# Get all parameters for training: each neuron has fan_in weights + 1 bias,
# so MLP(3, [4, 4, 1]) has (3+1)*4 + (4+1)*4 + (4+1)*1 = 41 parameters
params = model.parameters()
print(f"Total parameters: {len(params)}")
```

Comprehensive tests verify the implementation against PyTorch:

```bash
python test.py
```

The tests cover:
- Basic operations and their gradients
- Complex computational graphs
- Numerical gradient verification against PyTorch
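Numerical gradient verification means comparing an analytic gradient against a finite-difference estimate. A generic checker is a few lines of plain Python (the helper name here is hypothetical, not part of the test suite):

```python
def numeric_grad(f, x, h=1e-6):
    """Central-difference estimate of df/dx at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Example: f(x) = x**3 has analytic derivative 3*x**2, which is 12 at x = 2
approx = numeric_grad(lambda x: x ** 3, 2.0)
print(approx)  # ≈ 12.0
```

The central difference has O(h²) error, so it agrees with the analytic derivative to several decimal places while remaining robust to floating-point noise.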
Here's a simple example showing automatic differentiation in action:

```python
from core import Value

# Build a computation graph
x = Value(1.0)
y = Value(2.0)
z = x * y + y.relu()  # z = 1*2 + relu(2) = 4

# Compute gradients
z.backward()

print(f"z = {z.data}")      # Output: 4.0
print(f"dz/dx = {x.grad}")  # dz/dx = y = 2.0
print(f"dz/dy = {y.grad}")  # dz/dy = x + relu'(y) = 1 + 1 = 2.0
```

This project demonstrates:
- Automatic Differentiation: How computational graphs enable automatic gradient computation
- Backpropagation: The chain rule applied systematically to neural networks
- Neural Network Fundamentals: Building blocks from neurons to multi-layer networks
- Gradient-Based Optimization: Foundation for training neural networks
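Gradient-based optimization is the reason gradients matter: once `backward()` fills in `.grad` for every parameter, training is just repeated small steps downhill. The core loop, stripped to plain Python on a toy loss (no autograd, derivative written by hand):

```python
# Gradient descent on loss(w) = (w - 3)**2, whose derivative is 2*(w - 3).
# The same nudge-parameters-by-negative-gradient loop trains the MLP,
# except there .grad comes from backward() instead of a hand-written formula.
w = 0.0
lr = 0.1  # learning rate
for _ in range(100):
    grad = 2 * (w - 3)
    w -= lr * grad
print(w)  # converges toward the minimum at w = 3
```

Each step shrinks the distance to the optimum by a constant factor (here 0.8), so the loop converges geometrically.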
- Python 3.6+
- PyTorch (for testing only)
- Clone the repository:

```bash
git clone https://github.com/gamal1osama/micrograd.git
cd micrograd
```

- Run the tests:

```bash
python test.py
```

- Experiment with the code:

```bash
python3
>>> from core import Value
>>> from total_nn import MLP
>>> # Start experimenting!
```

This project is based on Andrej Karpathy's micrograd and his accompanying YouTube tutorial.