This project is a from-scratch reimplementation of a tiny deep learning framework inspired by PyTorch and Karpathy's micrograd.
It includes:
- A scalar-based automatic differentiation engine
- A neural network library (Neuron, Layer, MLP)
- An SGD optimizer
- An MSE loss
- A full training loop that learns XOR
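To give a flavor of the scalar autograd engine listed above, here is a minimal sketch of a micrograd-style `Value` class supporting `+`, `*`, and reverse-mode backprop. The exact API in `engine.py` may differ; this is an illustration, not the project's actual code.

```python
class Value:
    """A scalar that records the operations producing it, so gradients
    can be propagated backward through the computation graph."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(out)/d(self) = 1, d(out)/d(other) = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # product rule: each input's gradient scales by the other input
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # topological sort, then apply the chain rule in reverse order
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a = Value(2.0)
b = Value(3.0)
c = a * b + a       # c = a*b + a
c.backward()
print(a.grad)       # dc/da = b + 1 = 4.0
print(b.grad)       # dc/db = a = 2.0
```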
Clone the repo:

```bash
git clone https://github.com/sakshamp00/micrograd.git
cd micrograd
```
Install dependencies (optional, mainly for plotting):

```bash
pip install matplotlib
```
Just run the training script:

```bash
python train_xor.py
```
You'll see:
- Loss decreasing over training steps
- Final predictions on the XOR dataset
- Optional loss plot (if matplotlib is installed)
```text
├── micrograd/           # Python module
│   ├── __init__.py      # Marks as a Python package
│   ├── engine.py        # Core autograd Value class
│   ├── nn.py            # Neuron, Layer, MLP classes
│   ├── optim.py         # SGD optimizer (and later Adam)
│   └── loss.py          # Loss functions (MSE)
├── train_xor.py         # Script to train XOR dataset
└── README.md            # This file
```
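As a rough sketch of how the `Neuron`, `Layer`, and `MLP` classes in `nn.py` compose, here is a forward-pass-only version using plain floats (the real classes operate on autograd `Value`s, and the constructor signatures here are assumptions):

```python
import math
import random

class Neuron:
    """One unit: weighted sum of inputs plus bias, squashed by tanh."""
    def __init__(self, nin):
        self.w = [random.uniform(-1, 1) for _ in range(nin)]
        self.b = 0.0

    def __call__(self, x):
        act = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return math.tanh(act)

class Layer:
    """A list of neurons all reading the same input vector."""
    def __init__(self, nin, nout):
        self.neurons = [Neuron(nin) for _ in range(nout)]

    def __call__(self, x):
        outs = [n(x) for n in self.neurons]
        return outs[0] if len(outs) == 1 else outs

class MLP:
    """Layers chained together: the output of one feeds the next."""
    def __init__(self, nin, nouts):
        sizes = [nin] + nouts
        self.layers = [Layer(sizes[i], sizes[i + 1]) for i in range(len(nouts))]

    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

model = MLP(2, [4, 1])   # 2 inputs -> hidden layer of 4 -> 1 output
y = model([1.0, 0.0])
print(y)                 # a single float in (-1, 1)
```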
XOR dataset:
0 ⊕ 0 → 0
0 ⊕ 1 → 1
1 ⊕ 0 → 1
1 ⊕ 1 → 0
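The four-row dataset above, written out as Python lists (the variable names are illustrative, not necessarily those used in `train_xor.py`):

```python
# Inputs and XOR targets for the four possible bit pairs
xs = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
ys = [0.0, 1.0, 1.0, 0.0]  # target = x0 XOR x1
```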
The training script:
- Builds the MLP
- Loops forward → backprop → update
- Prints loss and final accuracy
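The forward → backprop → update loop has the same shape regardless of model. Here it is sketched on a toy problem (fitting y = 2x with a single weight), with the MSE gradient computed by hand; in `train_xor.py` the autograd engine would supply the gradients instead:

```python
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
w, lr = 0.0, 0.05

for step in range(200):
    # forward: predictions and MSE loss
    preds = [w * x for x in xs]
    loss = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(xs)
    # backward: d(loss)/dw for MSE with a linear model
    grad = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)
    # update: plain SGD step
    w -= lr * grad

print(round(w, 3))  # converges toward 2.0
```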
Example output after training:

```text
step 0, loss = 2.31
step 100, loss = 0.21
...
step 900, loss = 0.02
```

Trained model predictions:

```text
Input: [0.0, 0.0], Predicted: 0.0111, True: 0.0
Input: [0.0, 1.0], Predicted: 0.9785, True: 1.0
Input: [1.0, 0.0], Predicted: 0.9831, True: 1.0
Input: [1.0, 1.0], Predicted: 0.0142, True: 0.0
```
Contributions are welcome, whether it's improving the documentation or adding features such as:
- Activation functions (ReLU, Sigmoid)
- Optimizers (Adam)
- Batch support
- More demos (MNIST, regression)
Feel free to open issues or pull requests!