Micrograd From Scratch (Neural Network + Autograd Engine)

This project is a from-scratch reimplementation of a tiny deep learning framework inspired by PyTorch and Karpathy’s micrograd.

It includes:

  • A scalar-based automatic differentiation engine
  • A neural network library (Neuron, Layer, MLP)
  • An SGD optimizer
  • An MSE loss
  • A full training loop that learns XOR
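The core idea of the autograd engine is a scalar `Value` that records how it was computed, so gradients can flow backward through the expression graph via the chain rule. As a minimal, self-contained sketch of that idea (following Karpathy-style micrograd conventions; the repo's `engine.py` may differ in details):

```python
import math

class Value:
    """Minimal scalar autograd value: stores data, grad, and a backward rule."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad           # d(a+b)/da = 1
            other.grad += out.grad          # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def tanh(self):
        t = math.tanh(self.data)
        out = Value(t, (self,))
        def _backward():
            self.grad += (1 - t * t) * out.grad  # d(tanh x)/dx = 1 - tanh^2 x
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# f(a, b) = a*b + b at a=2, b=3: f = 9, df/da = b = 3, df/db = a + 1 = 3
a, b = Value(2.0), Value(3.0)
f = a * b + b
f.backward()
print(f.data, a.grad, b.grad)   # 9.0 3.0 3.0
```

Every operator builds an output node that remembers its inputs and a local gradient rule; `backward()` then accumulates gradients from the output back to the leaves.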

📦 Installation

  1. Clone the repo:

    git clone https://github.com/sakshamp00/micrograd.git
    cd micrograd
  2. Install dependencies (optional, mainly for plotting):

    pip install matplotlib

πŸ› οΈ Usage

Just run the training script:

python train_xor.py

You’ll see:

  • Loss decreasing over training steps
  • Final predictions on the XOR dataset
  • Optional loss plot (if matplotlib is installed)

πŸ“ Project Structure

├── micrograd/              # Python module
│   ├── __init__.py         # Marks as a Python package
│   ├── engine.py           # Core autograd Value class
│   ├── nn.py               # Neuron, Layer, MLP classes
│   ├── optim.py            # SGD optimizer (and later Adam)
│   └── loss.py             # Loss functions (MSE)
├── train_xor.py            # Script to train on the XOR dataset
└── README.md               # This file

📊 Training Example (XOR)

XOR dataset:

0 ⊕ 0 → 0
0 ⊕ 1 → 1
1 ⊕ 0 → 1
1 ⊕ 1 → 0
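Training minimizes the mean squared error between predictions and these targets (the MSE in `loss.py`). As a plain-Python sanity check of the formula, independent of the repo's code:

```python
def mse(preds, targets):
    """Mean squared error: average of (pred - target)^2 over all examples."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

targets = [0.0, 1.0, 1.0, 0.0]     # XOR truth-table outputs
untrained = [0.5, 0.5, 0.5, 0.5]   # a network that always guesses 0.5
print(mse(untrained, targets))     # 0.25
```

An untrained network hovering at 0.5 scores an MSE of 0.25; training drives this toward 0.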

The training script:

  • Builds the MLP
  • Loops forward → backprop → update
  • Prints loss and final accuracy
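The loop structure (forward pass, gradient computation, parameter update) can be sketched on a toy one-parameter model; this is an illustration of the SGD pattern with a hand-computed gradient, not the repo's MLP API:

```python
# Fit y = 2x with a single weight w and loss L(w) = (w*x - y)^2.
w = 0.0     # parameter, starts untrained
lr = 0.1    # learning rate

for step in range(50):
    x, y = 1.0, 2.0
    pred = w * x                  # forward pass
    loss = (pred - y) ** 2        # compute loss
    grad = 2 * (pred - y) * x     # backward pass (manual dL/dw)
    w -= lr * grad                # SGD update

print(round(w, 4))   # converges toward 2.0
```

The real script does the same three steps per iteration, except the backward pass is automated by the autograd engine and the update touches every parameter of the MLP.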

Example output after training:

step 0, loss = 2.31  
step 100, loss = 0.21 
...
step 900, loss = 0.02  

Trained model predictions:
Input: [0.0, 0.0], Predicted: 0.0111, True: 0.0
Input: [0.0, 1.0], Predicted: 0.9785, True: 1.0
Input: [1.0, 0.0], Predicted: 0.9831, True: 1.0
Input: [1.0, 1.0], Predicted: 0.0142, True: 0.0

🀝 Contributing

Contributions are welcome, whether that means improving the documentation or adding features such as:

  • Activation functions (ReLU, Sigmoid)
  • Optimizers (Adam)
  • Batch support
  • More demos (MNIST, regression)

Feel free to open issues or pull requests 🎉
