# Micrograd

A tiny scalar-valued automatic differentiation engine with a small neural network library built on top of it.

## Overview

Micrograd demonstrates the core concepts of automatic differentiation and neural networks by building everything from scratch using only Python's standard library (plus PyTorch for testing). The implementation focuses on educational clarity rather than performance.

## Features

- **Automatic Differentiation**: scalar-valued autograd engine with a dynamic computation graph
- **Neural Network Building Blocks**: neurons, layers, and multi-layer perceptrons (MLPs)
- **Common Operations**: addition, multiplication, power, ReLU activation, exp, log, and more
- **Backpropagation**: automatic gradient computation through the computation graph
- **PyTorch Compatibility Testing**: verify correctness against PyTorch's autograd

## File Structure

```text
micrograd/
├── core.py      # Value class with automatic differentiation
├── total_nn.py  # Neural network components (Neuron, Layer, MLP)
├── test.py      # Test suite comparing against PyTorch
└── README.md    # This file
```

## Core Components

### Value Class (`core.py`)

The heart of the project: a scalar value wrapper that tracks gradients.

```python
from core import Value

# Create values and build a computation graph
a = Value(2.0)
b = Value(3.0)
c = a * b + a.exp()  # c = 2*3 + e^2 ≈ 13.389
c.backward()         # Compute gradients

print(f"c = {c.data}")      # Forward pass result
print(f"dc/da = {a.grad}")  # dc/da = b + e^a ≈ 10.389
```

**Supported operations:**

- Arithmetic: `+`, `-`, `*`, `/`, `**`
- Activation: `relu()`
- Mathematical: `exp()`, `log()`
- Automatic gradient computation via `backward()`
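To make the mechanics concrete, here is a condensed, self-contained sketch of how a scalar autograd `Value` class of this kind typically works (illustrative only; the actual `core.py` supports more operations and may differ in details):

```python
# Minimal sketch of a scalar autograd Value: each operation records its
# inputs and a closure that propagates gradients via the chain rule.
class Value:
    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad   # d(a+b)/da = 1
            other.grad += out.grad  # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule
        # from the output back to the leaves.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a = Value(2.0)
b = Value(3.0)
c = a * b + a * a  # c = 2*3 + 2*2 = 10
c.backward()
print(c.data, a.grad, b.grad)  # 10.0, dc/da = b + 2a = 7.0, dc/db = a = 2.0
```

Note how gradients are accumulated with `+=` rather than assigned: a value used in several places (like `a` above) receives one contribution per use.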

### Neural Networks (`total_nn.py`)

Simple neural network components built on top of the `Value` class:

```python
from total_nn import MLP

# Create a 3-layer MLP: 3 inputs → 4 neurons → 4 neurons → 1 output
model = MLP(3, [4, 4, 1])

# Forward pass
x = [2.0, 3.0, -1.0]
pred = model(x)
print(f"Prediction: {pred.data}")

# Get all parameters for training
params = model.parameters()
print(f"Total parameters: {len(params)}")
```
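As a sanity check, the parameter count for this architecture can be computed by hand: each neuron holds one weight per input plus a bias, so a layer mapping `nin` inputs to `nout` neurons holds `(nin + 1) * nout` parameters (a quick sketch, assuming the usual neuron-with-bias layout):

```python
# Parameter count for MLP(3, [4, 4, 1]): inputs, then layer widths.
sizes = [3, 4, 4, 1]
n_params = sum((nin + 1) * nout for nin, nout in zip(sizes, sizes[1:]))
print(n_params)  # 41  (16 + 20 + 5)
```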

### Testing (`test.py`)

A test suite that verifies the implementation against PyTorch:

```bash
python test.py
```

The tests cover:

- Basic operations and their gradients
- Complex computational graphs
- Numerical gradient verification against PyTorch
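The idea behind numerical gradient verification is simple (illustrative sketch only; the repo's actual tests compare against PyTorch's autograd): evaluate the function at `x ± h` and check that the central finite difference agrees with the analytic derivative.

```python
import math

# Verify d/dx (x^2 + e^x) = 2x + e^x with a central finite difference.
def f(x):
    return x * x + math.exp(x)

def analytic_grad(x):
    return 2 * x + math.exp(x)

x, h = 1.5, 1e-5
numeric = (f(x + h) - f(x - h)) / (2 * h)
print(abs(numeric - analytic_grad(x)) < 1e-6)  # True
```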

## Example Usage

Here's a simple example showing automatic differentiation in action:

```python
from core import Value

# Build a computation graph
x = Value(1.0)
y = Value(2.0)
z = x * y + y.relu()  # z = 1*2 + relu(2) = 4

# Compute gradients
z.backward()

print(f"z = {z.data}")      # Output: 4.0
print(f"dz/dx = {x.grad}")  # dz/dx = y = 2.0
print(f"dz/dy = {y.grad}")  # dz/dy = x + relu'(y) = 1 + 1 = 2.0
```

## Learning Objectives

This project demonstrates:

1. **Automatic Differentiation**: how computational graphs enable automatic gradient computation
2. **Backpropagation**: the chain rule applied systematically to neural networks
3. **Neural Network Fundamentals**: building blocks from single neurons to multi-layer networks
4. **Gradient-Based Optimization**: the foundation for training neural networks
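The last point, gradient-based optimization, follows the standard pattern: compute a loss, call `backward()`, then nudge each parameter against its gradient (with `Value` objects the update would be `p.data -= lr * p.grad`). A minimal sketch on a toy function with a known analytic gradient:

```python
# Gradient descent on f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
# The minimum is at w = 3.
w = 0.0
lr = 0.1
for _ in range(100):
    grad = 2 * (w - 3)  # analytic df/dw
    w -= lr * grad      # step against the gradient
print(round(w, 4))  # 3.0 (converged to the minimum)
```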

## Requirements

- Python 3.6+
- PyTorch (for testing only)

## Installation & Running

1. Clone the repository:

   ```bash
   git clone https://github.com/gamal1osama/micrograd.git
   cd micrograd
   ```

2. Run the tests:

   ```bash
   python test.py
   ```

3. Experiment with the code in a REPL:

   ```python
   $ python3
   >>> from core import Value
   >>> from total_nn import MLP
   >>> # Start experimenting!
   ```

## Inspiration

This project is based on Andrej Karpathy's micrograd and his YouTube tutorial on building it from scratch.

