GradFlow is a Python library that implements automatic differentiation from scratch. It provides the core building blocks for constructing and training neural networks.
- Automatic Differentiation:
  - Implements forward and backward pass automatic differentiation using a custom `GradNode` class.
  - Supports common arithmetic operations, activation functions (ReLU, sigmoid, tanh, etc.), and custom operations.
- Neural Network Framework:
  - Provides classes for building neural network components:
    - `Neuron`: Represents a single neuron with weights and biases.
    - `Layer`: Represents a layer of neurons.
    - `MLP`: Represents a Multi-Layer Perceptron (MLP) network.
  - Enables the construction and training of various neural network architectures.
- Computational Graph Visualization:
  - Includes functionality to visualize the computational graph of the neural network, aiding in understanding the flow of data and gradients.
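For a feel of the network-building API, here is a minimal usage sketch. The import path and the `MLP(n_inputs, layer_sizes)` constructor signature are assumptions made for illustration, not confirmed from the GradFlow source.

```python
# Hypothetical usage sketch; import path and constructor signature are assumed.
from gradflow import MLP

# An MLP with 3 inputs, two hidden layers of 4 neurons each, and 1 output.
model = MLP(3, [4, 4, 1])

x = [2.0, 3.0, -1.0]   # one input sample
y_pred = model(x)      # forward pass returns a GradNode
print(y_pred.data)     # scalar prediction
```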
- Define the Computational Graph:
  - Create `GradNode` objects to represent variables, constants, and operations.
  - Connect `GradNode` objects to form the computational graph.
- Forward Pass:
  - Perform the forward pass through the graph by accessing the `data` property of the final node.
- Backward Pass:
  - Compute gradients for all nodes in the graph by calling the `backward` method of the final node.
- Update Parameters:
  - Use the computed gradients to update the weights and biases of your neural network using an optimization algorithm (e.g., gradient descent). A combined sketch of these four steps follows this list.
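Putting the four steps together, a workflow sketch might look like the following. The `GradNode` constructor signature is assumed, and `grad` is shown here as a plain attribute, although the notes below refer to it as `grad()`; check the source for the exact API.

```python
from gradflow import GradNode  # assumed import path

# 1. Define the computational graph: y = w * x + b
x = GradNode(2.0)
w = GradNode(-3.0)
b = GradNode(1.0)
y = w * x + b

# 2. Forward pass: read the value off the final node.
print(y.data)   # -5.0

# 3. Backward pass: populate gradients throughout the graph.
y.backward()

# 4. Update parameters with plain gradient descent.
lr = 0.01
w.data -= lr * w.grad   # dy/dw = x = 2.0
b.data -= lr * b.grad   # dy/db = 1.0
```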
- Magic Methods: These are special methods in Python (also known as dunder methods) that help us customize the behavior of built-in operators.
  - For example, `__add__()` is called when the `+` operator is used with an object.
  - In gradFlow, magic methods are used to overload operators like `+`, `-`, `*`, `/`, etc., for `GradNode` objects, ensuring that the forward and backward passes are correctly handled for these operations (see the sketch below).
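As an illustration of the pattern, an `__add__` overload for a `GradNode`-style class might look roughly like this. This is a sketch following the descriptions in this README, not GradFlow's exact implementation.

```python
class GradNode:
    def __init__(self, data, _children=()):
        self.data = data                # forward-pass value
        self.grad = 0.0                 # gradient, filled in by backward()
        self._backward = lambda: None   # gradient logic for this node's children
        self._prev = set(_children)     # child nodes in the graph

    def __add__(self, other):
        # Invoked when `a + b` is written with GradNode operands.
        other = other if isinstance(other, GradNode) else GradNode(other)
        out = GradNode(self.data + other.data, (self, other))

        def _backward():
            # d(out)/d(self) = 1 and d(out)/d(other) = 1,
            # so each operand accumulates out.grad unchanged.
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward

        return out
```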
- Single Leading Underscore (`_`): Indicates that a parameter or variable is intended for internal use only and should not be accessed directly from outside the class or module.
- Trailing Underscore (`var_`): Used to avoid conflicts with Python keywords. Both conventions are illustrated below.
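A quick, generic illustration of both conventions (not taken from GradFlow):

```python
class Tracker:
    def __init__(self, class_):   # trailing underscore: `class` is a reserved keyword
        self.class_ = class_
        self._count = 0           # leading underscore: internal state, not public API

    def _bump(self):              # internal helper; external code should not call it
        self._count += 1
```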
- Forward Pass: In the forward pass, the initial values of all nodes are set, and the output of the graph is computed.
- Backward Pass:
  - Starts from the final node of the graph.
  - The gradient of the final node with respect to itself is initialized to 1.
  - At each node, the gradient is calculated using the chain rule.
  - For example, if node `e` is the result of `c + b`, then `c.grad()` is calculated as `e.grad() * 1.0`.
  - If node `e` is the result of `c * b`, then `c.grad()` is calculated as `e.grad() * b`.
  - So, essentially, for each child node: `child.grad() = parent.grad() *` $\frac{\partial\, \text{parent}}{\partial\, \text{child}}$.
  - The `_backward()` function within each `GradNode` stores the logic to compute the gradients of its children (a sketch follows below).
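To make the chain rule concrete, here is a sketch of what the `_backward` logic for multiplication could look like, with the numbers it would produce for the `e = c * b` example above. The structure mirrors the description in this README; GradFlow's actual code may differ in detail.

```python
def __mul__(self, other):
    # Sketch: assumes the GradNode structure from the __add__ sketch above.
    other = other if isinstance(other, GradNode) else GradNode(other)
    out = GradNode(self.data * other.data, (self, other))

    def _backward():
        # Chain rule: d(out)/d(self) = other, d(out)/d(other) = self.
        self.grad += out.grad * other.data
        other.grad += out.grad * self.data
    out._backward = _backward

    return out

# Worked example: e = c * b with c = 2.0, b = 3.0.
# After e.backward(): e.grad == 1.0,
# c.grad == e.grad * b.data == 3.0, and b.grad == e.grad * c.data == 2.0.
```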
```bash
# Clone the repository
git clone https://github.com/anantmehta33/GradFlow.git

# Install dependencies (assuming you have pip installed)
pip install -r requirements.txt
```
We welcome contributions from the community. Here's how you can contribute:
- Fork the repository: Create your own copy of the project on GitHub.
- Make changes: Implement new features, fix bugs, or improve existing code.
- Create a pull request: Submit a pull request to the main repository with your changes.
- Ajay Jagannath
- Anant Mehta
The work of Andrej Karpathy inspired this project.
MIT License