Nexdl (Next Deep Learning) is a minimal, hackable, and educational autograd engine and neural network library for Python. It is designed to be a transparent and easily understandable implementation of modern deep learning concepts, heavily inspired by PyTorch.
The core philosophy of Nexdl is simplicity and transparency.
- Pure Python/NumPy: The entire core logic is written in high-level Python using NumPy for backend operations. This makes the code easy to read, debug, and modify.
- PyTorch-like API: If you know PyTorch, you already know Nexdl. We aim to keep the API surface as close to PyTorch as possible for familiar usage.
- Hackable: Nexdl is built for research and education. Want to implement a custom autograd function? It's just a few lines of Python. Need to see how backpropagation works? Just read the `tensor.py` file.
- Automatic Differentiation (Autograd): full reverse-mode automatic differentiation.
- Dynamic Computational Graph: Define-by-Run execution.
- Neural Network Layers: use `nexdl.nn` for standard layers like `Linear`, `Conv2d` (coming soon), `RNN`, etc.
- Optimizers: `SGD`, `Adam`, `AdamW`.
- Extensible: Easily add new operations by subclassing `Function`.
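As a reference point for what the autograd engine automates, here is a plain-NumPy sketch (illustrative only, not Nexdl code) that derives a reverse-mode gradient by hand and checks it against finite differences:

```python
import numpy as np

def f(x, y):
    # Scalar loss: sum over z, where z = x * y + sum(x)
    return np.sum(x * y + np.sum(x))

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
n = x.size

# Analytic gradients, derived by hand:
# loss = sum_j x_j*y_j + n * sum_k x_k, so d(loss)/dx_i = y_i + n
grad_x = y + n
grad_y = x.copy()

# Numerical check via central finite differences
eps = 1e-6
num_grad_x = np.zeros_like(x)
for i in range(n):
    d = np.zeros_like(x)
    d[i] = eps
    num_grad_x[i] = (f(x + d, y) - f(x - d, y)) / (2 * eps)

print(grad_x)      # analytic: y_i + n
print(num_grad_x)  # should agree closely
```

An autograd engine produces `grad_x` and `grad_y` automatically by recording each operation and replaying the chain rule in reverse.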
You can install Nexdl directly from the source.
```bash
git clone https://github.com/yourusername/Nexdl.git
cd Nexdl
pip install -e .
```

Prerequisites:
- Python 3.7+
- NumPy
The core of Nexdl is the `Tensor` object, which tracks operations for automatic differentiation.
```python
import nexdl as nx

# Create tensors
x = nx.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = nx.tensor([4.0, 5.0, 6.0], requires_grad=True)

# Perform operations
z = x * y + x.sum()

# Compute gradients
z.sum().backward()

print(f"x.grad: {x.grad}")
print(f"y.grad: {y.grad}")
```

Nexdl provides a `Module` class to organize your neural networks, just like PyTorch.
```python
import nexdl as nx
import nexdl.nn as nn
import nexdl.optim as optim

class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 20)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(20, 1)

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        return x

# Initialize model and optimizer
model = SimpleNet()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Dummy input
input_data = nx.randn(5, 10)
target = nx.randn(5, 1)

# Training step
optimizer.zero_grad()
output = model(input_data)
loss = ((output - target) ** 2).mean()  # MSE loss
loss.backward()
optimizer.step()

print(f"Loss: {loss.item()}")
```

The `Tensor` class is the main data structure. It wraps a NumPy array and adds autograd capabilities.
- `requires_grad=True`: Tracks operations on this tensor.
- `backward()`: Computes gradients for all tensors in the computational graph that have `requires_grad=True`.
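The mechanics can be illustrated with a self-contained, micrograd-style sketch (plain NumPy; `TinyTensor` and its internals are illustrative, not Nexdl's actual implementation):

```python
import numpy as np

class TinyTensor:
    """Sketch of a Tensor-like wrapper: records how it was produced
    so backward() can apply the chain rule in reverse."""
    def __init__(self, data, parents=(), backward_fn=None, requires_grad=False):
        self.data = np.asarray(data, dtype=float)
        self.grad = None
        self.requires_grad = requires_grad or any(p.requires_grad for p in parents)
        self._parents = parents
        self._backward_fn = backward_fn  # maps grad_output -> grads of parents

    def __mul__(self, other):
        def backward_fn(g):
            return (g * other.data, g * self.data)
        return TinyTensor(self.data * other.data, (self, other), backward_fn)

    def sum(self):
        def backward_fn(g):
            return (g * np.ones_like(self.data),)
        return TinyTensor(self.data.sum(), (self,), backward_fn)

    def backward(self):
        # Seed the output gradient, then walk the graph in reverse.
        # (For simplicity this walk assumes each tensor is used once.)
        self.grad = np.ones_like(self.data)
        stack = [self]
        while stack:
            t = stack.pop()
            if t._backward_fn is None:
                continue
            for parent, g in zip(t._parents, t._backward_fn(t.grad)):
                if parent.requires_grad:
                    parent.grad = g if parent.grad is None else parent.grad + g
                    stack.append(parent)

x = TinyTensor([1.0, 2.0, 3.0], requires_grad=True)
y = TinyTensor([4.0, 5.0, 6.0], requires_grad=True)
z = (x * y).sum()
z.backward()
print(x.grad)  # gradient of sum(x*y) w.r.t. x is y
print(y.grad)  # and w.r.t. y is x
```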
Every operation (`add`, `sub`, `mul`, etc.) is implemented as a subclass of `Function`.
- `forward(ctx, *args)`: Computes the output.
- `backward(ctx, grad_output)`: Computes the gradients for the inputs.
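Under that contract, a custom op might look like the following sketch (plain NumPy; `Ctx`, `Exp`, and `save_for_backward` are illustrative stand-ins, not Nexdl's actual API):

```python
import numpy as np

class Ctx:
    """Stand-in for the context object passed to forward/backward."""
    def __init__(self):
        self.saved = ()
    def save_for_backward(self, *arrays):
        self.saved = arrays

class Exp:
    """A Function-style op following the forward(ctx, *args) /
    backward(ctx, grad_output) contract."""
    @staticmethod
    def forward(ctx, x):
        out = np.exp(x)
        ctx.save_for_backward(out)  # reuse the output in backward
        return out

    @staticmethod
    def backward(ctx, grad_output):
        (out,) = ctx.saved
        return grad_output * out    # d/dx exp(x) = exp(x)

ctx = Ctx()
y = Exp.forward(ctx, np.array([0.0, 1.0]))
grad_x = Exp.backward(ctx, np.ones(2))
print(y)       # exp of the inputs
print(grad_x)  # equals y when grad_output is all ones
```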
The `nn.Module` class is the base class for all neural network modules.
- Automatically tracks `Parameter`s.
- Supports `state_dict()` for saving/loading models.
- Handles `train()` and `eval()` modes.
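The parameter-tracking behavior can be sketched in a few lines (illustrative only; `SketchModule` and `SketchLinear` are not Nexdl's actual classes): intercepting `__setattr__` lets the base class register parameters and submodules as they are assigned.

```python
import numpy as np

class SketchModule:
    """Sketch of a Module base class that auto-registers parameters
    (here: plain NumPy arrays) and submodules via __setattr__."""
    def __setattr__(self, name, value):
        params = self.__dict__.setdefault("_params", {})
        modules = self.__dict__.setdefault("_modules", {})
        if isinstance(value, np.ndarray):
            params[name] = value
        elif isinstance(value, SketchModule):
            modules[name] = value
        object.__setattr__(self, name, value)

    def parameters(self):
        # Yield own parameters, then recurse into submodules
        yield from self.__dict__.get("_params", {}).values()
        for m in self.__dict__.get("_modules", {}).values():
            yield from m.parameters()

class SketchLinear(SketchModule):
    def __init__(self, n_in, n_out):
        self.weight = np.zeros((n_out, n_in))
        self.bias = np.zeros(n_out)

class Net(SketchModule):
    def __init__(self):
        self.fc1 = SketchLinear(10, 20)
        self.fc2 = SketchLinear(20, 1)

net = Net()
total = sum(p.size for p in net.parameters())
print(total)  # 10*20 + 20 + 20*1 + 1 = 241
```

This is why `model.parameters()` in the training example above can hand the optimizer every weight and bias without any manual registration.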
Nexdl is an open project for learning and experimentation. Pull requests are welcome!
- Fork the repository.
- Create your feature branch (`git checkout -b feature/amazing-feature`).
- Commit your changes (`git commit -m 'Add some amazing feature'`).
- Push to the branch (`git push origin feature/amazing-feature`).
- Open a Pull Request.
MIT