By Rae Chipera | Jaxorik AI Research Group
A neural network framework built from scratch in pure NumPy, modeled after PyTorch's API. PyRae is designed for anyone who wants full visibility into every line of a forward pass, backward pass, and weight update — no black boxes.
Originally developed as part of Carnegie Mellon University's Deep Learning bootcamp curriculum.
**Layers and losses (`PyRae.nn`)**

| Class | Description |
|---|---|
| `Linear` | Fully connected layer with Xavier initialization |
| `ReLU` | ReLU activation function |
| `Sequential` | Chains layers into a feed-forward network |
| `CrossEntropyLoss` | Cross-entropy loss with numerical stability (LogSumExp) |
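For orientation, here is roughly what two of those descriptions mean in plain NumPy. This is an illustrative sketch, not PyRae's actual source; the function names and array shapes are assumptions.

```python
import numpy as np

# Illustrative sketch, not PyRae's source. Assumed shapes:
# weights (in_features, out_features), logits (batch, classes).

def xavier_init(in_features, out_features, rng=None):
    """Xavier/Glorot uniform init: limit = sqrt(6 / (fan_in + fan_out))."""
    if rng is None:
        rng = np.random.default_rng()
    limit = np.sqrt(6.0 / (in_features + out_features))
    return rng.uniform(-limit, limit, size=(in_features, out_features))

def cross_entropy(logits, targets):
    """Mean negative log-likelihood, stabilized with the LogSumExp trick."""
    # Subtracting the row-wise max before exponentiating avoids overflow
    # and leaves the softmax unchanged.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()
```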
**Optimizers (`PyRae.optim`)**

| Class | Description |
|---|---|
| `SGD` | Stochastic Gradient Descent with optional momentum |
| `Adam` | Adam optimizer with bias correction |
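The update rules behind those two optimizers, sketched in plain NumPy for reference. Again an illustrative sketch with assumed names (`velocity`, `m`, `v`, `t`), not PyRae's internal implementation.

```python
import numpy as np

# SGD with momentum (illustrative): velocity accumulates an exponentially
# decaying sum of past gradients, and parameters step along it.
def sgd_momentum_step(param, grad, velocity, lr=0.01, momentum=0.9):
    velocity = momentum * velocity + grad
    param -= lr * velocity
    return param, velocity

# Adam (illustrative): first/second moment estimates are corrected for
# their zero initialization before the update. t is the step count (from 1).
def adam_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)  # bias-corrected first moment
    v_hat = v / (1 - beta2**t)  # bias-corrected second moment
    param -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```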
**Quick start**

```python
from PyRae import nn, optim

# Build a model
model = nn.Sequential(
    nn.Linear(3, 4),
    nn.ReLU(),
    nn.Linear(4, 2)
)

# Define loss and optimizer
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.SGD(model, lr=0.01, momentum=0.9)

# Training loop ("dataloader" is any iterable of (x, y) batches;
# see the sketch after the use-case list below)
for x_batch, y_batch in dataloader:
    optimizer.zero_grad()
    out = model.forward(x_batch)
    loss = loss_fn.forward(out, y_batch)
    model.backward(loss_fn)
    optimizer.step()
```

PyTorch and TensorFlow are great, right up until you need to understand or control exactly what's happening under the hood. PyRae is useful for:
- Learning how backpropagation actually works
- Research requiring custom gradient logic
- Teaching deep learning fundamentals without framework magic
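The quick start treats `dataloader` as any iterable of `(x, y)` mini-batches. A minimal stand-in, purely illustrative and not part of PyRae, could look like this:

```python
import numpy as np

def dataloader(X, y, batch_size=32, seed=0):
    """Yield shuffled (x_batch, y_batch) mini-batches from NumPy arrays."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = indices[start:start + batch_size]
        yield X[batch], y[batch]

# Toy data matching the quick-start model: 3 input features, 2 classes.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = rng.integers(0, 2, size=100)

for x_batch, y_batch in dataloader(X, y):
    ...  # run the training-loop body from the quick start here
```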
**Requirements**

- Python 3.x
- NumPy
**License**

MIT © Rae Chipera / Jaxorik AI Research Group