This neural network library uses automatic differentiation ('autograd') to compute gradients of the loss function with respect to the model's parameters. It is designed to be simple and easy to understand, and is intended primarily for learning purposes.
See `tensor.py` for the autograd implementation.
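As a rough illustration of how such an autograd system is typically used (a minimal sketch only: the import path, the `autograd` flag, and the `backward()`/`grad` interface are assumptions here and may differ from what `tensor.py` actually exposes):

```python
from nn.tensor import Tensor  # assumed import path; see tensor.py

# Build a tiny computation graph: y = w * x + b
x = Tensor([2.0], autograd=True)
w = Tensor([3.0], autograd=True)
b = Tensor([1.0], autograd=True)
y = w * x + b      # the forward pass records the operations

# Backpropagate from y; each tensor accumulates its gradient in .grad.
# (The real backward() may require an explicit seed gradient.)
y.backward()
print(w.grad)      # dy/dw = x
print(x.grad)      # dy/dx = w
```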
The main components of the library are:
- Tensor: A multi-dimensional array with support for autograd.
- Layers: A collection of layers that can be stacked together to form a neural network.
- Activations: A collection of activation functions.
- Optimizers: A collection of optimization algorithms.
- Losses: A collection of loss functions used to evaluate the model's performance.
- Utils: Utility functions such as `DataLoader`, `save_object`, and `load_object`.
- Trainer: A class that runs the training loop for you; alternatively, you can write a custom training loop (see the sketch after the usage example below).
Example usage, building and training a small classifier (e.g. on flattened 28×28 images with 10 classes):

```python
from nn.layers import Sequential, Dense, Dropout
from nn.activations import ReLu, Softmax
from nn.optim import Adam
from nn.losses import CategoricalCrossEntropy as CCE
from nn.utils import DataLoader, save_object, load_object
from nn.trainer import Trainer

# A simple feed-forward classifier for flattened 28x28 inputs and 10 classes.
clf = Sequential([
    Dense(28*28, 128),
    ReLu(),
    Dropout(0.3),
    Dense(128, 64),
    ReLu(),
    Dropout(0.3),
    Dense(64, 10),
    Softmax()
])

# Training batches need autograd enabled; validation batches do not.
train_loader = DataLoader(X_train, y_train, batch_size=128, shuffle=True, autograd=True)
val_loader = DataLoader(X_val, y_val, batch_size=128, shuffle=True, autograd=False)

# The Trainer wires the model, loss, and optimizer together.
trainer = Trainer(clf, CCE(), Adam(clf.get_parameters(), lr=0.01))
history = trainer.train(train_loader, val_loader, epochs=10)
```
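For finer control, the same components can be combined in a hand-written training loop instead of using `Trainer`. The sketch below reuses the model, loss, optimizer, and loader defined above; the exact method names (`forward`, `backward`, `step`, the gradient-reset call) and the `save_object`/`load_object` argument order are assumptions and may differ from the library's actual API.

```python
# A hand-written training loop using the same components (sketch).
loss_fn = CCE()
optimizer = Adam(clf.get_parameters(), lr=0.01)

for epoch in range(10):
    for x_batch, y_batch in train_loader:        # assumes DataLoader is iterable
        pred = clf.forward(x_batch)              # forward pass through the model
        loss = loss_fn.forward(pred, y_batch)    # compute the batch loss
        loss.backward()                          # backpropagate gradients
        optimizer.step()                         # update the parameters
        optimizer.zero_grad()                    # reset gradients (name assumed)

# Persist and restore the trained model with the utils helpers
# (the file name here is just an example).
save_object(clf, "clf.pkl")
clf_restored = load_object("clf.pkl")
```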