This project benchmarks a custom implementation of the Adam optimizer against PyTorch's built-in Adam optimizer on the MNIST dataset. The comparison is visualized as test accuracy over training epochs.
- Custom implementation of the Adam optimization algorithm
- Training on the MNIST handwritten digits dataset
- Comparison with PyTorch's built-in Adam
- Visualization of model accuracy over time
- Lightweight, pure PyTorch and NumPy based training loop
```
.
├── main.py           # Main script for training and comparison
├── adam.png          # Output accuracy plot (auto-generated)
├── requirements.txt  # Python dependencies
└── README.md         # Project documentation
```
```
Input: 28 x 28 (flattened to 784)
Dropout(0.4)
Linear: 784 -> 1200
Dropout(0.4)
Linear: 1200 -> 10
LogSoftmax
```
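The architecture above could be assembled in PyTorch roughly as follows. This is a sketch, not the code from `main.py`; in particular, the README does not state which activation (if any) sits between the linear layers, so the `ReLU` here is an assumption:

```python
import torch.nn as nn

# Sketch of the network described above; the real model lives in main.py.
model = nn.Sequential(
    nn.Flatten(),           # 28 x 28 -> 784
    nn.Dropout(0.4),
    nn.Linear(784, 1200),
    nn.ReLU(),              # assumed activation; not stated in this README
    nn.Dropout(0.4),
    nn.Linear(1200, 10),
    nn.LogSoftmax(dim=1),   # pairs naturally with nn.NLLLoss
)
```

`LogSoftmax` output is typically trained with `nn.NLLLoss`, which together are equivalent to `nn.CrossEntropyLoss` on raw logits.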
- PyTorch's `torch.optim.Adam`
- Custom `Adam` class:
  - Manually updates weights using:
    - First and second moment estimates
    - Bias correction
    - Learning rate decay
  - Fully vectorized using PyTorch
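The update rule a custom Adam class performs can be sketched in NumPy as a single step. Function and hyperparameter names below are the standard ones from the Adam algorithm, not necessarily those used in `main.py`, and learning rate decay is omitted for brevity:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update on a parameter array (illustrative sketch)."""
    m = beta1 * m + (1 - beta1) * grad        # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```

In the real training loop this would be applied to every parameter tensor after `loss.backward()`, with `m`, `v`, and the step counter `t` kept as per-parameter state.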
A plot (`adam.png`) comparing the test accuracy of both optimizers over training, recorded every 100 epochs.
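The plot could be produced with matplotlib along these lines. The accuracy values below are placeholders purely for illustration; `main.py` records the real histories during training:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the figure can be saved without a display
import matplotlib.pyplot as plt

# Hypothetical accuracy histories, one point per 100 epochs.
epochs = list(range(0, 1000, 100))
custom_acc = [0.10, 0.55, 0.72, 0.80, 0.85, 0.88, 0.90, 0.91, 0.92, 0.93]
builtin_acc = [0.10, 0.60, 0.75, 0.82, 0.86, 0.89, 0.91, 0.92, 0.93, 0.93]

plt.plot(epochs, custom_acc, label="Custom Adam")
plt.plot(epochs, builtin_acc, label="torch.optim.Adam")
plt.xlabel("Epoch")
plt.ylabel("Test accuracy")
plt.legend()
plt.savefig("adam.png")
```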
```bash
git clone https://github.com/happybear-21/adam.py
cd adam.py
python -m venv venv
source venv/bin/activate  # or venv\Scripts\activate on Windows
pip install -r requirements.txt
python main.py
```

This will:
- Train two models, one with each optimizer
- Plot their test accuracy during training
- Save the results to `adam.png`
