This project demonstrates how to build, train, evaluate, and deploy a neural network classifier from scratch in PyTorch.
The goal was to understand the full deep learning workflow end to end rather than relying on high-level training frameworks.
The project includes:
- Model construction using nn.Module
- Autograd and backpropagation
- Training loop with optimizer and loss
- Validation evaluation
- Model saving and inference
- Batch prediction
- Confidence calibration
- Histogram visualization
This is intended as a foundational deep learning portfolio project.
Tech stack:
- Python
- PyTorch
- NumPy
- Matplotlib
- Pandas
- Logging
A simple multilayer perceptron (MLP):
Input → Linear → ReLU → Linear → ReLU → Linear → Output (logits)
Example default configuration:
- Input: 2 features
- Hidden Layer 1: 64 neurons
- Hidden Layer 2: 32 neurons
- Output: 2 classes
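As a hedged sketch, this architecture can be expressed as a small nn.Module (the class name `TinyMLP` and the keyword defaults mirror the configuration above and are illustrative, not the project's exact code):

```python
import torch.nn as nn

class TinyMLP(nn.Module):
    def __init__(self, in_features=2, hidden1=64, hidden2=32, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden1),
            nn.ReLU(),
            nn.Linear(hidden1, hidden2),
            nn.ReLU(),
            nn.Linear(hidden2, num_classes),  # outputs raw logits, no softmax here
        )

    def forward(self, x):
        return self.net(x)
```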
The training process follows the standard deep learning loop:
- Forward pass
- Loss calculation
- Backpropagation
- Weight update
- Accuracy tracking
- Validation evaluation
Loss Function:
- CrossEntropyLoss()
Optimizer:
- Adam
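Putting those pieces together, a minimal sketch of the loop (the synthetic data, `train_loader`, and the learning rate are illustrative stand-ins; `TinyMLP` is the class sketched in the architecture section):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in data; the real project would load its own dataset.
X = torch.randn(256, 2)
y = torch.randint(0, 2, (256,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = TinyMLP()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    model.train()
    correct, total = 0, 0
    for x_batch, y_batch in train_loader:
        optimizer.zero_grad()
        logits = model(x_batch)            # forward pass
        loss = criterion(logits, y_batch)  # loss calculation
        loss.backward()                    # backpropagation via autograd
        optimizer.step()                   # weight update
        correct += (logits.argmax(dim=1) == y_batch).sum().item()
        total += y_batch.size(0)
    print(f"epoch {epoch + 1}: train accuracy {correct / total:.2%}")  # accuracy tracking
```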
Formal evaluation is performed on a held-out validation dataset.
Metrics computed:
- Validation loss
- Validation accuracy
The model is switched to evaluation mode with `model.eval()`, and gradient tracking is disabled by running the evaluation loop inside a `with torch.no_grad():` block.
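A minimal validation pass under those conventions (`val_loader` is an assumed DataLoader; `model` and `criterion` come from the training sketch above):

```python
import torch

model.eval()                      # disable training-specific behaviour
val_loss, correct, total = 0.0, 0, 0
with torch.no_grad():             # no computation graph is built during evaluation
    for x_batch, y_batch in val_loader:
        logits = model(x_batch)
        val_loss += criterion(logits, y_batch).item() * y_batch.size(0)
        correct += (logits.argmax(dim=1) == y_batch).sum().item()
        total += y_batch.size(0)

print(f"validation loss: {val_loss / total:.4f}")
print(f"validation accuracy: {correct / total:.2%}")
```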
The project supports batch inference on new data.
Features:
- Batch input loading
- Softmax probability output
- Class prediction using argmax
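A sketch of a batch prediction step combining these pieces (`new_batch` is a placeholder tensor; the trained `model` is assumed from above):

```python
import torch
import torch.nn.functional as F

new_batch = torch.randn(4, 2)  # placeholder batch: 4 samples, 2 features each

model.eval()
with torch.no_grad():
    logits = model(new_batch)          # raw logits, shape [4, 2]
    probs = F.softmax(logits, dim=1)   # per-class probabilities
    preds = probs.argmax(dim=1)        # predicted class per sample

for pred, prob in zip(preds, probs):
    print(f"Predicted Class: {pred.item()}")
    print(f"Probabilities: {[round(p, 2) for p in prob.tolist()]}")
```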
Example output per batch:

```
Predicted Class: 0
Probabilities: [0.93, 0.07]
```
The project includes confidence calibration analysis.
Steps performed:
- Softmax probability extraction
- Confidence histogram generation
- Prediction confidence distribution analysis
This helps answer:
"How confident is the model in its predictions?"
The project includes the following plots:
- Test Accuracy vs. Epochs
- Test Accuracy vs. Learning Rate
- Confidence distribution histogram
These plots help interpret:
- Model learning behaviour
- Confidence reliability
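For illustration, the accuracy-vs-epochs curve takes only a few lines of Matplotlib (`epoch_accuracies` is an assumed list of per-epoch accuracies collected during training):

```python
import matplotlib.pyplot as plt

# epoch_accuracies is an assumed list, e.g. appended to once per epoch in the training loop
plt.plot(range(1, len(epoch_accuracies) + 1), epoch_accuracies, marker="o")
plt.xlabel("Epoch")
plt.ylabel("Test accuracy")
plt.title("Test Accuracy vs. Epochs")
plt.show()
```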
The trained model is saved using:

```python
torch.save(model.state_dict(), "tiny_mlp.pth")
```

To reload for inference, the model is re-instantiated first and then the weights are loaded:

```python
model = TinyMLP()  # same architecture as at training time
model.load_state_dict(torch.load("tiny_mlp.pth"))
model.eval()
```
By completing this project, I learned:
- How backpropagation works in practice
- How PyTorch autograd builds computation graphs
- How to implement training loops manually
- How to evaluate properly with validation data
- How softmax differs from logits
- How to save / load deep learning models
- How confidence calibration works
- How batch inference is structured
- How real-world training pipelines operate
This project reflects my understanding of:
✅ Neural network fundamentals
✅ PyTorch model development
✅ Autograd & gradients
✅ Validation workflow
✅ Inference pipeline
✅ Model confidence analysis
This is my first complete PyTorch model project, and it serves as the foundation for:
- CNNs
- RNNs
- Transformers
- Deployment projects
Author: Rajesh Arigala
License: MIT
