A deep learning project implementing a Convolutional Neural Network (CNN) for handwritten digit classification on the MNIST dataset using PyTorch.
- ✅ CNN-based architecture (Conv + Pool + FC)
- ✅ Dropout for regularization
- ✅ Normalized input pipeline
- ✅ Modular project structure (production-ready)
- ✅ Cross-platform execution (Windows + Linux)
- ✅ ~99.26% test accuracy
- ✅ Visualization of predictions
Architecture Details:
- Input: 1 × 28 × 28
- Conv1: 32 filters, 3×3 → ReLU → MaxPool → 32 × 14 × 14
- Conv2: 64 filters, 3×3 → ReLU → MaxPool → 64 × 7 × 7
- Flatten → 3136
- FC1: 128 neurons → ReLU → Dropout (0.25)
- FC2: 10 output classes
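The architecture above could be sketched in PyTorch as follows. The class and attribute names are illustrative (the actual module lives in `src/models/`); `padding=1` is assumed so that a 3×3 convolution preserves spatial size before pooling, matching the 28 → 14 → 7 progression:

```python
import torch
import torch.nn as nn

class MnistCNN(nn.Module):
    """Illustrative sketch of the CNN described above (names are assumptions)."""

    def __init__(self, dropout: float = 0.25):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # 32 x 28 x 28
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32 x 14 x 14
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # 64 x 14 x 14
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64 x 7 x 7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                                 # 3136 features
            nn.Linear(64 * 7 * 7, 128),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(128, 10),                           # 10 digit classes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = MnistCNN()
out = model(torch.randn(4, 1, 28, 28))  # logits of shape (4, 10)
```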
- Test Accuracy: 99.26%
- Final Training Loss: ~0.026
- Stable convergence within 5 epochs
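A minimal sketch of the training step behind these numbers, using cross-entropy loss. The optimizer choice (Adam) and learning rate are assumptions, not confirmed by this README; the real hyperparameters live in `configs/`, and a stand-in linear model replaces the CNN to keep the sketch short:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in model; the real project trains the CNN from src/models/.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
# Optimizer and lr are assumptions (see configs/ for the real values).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step: forward, loss, backward, update."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Repeatedly fitting one synthetic batch drives the loss down.
x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
losses = [train_step(x, y) for _ in range(5)]
```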
```
mnist-digit-classifier/
│
├── src/
│   ├── data/          # Data loading & preprocessing
│   ├── models/        # CNN architecture
│   ├── training/      # Training loop
│   ├── evaluation/    # Evaluation logic
│   └── utils/         # Visualization
│
├── configs/           # Hyperparameters
├── scripts/           # Run scripts
├── outputs/           # Saved models
├── notebooks/         # Jupyter experiments
```

Clone the repository and install dependencies:

```
git clone <your-repo-url>
cd mnist-digit-classifier
pip install -r requirements.txt
```

Train the model (as a module, via the helper script, or directly):

```
python -m src.training.train
bash scripts/train.sh
python src/training/train.py
```

Evaluate on the test set:

```
python -m src.evaluation.evaluate
```

Output:

```
Test Accuracy: 99.26%
```

The model:
- Correctly classifies handwritten digits
- Generalizes well to different writing styles
- Achieves high confidence predictions
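The accuracy reported above is a simple correct-over-total count on the test set. A hedged sketch of how `src/evaluation/` likely computes it (the function name and signature here are illustrative, not the project's actual API):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def accuracy(model: nn.Module, loader, device: str = "cpu") -> float:
    """Fraction of correctly classified examples over an iterable of batches."""
    model.eval()  # disable dropout for evaluation
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.size(0)
    return correct / total

# Sanity check: an identity "model" whose inputs are already one-hot logits.
logits = torch.eye(10)
labels = torch.arange(10)
acc = accuracy(nn.Identity(), [(logits, labels)])  # 1.0
```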
- Python
- PyTorch
- Torchvision
- Matplotlib
- 📊 Confusion Matrix & Precision/Recall
- 🔥 Grad-CAM visualization
- 📈 Training curves (loss/accuracy plots)
- 🧪 Hyperparameter tuning
- 🌐 Deploy as web app (Streamlit)
- Dataset auto-downloads (no manual setup needed)
- GPU used automatically if available
- Modular design → easy to extend and scale
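The automatic GPU selection noted above is a one-liner in PyTorch; a sketch of the idiom (the variable names are illustrative):

```python
import torch

# Prefer CUDA when a GPU is present, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Model and data are then moved to the chosen device before training.
batch = torch.randn(1, 1, 28, 28, device=device)
```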
- MNIST Dataset
- PyTorch Documentation
Gopal Gupta
If you found this useful, consider giving it a ⭐ on GitHub!

