This project builds and compares multiple neural network architectures for CIFAR-10 image classification using PyTorch. It trains each model, evaluates test performance, and generates visual artifacts so you can quickly see how architecture choices affect accuracy and prediction behavior.
It trains and compares three models:

- `MLP` (fully connected baseline)
- `SimpleCNN` (two convolution blocks)
- `ImprovedCNN` (deeper CNN with dropout)
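As a rough illustration of the middle model, here is a minimal sketch of what a two-block `SimpleCNN` could look like in PyTorch. The exact channel counts, kernel sizes, and head are assumptions for illustration; see `cifar.py` for the actual definitions.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Two convolution blocks followed by a linear classifier (illustrative sizes)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # Block 1: 3x32x32 -> 32x16x16
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            # Block 2: 32x16x16 -> 64x8x8
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))
```

`ImprovedCNN` follows the same pattern with more convolution blocks and dropout layers inserted before the classifier.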
The training script saves performance plots and prediction examples for each model, making it easy to compare behavior and accuracy.
- `cifar.py` - main training and evaluation script
- `requirements.txt` - Python dependencies
- `results_*.png` - training loss and test accuracy plots per model
- `examples_*.png` - one correct and one incorrect prediction example per model
- `results.pdf` - exported report/summary artifact
This project uses the CIFAR-10 dataset from `torchvision.datasets.CIFAR10`.

- Training and test sets are downloaded automatically to `./data` on first run.
- Images are normalized with mean/std `(0.5, 0.5, 0.5)`.
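With mean and std both 0.5 per channel, normalization maps pixel values from `[0, 1]` to `[-1, 1]`. A quick pure-Python check of that arithmetic (torchvision's `transforms.Normalize` applies the same per-channel formula):

```python
def normalize(x: float, mean: float = 0.5, std: float = 0.5) -> float:
    """Per-channel normalization: (x - mean) / std."""
    return (x - mean) / std

# Pixel values in [0, 1] map to [-1, 1]:
print(normalize(0.0))  # -1.0
print(normalize(0.5))  #  0.0
print(normalize(1.0))  #  1.0
```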
Use Python 3.10+ (recommended) and install dependencies:

```
pip install -r requirements.txt
```

Then run the training script:

```
python cifar.py
```

The script will:
- Load CIFAR-10
- Train `MLP`, `SimpleCNN`, and `ImprovedCNN`
- Evaluate each model on the test set
- Save:
  - `results_MLP.png`, `results_SimpleCNN.png`, `results_ImprovedCNN.png`
  - `examples_MLP.png`, `examples_SimpleCNN.png`, `examples_ImprovedCNN.png`
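Test accuracy is the fraction of the 10,000 test images each model classifies correctly. A minimal sketch of that computation (the helper name is hypothetical, not taken from `cifar.py`):

```python
def accuracy(predictions, labels):
    """Fraction of predicted class indices matching the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# 3 of 4 predictions match:
print(accuracy([3, 8, 8, 0], [3, 8, 1, 0]))  # 0.75
```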
- Optimizer: SGD (`lr=0.001`, `momentum=0.9`)
- Loss: cross-entropy
- Maximum epochs: 10
- Early stopping: training halts if the average training loss increases from one epoch to the next
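The early-stop rule above amounts to comparing consecutive per-epoch average losses. A minimal illustration of that check (not the exact code in `cifar.py`):

```python
def should_stop_early(epoch_losses):
    """Stop if the latest average training loss increased over the previous epoch."""
    return len(epoch_losses) >= 2 and epoch_losses[-1] > epoch_losses[-2]

print(should_stop_early([2.1, 1.7, 1.4]))  # False: loss still decreasing
print(should_stop_early([2.1, 1.7, 1.8]))  # True: loss went up, stop training
```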
- `torch`
- `torchvision`
- `matplotlib`