Wadalisa/Classification-Comparison

πŸ§ βš”οΈ CLASSIFICATION COMPARISON QUEST - MAIN QUEST

“Balance the data. Sharpen the model. Farm insight, not just accuracy.”


πŸ—ΊοΈ Quest Overview

This repository contains a classification comparison experiment developed for COS711 Assignment 3. The quest explores how model choice and enhancement strategies affect classification performance, with a focus on understanding why improvements succeed or fail.

Two main builds are explored:

  • βš”οΈ A baseline classifier
  • πŸš€ An enhanced classifier (ResNet-based)

Rather than chasing raw scores, the quest focuses on diagnosing weaknesses, learning from failed upgrades, and identifying clear evolution paths.


βš™οΈ Gear Equipped β€” Tech Stack

  • Python 3
  • NumPy & Pandas — data handling
  • Matplotlib / Seaborn — visual diagnostics
  • Scikit-learn — evaluation metrics
  • PyTorch / Torchvision — deep learning models

📊 Data Preparation — World Setup

  • Dataset loaded and split into train / validation / test sets
  • Labels provided for supervised learning

Basic preprocessing was applied to enable model training and evaluation.
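The world setup above can be sketched as follows. This is a minimal, hypothetical example: the tensors, class count, and 70/15/15 ratio are stand-ins, since the README does not specify the assignment's actual dataset or split sizes.

```python
# Hypothetical sketch: splitting a labelled dataset into train / validation /
# test subsets with a fixed seed, so both builds see identical splits.
import torch
from torch.utils.data import TensorDataset, random_split

# Stand-in tensors; the real assignment dataset would be loaded here instead.
features = torch.randn(1000, 3, 32, 32)   # 1000 images, 3 channels, 32x32
labels = torch.randint(0, 10, (1000,))    # labels for supervised learning
dataset = TensorDataset(features, labels)

# Assumed 70 / 15 / 15 split, seeded for reproducibility across both builds.
generator = torch.Generator().manual_seed(42)
train_set, val_set, test_set = random_split(
    dataset, [700, 150, 150], generator=generator
)

print(len(train_set), len(val_set), len(test_set))  # 700 150 150
```

Seeding the split generator matters here: the comparison is only fair if the baseline and enhanced models train and evaluate on exactly the same examples.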


🧠 Experimental Setup — Strategy Phase

🧩 Builds Compared

  • Baseline Model

    • Simple classifier used as a reference point
  • Enhanced Model

    • Deeper architecture using ResNet
    • Intended to improve feature extraction and generalization

Both builds use the same dataset splits and evaluation metrics to ensure a fair comparison.


🏟️ Classification Results — Battle Arena

Evaluation focuses on:

  • Overall accuracy
  • Confusion matrices
  • Qualitative inspection of class-wise behaviour

The enhanced model shows only marginal improvement, highlighting that architectural upgrades alone are not always sufficient.
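The evaluation step can be sketched with scikit-learn; the label arrays below are illustrative stand-ins, not the assignment's actual predictions.

```python
# Minimal sketch of the arena: overall accuracy plus a confusion matrix.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])  # stand-in ground truth
y_pred = np.array([0, 1, 1, 1, 2, 0, 2, 1])  # stand-in model predictions

acc = accuracy_score(y_true, y_pred)
cm = confusion_matrix(y_true, y_pred)  # rows = true class, cols = predicted

print(f"accuracy = {acc:.3f}")  # accuracy = 0.750
print(cm)
```

Even in this toy case, overall accuracy (0.75) hides that class 0 is right only half the time — exactly the kind of class-wise behaviour the confusion matrix is there to expose.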


🚀 Classification Enhancement — Power-Ups

Enhancement attempts focused on:

  • Increasing model depth
  • Adopting ResNet-style residual architecture ideas

While improvements were observed, gains were limited, suggesting bottlenecks outside the model architecture itself.


🚧 Known Weaknesses & Side Quests

The main quest is complete, but several side quests remain to unlock the build's full potential.

🧩 Side Quest 1: Data Insight & Class Balance

  • Limited Exploratory Data Analysis (EDA)
  • Class imbalance not explicitly addressed

Why it matters:

  • Models may optimize for majority classes
  • Overall accuracy can mask poor minority-class performance
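One standard counter to this failure mode is a class-weighted loss. The sketch below uses inverse-frequency weighting with invented counts; the real weights would come from the training labels.

```python
# Hedged sketch: deriving per-class weights from label frequencies so that
# cross-entropy penalizes minority-class errors more heavily.
import torch
import torch.nn as nn

counts = torch.tensor([900.0, 80.0, 20.0])  # invented, heavily imbalanced counts
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weighting
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 3)           # stand-in model outputs
targets = torch.randint(0, 3, (8,))  # stand-in labels
loss = criterion(logits, targets)

print(weights)  # rarest class gets the largest weight
```

With this weighting, a mistake on the 20-sample class costs roughly 45× more than one on the 900-sample class, so the model can no longer optimize for the majority class alone.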

📉 Side Quest 2: Mediocre Performance Ceiling

  • Classification results plateau early
  • Marginal gains from architectural enhancement

Why it matters:

  • Indicates data-level or loss-function-level limitations

πŸ” Side Quest 3: Result Interpretability

  • Confusion matrices presented in text form
  • Hard to visually diagnose class-wise errors

Why it matters:

  • Visual diagnostics speed up failure analysis

πŸ› οΈ Future Upgrades (Patch Notes β€” v1.1)

Planned upgrades for the next iteration:

  • Add explicit EDA (class distributions, samples per class)

  • Apply class imbalance handling:

    • Class-weighted loss
    • Oversampling / data augmentation
  • Replace text confusion matrices with heatmap visualizations

  • Tune hyperparameters (learning rate, batch size, epochs)

These upgrades target root causes, not just surface-level performance.
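The heatmap upgrade can be sketched with seaborn; the matrix values and class names below are illustrative, and the headless backend is assumed for script (rather than notebook) use.

```python
# Sketch of the planned upgrade: render the confusion matrix as a heatmap
# instead of printing it as text.
import matplotlib
matplotlib.use("Agg")  # headless backend; omit in a notebook
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

cm = np.array([[50, 3, 2],
               [8, 30, 7],
               [1, 4, 45]])  # toy confusion matrix (rows = true class)

ax = sns.heatmap(cm, annot=True, fmt="d", cmap="Blues",
                 xticklabels=["A", "B", "C"], yticklabels=["A", "B", "C"])
ax.set_xlabel("Predicted class")
ax.set_ylabel("True class")
plt.savefig("confusion_matrix.png", bbox_inches="tight")
```

Off-diagonal hot spots (here, B predicted as A or C) jump out at a glance, which is the whole point of replacing the text dump.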


🏁 Quest Status

🧩 Main Quest: Classification Comparison
🎯 Objective: Understand performance bottlenecks
🚀 Outcome: Functional comparison with clear evolution paths


👤 Player Profile

Wadalisa Oratile Molokwe
Honours Student | Network Engineer & System Administrator


GitHub quest log — built for learning, reflection, and long-term evolution.
