AI Model Performance Dashboard is a Streamlit-based analytics project for reviewing and comparing AI model evaluation metrics in an interactive browser interface.
This project is positioned as a recruiter-ready data visualization and machine learning reporting portfolio piece. It allows users to compare model accuracy, precision, recall, and F1 score through a clean dashboard experience with metric cards, charts, and summary tables.
This project maps to practical analytics and machine learning workflows used by:
- Data Analysts
- Machine Learning Engineers
- AI Product Teams
- Technical stakeholders reviewing model quality
- Students building AI and data portfolios
A team may need to answer questions such as:
- Which model performs best overall?
- How do precision and recall differ across models?
- Which metric should be prioritized for a given use case?
- How can model performance be presented clearly to decision-makers?
This dashboard is useful for model review, reporting, portfolio presentation, and lightweight experiment comparison.
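The comparison questions above can be sketched with a small pandas example. The model names and scores below are hypothetical, purely for illustration; F1 is derived from precision and recall as their harmonic mean:

```python
import pandas as pd

# Illustrative evaluation results (hypothetical models and scores).
results = pd.DataFrame(
    {
        "model": ["model_a", "model_b", "model_c"],
        "accuracy": [0.95, 0.88, 0.93],
        "precision": [0.89, 0.92, 0.90],
        "recall": [0.87, 0.81, 0.94],
    }
)

# F1 is the harmonic mean of precision and recall.
results["f1"] = (
    2 * results["precision"] * results["recall"]
    / (results["precision"] + results["recall"])
)

def best_model(df: pd.DataFrame, metric: str) -> str:
    """Return the name of the model with the highest value for `metric`."""
    return df.loc[df[metric].idxmax(), "model"]

print(best_model(results, "accuracy"))  # → model_a
print(best_model(results, "f1"))        # → model_c
```

Note how the "best" model changes with the metric: model_a wins on accuracy while model_c wins on F1, which is exactly the kind of trade-off the dashboard is meant to surface.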
Key features include:

- Interactive Model Filter
- Metric Selection Sidebar
- Metric Summary Cards
- Cross-Model Comparison Chart
- Selected Model Detail Table
- Overall Performance Table
- Accuracy Snapshot Chart
- Key Observation Summary
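A cross-model comparison chart like the one listed above could be sketched with Matplotlib as grouped bars, one group per model and one bar per metric. The model names and scores here are hypothetical placeholders:

```python
import matplotlib
matplotlib.use("Agg")  # render to a file, no display needed
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical models and illustrative scores.
models = ["model_a", "model_b", "model_c"]
metrics = {
    "accuracy": [0.95, 0.88, 0.93],
    "precision": [0.89, 0.92, 0.90],
    "recall": [0.87, 0.81, 0.94],
}

x = np.arange(len(models))  # one group of bars per model
width = 0.25                # bar width within each group

fig, ax = plt.subplots(figsize=(8, 4))
for i, (name, scores) in enumerate(metrics.items()):
    ax.bar(x + i * width, scores, width, label=name)

ax.set_xticks(x + width)
ax.set_xticklabels(models)
ax.set_ylim(0, 1)
ax.set_ylabel("score")
ax.set_title("Cross-model comparison")
ax.legend()
fig.savefig("comparison.png")
```

In the actual dashboard the figure would be handed to Streamlit (e.g. via `st.pyplot(fig)`) rather than saved to disk.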
The dashboard is built with:

- Python
- Streamlit
- Pandas
- Matplotlib
The repository contains the following files:

- Dashboard.py
- requirements.txt
- README.md
- .gitignore
To run the dashboard locally (Windows PowerShell):

python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -r requirements.txt
streamlit run Dashboard.py