An automated teaching assistant for evaluating MLOps course projects using LLM-powered code analysis and GitHub repository scraping.
MLOps Mentor helps teaching assistants evaluate student projects in Machine Learning Operations (MLOps) courses by:
- Scraping GitHub repositories for comprehensive metrics (commits, PRs, code structure, CI/CD status)
- Analyzing code quality, unit testing, and CI/CD practices using LLM judges
- Visualizing results through an interactive leaderboard dashboard
The tool automates the tedious parts of grading while providing detailed, consistent feedback on student submissions.
- Clone the repository:

  ```bash
  git clone https://github.com/rasgaard/mlops-mentor.git
  cd mlops-mentor
  ```

- Install dependencies with uv:

  ```bash
  uv sync
  ```

- Configure environment variables:

  ```bash
  cp .env.template .env
  ```

- Prepare your `group_info.csv` file with student repository URLs:

  ```
  group_nb,student 1,student 2,student 3,student 4,student 5,github_repo
  1, s123456, s654321, , , ,https://github.com/user/repo1
  2, s111111, s222222, s333333, , ,https://github.com/user/repo2
  ```

Evaluate all repositories in `group_info.csv`:

```bash
uv run --env-file .env ./src/mlops_mentor/run.py
```

Each LLM agent evaluates specific aspects:
- Code Quality (1-5 scale):
  - Code structure and organization
  - Python best practices (PEP 8, type hints, docstrings)
  - Readability and maintainability
  - Design patterns and configuration management
- Unit Testing (1-5 scale):
  - Test coverage (unit, integration, E2E)
  - Test quality and assertions
  - Framework usage (pytest, unittest)
  - Mock usage and test isolation
- CI/CD Practices (1-5 scale):
  - Automation setup (GitHub Actions, etc.)
  - Pipeline quality and best practices
  - Testing and deployment automation
  - Documentation and configuration
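The per-aspect verdicts could be collected in a small result type like the sketch below and averaged for the leaderboard. The class and field names (`AspectScore`, `overall_score`) are illustrative, not the tool's actual schema.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class AspectScore:
    """One LLM judge's verdict on a single aspect (1-5 scale)."""
    aspect: str    # e.g. "code_quality", "unit_testing", "ci_cd"
    score: int     # integer in [1, 5]
    feedback: str  # short written justification shown to students

    def __post_init__(self) -> None:
        # Reject out-of-range scores so a misbehaving judge fails loudly.
        if not 1 <= self.score <= 5:
            raise ValueError(f"score must be in 1-5, got {self.score}")

def overall_score(scores: list[AspectScore]) -> float:
    """Aggregate a group's aspect scores into one leaderboard value."""
    return round(mean(s.score for s in scores), 2)
```

For example, judges returning 4 (code quality), 3 (unit testing), and 5 (CI/CD) would yield an overall score of 4.0. Validating the range at construction time keeps bad LLM outputs from silently skewing the leaderboard.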
- Nicki Skafte Detlefsen (nsde@dtu.dk)
- Rasmus Aagaard (roraa@dtu.dk)