Overview
The methodology page (codebenders-dashboard/app/methodology/page.tsx) currently exposes internal implementation details that are not useful (and potentially confusing) to end users like advisors and administrators. We also want to make the scoring math tangible with a concrete example.
Changes Required
1. Remove internal implementation references
Remove the following from the page header badges and the "Data Source" section:
- Badges: Remove the `Script: generate_readiness_scores.py` and `Table: llm_recommendations` badges — these are internal details. Keep `Version: rules_v1`.
- "Data Source" section: Remove the entire section. It exposes internal table names (`student_level_with_predictions`, `llm_recommendations`, `readiness_generation_runs`), the script path, and the re-run command — none of which are meaningful to end users.
2. Add a worked example section
Add a new section titled "Worked Example" (placed after the Scoring Formula section, before FERPA) with two side-by-side examples — one High Readiness and one Low Readiness — to show the full range of outcomes.
Example A — High Readiness: Maria T.
First-generation student, Part-time enrollment
| Input | Value |
| --- | --- |
| GPA (Year 1) | 3.2 |
| Course completion rate | 0.83 |
| Passing rate | 0.78 |
| Gateway courses completed | Math only (1 of 2) |
| Credits earned (Year 1) | 9 |
| Enrollment intensity | Part-time |
| Courses enrolled | 5 |
| Math placement | College-level (C) |
| Retention probability (ML) | 0.72 |
| At-risk alert | MODERATE |
Academic sub-score (avg of 5):
- GPA: 3.2 / 4.0 = 0.80
- Course completion: 0.83
- Passing rate: 0.78
- Gateway: 0.5 + 0.25 = 0.75 (1 gateway done)
- Credit momentum: 9 credits → 0.60
- Academic = (0.80 + 0.83 + 0.78 + 0.75 + 0.60) / 5 = 0.752
Engagement sub-score (avg of 3):
- Enrollment intensity: PT → 0.50
- Courses enrolled: 5 / 10 = 0.50
- Math placement: C → 1.00
- Engagement = (0.50 + 0.50 + 1.00) / 3 = 0.667
ML sub-score (avg of 2):
- Retention probability: 0.72
- At-risk alert: MODERATE → 0.60
- ML = (0.72 + 0.60) / 2 = 0.660
Final score:
(0.752 × 0.40) + (0.667 × 0.30) + (0.660 × 0.30) = 0.301 + 0.200 + 0.198 = 0.699 → High Readiness ✓
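The arithmetic above can be sanity-checked with a short TypeScript sketch. The weights (40/30/30) and sub-score values come straight from the example; the function name `readinessScore` is illustrative, not necessarily what the internal scoring script uses:

```typescript
// Weighted readiness score: academic 40%, engagement 30%, ML 30%.
function readinessScore(
  academic: number[],
  engagement: number[],
  ml: number[],
): number {
  const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  return avg(academic) * 0.4 + avg(engagement) * 0.3 + avg(ml) * 0.3;
}

// Maria T. — sub-score inputs taken from the table above.
const maria = readinessScore(
  [3.2 / 4.0, 0.83, 0.78, 0.75, 0.60], // GPA, completion, passing, gateway, credit momentum
  [0.50, 5 / 10, 1.00],                // enrollment intensity, courses enrolled, math placement
  [0.72, 0.60],                        // retention probability, at-risk alert (MODERATE)
);
console.log(maria.toFixed(3)); // → "0.699"
```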
Example B — Low Readiness: Jordan M.
Remedial-track student, Part-time enrollment
| Input | Value |
| --- | --- |
| GPA (Year 1) | 1.8 |
| Course completion rate | 0.55 |
| Passing rate | 0.50 |
| Gateway courses completed | Neither (0 of 2) |
| Credits earned (Year 1) | 4 |
| Enrollment intensity | Part-time |
| Courses enrolled | 3 |
| Math placement | Remedial (R) |
| Retention probability (ML) | 0.38 |
| At-risk alert | HIGH |
Academic sub-score (avg of 5):
- GPA: 1.8 / 4.0 = 0.45
- Course completion: 0.55
- Passing rate: 0.50
- Gateway: 0.50 (0 gateways done)
- Credit momentum: 4 credits → 0.30 (< 6)
- Academic = (0.45 + 0.55 + 0.50 + 0.50 + 0.30) / 5 = 0.460
Engagement sub-score (avg of 3):
- Enrollment intensity: PT → 0.50
- Courses enrolled: 3 / 10 = 0.30
- Math placement: R → 0.20
- Engagement = (0.50 + 0.30 + 0.20) / 3 = 0.333
ML sub-score (avg of 2):
- Retention probability: 0.38
- At-risk alert: HIGH → 0.30
- ML = (0.38 + 0.30) / 2 = 0.340
Final score:
(0.460 × 0.40) + (0.333 × 0.30) + (0.340 × 0.30) = 0.184 + 0.100 + 0.102 = 0.386 → Low Readiness
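The categorical inputs in both examples map to sub-scores via simple lookups. The authoritative thresholds live in the internal scoring rules; the lookup values below are inferred from the two worked examples only (e.g. MODERATE → 0.60, HIGH → 0.30; placement C → 1.00, R → 0.20; credits below 6 → 0.30) and are illustrative:

```typescript
// Illustrative lookup tables inferred from Examples A and B — not the
// authoritative rules, which live in the internal scoring script.
const alertScore: Record<string, number> = { MODERATE: 0.60, HIGH: 0.30 };
const placementScore: Record<string, number> = { C: 1.00, R: 0.20 }; // college-level / remedial
const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

// Jordan M. — sub-scores rebuilt from the raw inputs in the table above.
const academic = avg([
  1.8 / 4.0,        // GPA normalized to 4.0 scale
  0.55,             // course completion rate
  0.50,             // passing rate
  0.50 + 0 * 0.25,  // gateway: 0.50 base + 0.25 per gateway completed (0 of 2)
  0.30,             // credit momentum: 4 credits (< 6) → 0.30
]);
const engagement = avg([0.50 /* part-time */, 3 / 10, placementScore["R"]]);
const ml = avg([0.38, alertScore["HIGH"]]);

const score = academic * 0.4 + engagement * 0.3 + ml * 0.3;
console.log(score.toFixed(3)); // → "0.386"
```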
Acceptance Criteria
- The rendered page no longer references `generate_readiness_scores.py` or `llm_recommendations`.
Branch
rebranding/task-15-methodology-worked-example → PR into rebranding/bishop-state