GlassBox at the MIT Policy Hackathon, April 2019. Bringing transparency and auditability to recidivism risk assessment and to the parole/pre-trial decision-making process.
Making AI more interpretable and transparent for judges adjudicating criminal cases. Currently available recidivism-assessment tools (e.g. Compas) provide black-box recommendations to judges.
- Scikit-learn
- Numpy
- Python v3.6
- Bootstrap
- HTML 5
- CSS 3
- Software transparency
- Decision-making process auditability
Recidivism risk-assessment software must provide a statistically sound score together with a list of contributing factors, allowing judges to tweak the inputs and observe how the score changes (see the sketch below)
Changes to the original score must be justified by the judge and are subject to audit by the appellate court
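The sketch below illustrates one way such a score-plus-factors output could be produced with the scikit-learn/NumPy stack listed above. The feature names, data, and model here are hypothetical placeholders, not the actual GlassBox pipeline or training data.

```python
# Illustrative sketch only: a linear model whose per-feature contributions can be
# read from its coefficients, yielding a risk score plus the factors driving it.
# Feature names, data, and model are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["prior_convictions", "age_at_first_offense", "employment_status",
            "substance_abuse_history", "months_since_last_offense"]

# Placeholder training data; a real model would be fit on historical case records.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, len(FEATURES)))
y_train = (X_train[:, 0] - 0.5 * X_train[:, 4] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

def score_with_factors(x, top_k=5):
    """Return the recidivism likelihood (%) and the top-K contributing factors."""
    likelihood = 100 * model.predict_proba(x.reshape(1, -1))[0, 1]
    # For a linear model, each feature's contribution is coefficient * feature value.
    contributions = model.coef_[0] * x
    ranked = sorted(zip(FEATURES, contributions), key=lambda p: abs(p[1]), reverse=True)
    return likelihood, ranked[:top_k]

defendant = rng.normal(size=len(FEATURES))
likelihood, factors = score_with_factors(defendant, top_k=3)
print(f"Likelihood of recidivism: {likelihood:.1f}%")
for name, contribution in factors:
    print(f"  {name}: {contribution:+.2f}")
```

With a linear model, the per-factor contributions (plus the intercept) sum exactly to the log-odds behind the displayed score, so the top-K breakdown is faithful to the model rather than a post-hoc approximation.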
- Displays the likelihood of recidivism (in %)
- Offers the choice to pull out the top-K (1 <= K <= 30) factors that contribute to the recidivism likelihood score
- Allows toggling any of the factors to see the resulting score deviation (see the sketch below)
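As a rough illustration of the toggle interaction from the list above, the following self-contained sketch uses hand-picked, hypothetical coefficients (not the trained GlassBox model) to show how overriding one factor produces a score deviation.

```python
# Minimal sketch of the "toggle a factor, see the deviation" interaction.
# Coefficients and feature names are hypothetical, not the trained GlassBox model.
import numpy as np

FEATURES = ["prior_convictions", "age_at_first_offense", "employment_status"]
COEF = np.array([0.8, -0.03, -0.5])  # illustrative weights
INTERCEPT = -1.2

def likelihood_pct(x):
    """Recidivism likelihood (%) from a logistic model: sigmoid(w.x + b)."""
    return 100 / (1 + np.exp(-(COEF @ np.asarray(x, dtype=float) + INTERCEPT)))

def toggle_factor(x, feature_name, new_value):
    """Override one factor and report the score before, after, and the deviation."""
    before = likelihood_pct(x)
    x_after = np.asarray(x, dtype=float).copy()
    x_after[FEATURES.index(feature_name)] = new_value
    after = likelihood_pct(x_after)
    return before, after, after - before

before, after, delta = toggle_factor([3, 19, 0], "employment_status", 1)
print(f"Score moves from {before:.1f}% to {after:.1f}% (deviation {delta:+.1f} points)")
```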
- We won the 'Artificial Intelligence' category prize at the MIT Policy Hackathon in April 2019
- We were one of the top 5 teams competing for the 1st prize!
- Training GlassBox to surpass Compas's accuracy
- Providing statistics and benchmarks to judges (individual trajectory, benchmarks at the state and national level)