A library to train, evaluate, interpret, and productionize decision forest models such as Random Forest and Gradient Boosted Decision Trees.
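To make the train/evaluate/interpret workflow concrete, here is a minimal pure-Python sketch of a bagged decision-forest classifier — bootstrap-sampled decision stumps with majority voting, plus a simple feature-usage count as a stand-in for interpretability. This is an illustrative toy, not the library's actual API; all function names here are made up for the example.

```python
import random

def train_stump(X, y):
    """Exhaustively pick the single-feature threshold split with the
    best training accuracy. Returns (feature, threshold, left, right)."""
    best = None
    for f in range(len(X[0])):
        for t in sorted(set(row[f] for row in X)):
            for left, right in ((0, 1), (1, 0)):
                preds = [left if row[f] <= t else right for row in X]
                acc = sum(p == yy for p, yy in zip(preds, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, f, t, left, right)
    return best[1:]

def predict_stump(stump, row):
    f, t, left, right = stump
    return left if row[f] <= t else right

def train_forest(X, y, n_trees=25, seed=0):
    """Bagging: each stump is trained on a bootstrap resample of the data."""
    rng = random.Random(seed)
    n = len(X)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]
        forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def predict_forest(forest, row):
    votes = [predict_stump(s, row) for s in forest]
    return max(set(votes), key=votes.count)  # majority vote

def feature_usage(forest):
    """Crude interpretability signal: how often each feature is split on."""
    counts = {}
    for f, *_ in forest:
        counts[f] = counts.get(f, 0) + 1
    return counts
```

Real decision-forest libraries train full trees with randomized feature subsets and offer far richer inspection tools, but the structure — bootstrap, fit many weak trees, aggregate by vote — is the same.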
Fit interpretable models. Explain blackbox machine learning.
Interpretable Error Function learning
📜 [NeurIPS 2022] "Symbolic Distillation for Learned TCP Congestion Control", S P Sharan, Wenqing Zheng, Kuo-Feng Hsu, Jiarong Xing, Ang Chen, Zhangyang Wang
Counterfactual explanations for XGBoost and tree ensemble models (counterfactual reasoning, model interpretability)
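The idea behind counterfactual explanations is to answer "what is the smallest change to this input that would flip the model's prediction?" A minimal sketch, assuming a black-box `predict` function and a discrete candidate grid per feature (both names are illustrative, not from any particular library):

```python
def counterfactual(predict, x, candidates, target):
    """Brute-force search over single-feature edits: return the cheapest
    change (by absolute distance) that makes predict(x') == target,
    as (cost, feature_index, new_value), or None if no edit works."""
    best = None
    for i, vals in enumerate(candidates):
        for v in vals:
            if v == x[i]:
                continue
            x_prime = list(x)
            x_prime[i] = v
            if predict(x_prime) == target:
                cost = abs(v - x[i])
                if best is None or cost < best[0]:
                    best = (cost, i, v)
    return best

# Toy "loan approval" model: approve (1) when the feature sum reaches 3.
approve = lambda x: 1 if x[0] + x[1] >= 3 else 0
```

Production tools for tree ensembles exploit the tree structure to search the space of leaves rather than enumerating feature grids, but the contract is the same: a nearby input with the desired prediction.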
General-purpose library for extracting interpretable models from Multi-Agent Reinforcement Learning systems
Implementation of "Toward an Interpretable Alzheimer's Disease Diagnostic Model with Regional Abnormality Representation via Deep Learning"