Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges, Cynthia Rudin+, N/A, arXiv'21
URL
Affiliations
Abstract

Interpretability in machine learning (ML) is crucial for high stakes decisions and troubleshooting. In this work, we provide fundamental principles for interpretable ML, and dispel common misunderstandings that dilute the importance of this crucial topic. We also identify 10 technical challenge areas in interpretable machine learning and provide history and background on each problem. Some of these problems are classically important, and some are recent problems that have arisen in the last few years. These problems are: (1) Optimizing sparse logical models such as decision trees; (2) Optimization of scoring systems; (3) Placing constraints into generalized additive models to encourage sparsity and better interpretability; (4) Modern case-based reasoning, including neural networks and matching for causal inference; (5) Complete supervised disentanglement of neural networks; (6) Complete or even partial unsupervised disentanglement of neural networks; (7) Dimensionality reduction for data visualization; (8) Machine learning models that can incorporate physics and other generative or causal constraints; (9) Characterization of the "Rashomon set" of good models; and (10) Interpretable reinforcement learning. This survey is suitable as a starting point for statisticians and computer scientists interested in working in interpretable machine learning.
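As a concrete illustration of challenge (2): a scoring system is a sparse linear model whose coefficients are small integer "points", so a prediction can be made by hand. The sketch below is only a naive rounding baseline on synthetic data (the dataset, feature names, and threshold are hypothetical), not the exact integer-optimization methods the paper surveys (e.g., RiskSLIM):

```python
# Minimal sketch of a scoring system: sparse logistic regression
# with coefficients rounded to integer points. Synthetic data;
# feature names are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 4))  # four binary risk factors
y = (X @ np.array([2, 1, 0, -1]) + rng.normal(0, 1, 200) > 1).astype(int)

# L1-penalized fit for sparsity, then round coefficients to points.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
points = np.round(clf.coef_[0]).astype(int)

for name, p in zip(["factor_a", "factor_b", "factor_c", "factor_d"], points):
    print(f"{name:>8s}: {p:+d} points")
print(f"intercept (offset before thresholding): {clf.intercept_[0]:+.2f}")
```

The point of the challenge is that post-hoc rounding like this can land far from the best integer-coefficient model, which is why the paper treats exact optimization of scoring systems as an open problem.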
Translation (by gpt-3.5-turbo)
Summary (by gpt-3.5-turbo)