3. Interpretable Machine Learning
A basic way to create an interpretable machine learning model is to use a subset of fundamental algorithms together with model-agnostic methods. Linear regression, logistic regression, and the decision tree are commonly used interpretable models, while variable importance, partial dependence plots, and individual conditional expectation plots are basic model-agnostic methods.
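To make the model-agnostic idea concrete, here is a minimal sketch of a partial dependence computation, written from scratch rather than with any particular library: fix the feature of interest to each grid value in turn, replace it in every observation, and average the model's predictions. The `partial_dependence` function, the toy `model`, and the tiny dataset are all hypothetical illustrations, not code from this book.

```python
# Minimal, hand-rolled partial dependence sketch (hypothetical example).
# The idea: fix one feature to a grid value in every row, then average
# the model's predictions over the data. Repeating this across the grid
# traces out the feature's average effect on the prediction.

def partial_dependence(predict, data, feature_idx, grid):
    """Return the average prediction at each grid value of one feature."""
    pd_values = []
    for g in grid:
        total = 0.0
        for row in data:
            modified = list(row)
            modified[feature_idx] = g  # fix the feature of interest
            total += predict(modified)
        pd_values.append(total / len(data))
    return pd_values

# Toy linear model, fully interpretable by design:
# the partial dependence of x0 should have slope 3.
def model(x):
    return 3.0 * x[0] + 2.0 * x[1]

data = [(0.0, 1.0), (1.0, 0.0), (2.0, 2.0)]
grid = [0.0, 1.0, 2.0]
pdp = partial_dependence(model, data, 0, grid)
# Successive differences of pdp recover the coefficient of x0.
```

Because the toy model is linear, the partial dependence curve is a straight line whose slope equals the feature's coefficient; for a real black-box model the same procedure reveals average effects that are not otherwise visible.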
In the following chapters, we will discuss these models. We try not to go into too much detail because there are already many books, videos, tutorials, papers, and other materials available. What we will focus on is how to interpret the models and, in addition, how to evaluate them with the evaluation system we are structuring.
There are good books that discuss linear regression, logistic regression, other extensions of linear regression, decision trees, decision rules, and the RuleFit algorithm in more detail, and that also cover other interpretable models. They likewise provide introductions to model-agnostic methods such as the Partial Dependence Plot (PDP), Individual Conditional Expectation (ICE), the Accumulated Local Effects (ALE) plot, Local Surrogate models (LIME), Shapley values, SHAP (SHapley Additive exPlanations), and more.
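As a companion to the partial dependence idea, individual conditional expectation (ICE) can be sketched the same way: instead of averaging over the data, keep one prediction curve per instance, so heterogeneous effects stay visible. The `ice_curves` function and the toy linear model below are hypothetical illustrations under the same assumptions as before, not code from any specific library.

```python
# Minimal ICE sketch (hypothetical example): one curve per instance,
# obtained by varying a single feature over a grid while keeping that
# instance's other feature values fixed. The pointwise mean of the ICE
# curves is exactly the partial dependence curve.

def ice_curves(predict, data, feature_idx, grid):
    """Return one prediction curve per instance over the feature grid."""
    curves = []
    for row in data:
        curve = []
        for g in grid:
            modified = list(row)
            modified[feature_idx] = g  # vary only the feature of interest
            curve.append(predict(modified))
        curves.append(curve)
    return curves

# Same toy linear model as the PDP sketch: effect of x0 has slope 3.
def model(x):
    return 3.0 * x[0] + 2.0 * x[1]

data = [(0.0, 1.0), (1.0, 0.0), (2.0, 2.0)]
grid = [0.0, 1.0, 2.0]
curves = ice_curves(model, data, 0, grid)
# Averaging the curves pointwise reproduces the partial dependence plot.
mean_curve = [sum(c[i] for c in curves) / len(curves) for i in range(len(grid))]
```

For this linear toy model all ICE curves are parallel lines; on a real model, crossing or diverging curves indicate interactions that the averaged PDP would hide.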
NOTE: This book focuses only on specific interpretable models and model-agnostic methods, and discusses the evaluation of those models and methods.