3. Interpretable Machine Learning

The most direct way to build interpretable machine learning is to combine a small set of fundamental algorithms with model-agnostic methods. Linear regression, logistic regression, and decision trees are among the most commonly used interpretable models, while variable importance, partial dependence plots, and individual conditional expectation plots are the basic model-agnostic methods.
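To make this concrete, here is a minimal sketch of both halves of that toolbox: an interpretable model whose coefficients can be read directly, and permutation-based variable importance as a model-agnostic check. The synthetic data, feature setup, and scikit-learn usage below are illustrative assumptions, not material from this book.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.inspection import permutation_importance

# Illustrative synthetic data: y depends strongly on x0, weakly on x1, not on x2.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = LinearRegression().fit(X, y)

# Interpretable model: each coefficient is the change in y per unit change
# in that feature, holding the other features fixed.
print("coefficients:", model.coef_)   # roughly [3.0, 0.5, 0.0]
print("intercept:", model.intercept_)

# Model-agnostic method: permutation importance measures how much the model's
# score drops when a single feature's values are shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, mean in enumerate(result.importances_mean):
    print(f"feature x{i}: importance {mean:.3f}")
```

Reading the output side by side shows why the combination is useful: the coefficients give a direct, model-specific explanation, while permutation importance would give the same kind of ranking for any black-box model.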

In the following chapters, we discuss these models and methods. We will not go into too much detail, because plenty of books, videos, tutorials, and papers already cover them. Instead, we focus on how to interpret the models and, beyond that, how to assess them with the evaluation system we are building.

There is a good book that discusses linear regression, logistic regression, other extensions of linear regression, decision trees, decision rules, and the RuleFit algorithm in more detail. It also covers other interpretable models, as well as introductions to model-agnostic methods such as the Partial Dependence Plot (PDP), Individual Conditional Expectation (ICE), Accumulated Local Effects (ALE) plots, Local Surrogate (LIME), Shapley values, SHAP (SHapley Additive exPlanations), and more.
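For the PDP and ICE methods in that list, scikit-learn ships a ready-made display that draws both at once. The sketch below is again an assumption-laden illustration on synthetic data, not an example from the book above.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Illustrative synthetic data with a nonlinear effect of x0 and a linear one of x1.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
y = np.sin(2 * X[:, 0]) + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="both" overlays the individual conditional expectation (ICE) curves
# with their average, which is the partial dependence (PDP) curve.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1], kind="both")
plt.show()
```

The resulting plot recovers the sine-shaped effect of the first feature and the linear effect of the second, even though the underlying model is a black-box ensemble.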

NOTE: This book focuses only on specific interpretable models and model-agnostic methods, and discusses the evaluation of those models and methods.
