4.2.1 Interpretable Models (Model-Specific)
Evaluation System for Interpretable Models
The easiest way to achieve interpretability is to restrict yourself to the subset of algorithms that create inherently interpretable models. Evaluating these algorithms is therefore fundamental to a comprehensive evaluation system.
This criterion is covered by two lists, each from a different perspective: one describes the explanation methods an algorithm supports, and the other describes the properties of those methods.
- Explanation Methods
- Method Properties
Each of these categories is further broken down into individual metrics, as shown below:
| Explanation Methods | Method Properties |
| --- | --- |
| Transparent | Linearity |
| Parametric | Monotonicity |
| Algorithmic complexity | Interaction |
4.2.1.1 Explanation Methods
We want to explain the predictions of a machine learning model. To achieve this, we rely on some explanation method, which is an algorithm that generates explanations. An explanation usually relates the feature values of an instance to its model prediction in a humanly understandable way. Other types of explanations consist of a set of data instances (for example, in the case of the k-nearest neighbor model).
Definitions:
Transparent: A model is transparent if we can explain its inner workings based on how it is trained.
Parametric: A model is parametric if its results can be explained through its learned internal parameters.
Algorithmic Complexity: The time complexity of the learning algorithm. For example, for GLMs, if N is the number of observations (usually the number of rows) and p is the number of variables (usually the number of columns), most standard fitting procedures cost on the order of O(Np^2 + p^3) (per iteration, for iterative solvers): forming X^T X costs O(Np^2) and solving the resulting p x p system costs O(p^3), as the sketch below illustrates.
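To make these cost terms concrete, here is a minimal sketch (not from the original text, using synthetic data) that fits an ordinary least squares model via the normal equations; the comments mark where the O(Np^2) and O(p^3) terms arise, and the fitted coefficients double as the kind of parametric, transparent explanation defined above.

```python
import numpy as np

# Synthetic data, purely for illustration.
rng = np.random.default_rng(0)
N, p = 1_000, 5                        # N observations, p features
X = rng.normal(size=(N, p))
true_beta = np.arange(1, p + 1)
y = X @ true_beta + rng.normal(scale=0.1, size=N)

XtX = X.T @ X                          # forming X^T X costs O(N p^2)
Xty = X.T @ y                          # O(N p)
beta_hat = np.linalg.solve(XtX, Xty)   # solving the p x p system costs O(p^3)

# The fitted coefficients are a parametric, transparent explanation:
# each entry is the modeled effect of one feature on the target.
print(np.round(beta_hat, 2))
```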
4.2.1.2 Method Properties
Method properties describe how a model relates its features to the target.
Linearity: A model is linear if the association between features and the target is modeled linearly.
Monotonicity: A model with monotonicity constraints ensures that the relationship between a feature and the target outcome always goes in the same direction over the entire range of the feature: An increase in the feature value either always leads to an increase or always to a decrease in the target outcome. Monotonicity is useful for the interpretation of a model because it makes it easier to understand a relationship.
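Some libraries let you enforce monotonicity directly as a constraint. Below is a minimal sketch, assuming scikit-learn 0.23 or later, which exposes a monotonic_cst parameter on its histogram-based gradient boosting estimators; the data here is synthetic and only illustrative.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

# Synthetic data: the target increases with feature 0; feature 1 is noise.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = 3 * X[:, 0] + rng.normal(scale=0.3, size=500)

# monotonic_cst: 1 = increasing, -1 = decreasing, 0 = unconstrained, per feature.
model = HistGradientBoostingRegressor(monotonic_cst=[1, 0], random_state=0)
model.fit(X, y)

# Predictions along feature 0 (feature 1 held fixed) are guaranteed to be
# non-decreasing, which makes the learned relationship easier to read.
grid = np.column_stack([np.linspace(0.0, 1.0, 20), np.full(20, 0.5)])
print(bool(np.all(np.diff(model.predict(grid)) >= 0)))  # True
```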
Interaction: Some models can automatically include interactions between features to predict the target outcome. You can include interactions in any type of model by manually creating interaction features. Interactions can improve predictive performance, but too many or too complex interactions can hurt interpretability.
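A minimal sketch of the "manually creating interaction features" idea, using synthetic data and scikit-learn's LinearRegression; the particular feature construction is an illustrative assumption, not a prescribed recipe.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data in which the target depends on the product of the two features.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = X[:, 0] * X[:, 1] + rng.normal(scale=0.05, size=500)

# Manually add the interaction term x1 * x2 as a third column.
X_inter = np.column_stack([X, X[:, 0] * X[:, 1]])

plain = LinearRegression().fit(X, y)
crossed = LinearRegression().fit(X_inter, y)

# The interaction column captures structure the purely additive model misses.
print(round(plain.score(X, y), 3), round(crossed.score(X_inter, y), 3))
```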
Some models handle only regression, some only classification, and still others both.
From the following table, you can select a suitable interpretable model for your task, either regression (regress) or classification (class):
| Algorithm | Linear | Monotone | Interaction | Task |
| --- | --- | --- | --- | --- |
| Linear regression | Yes | Yes | No | regress |
| Logistic regression | No | Yes | No | class |
| Decision trees | No | Some | Yes | class, regress |
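As an illustrative sketch of how the Task column can map onto concrete estimators (the class names are scikit-learn's; the mapping itself is just an assumed example, not part of the original text):

```python
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

# Candidate interpretable models per task, following the table above.
interpretable_models = {
    "regress": [LinearRegression(), DecisionTreeRegressor(max_depth=3)],
    "class": [LogisticRegression(), DecisionTreeClassifier(max_depth=3)],
}

for task, models in interpretable_models.items():
    print(task, [type(m).__name__ for m in models])
```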