3.1 Definition

An interpretable model helps you understand which factors are (and are not) included in the model, and lets you account for the context of the problem when taking actions based on its predictions. Interpretability also supports generalization and performance: a highly interpretable model typically generalizes better.

Figure: Flow for an interpretable model (Ref. https://christophm.github.io/interpretable-ml-book/agnostic.html)
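
To make the definition concrete, the sketch below is a minimal example (assuming scikit-learn and its built-in diabetes dataset, which are not part of this document) of an intrinsically interpretable model: a linear regression exposes the factors it relies on through its coefficients, so you can inspect which features drive its predictions and which contribute little.

```python
# Minimal sketch: inspecting the factors a linear model actually uses.
# Assumes scikit-learn is installed; the dataset and model choice are
# illustrative, not prescribed by this document.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

# Load a small tabular dataset as a DataFrame so feature names are available.
X, y = load_diabetes(return_X_y=True, as_frame=True)

# Fit an intrinsically interpretable model.
model = LinearRegression().fit(X, y)

# Each coefficient describes how the prediction changes when the
# corresponding feature increases by one unit, making the model's
# reasoning directly readable.
for name, coef in zip(X.columns, model.coef_):
    print(f"{name:>6}: {coef:+.2f}")
```

Reading the printed coefficients is one simple way to "understand the factors included in the model"; later sections cover model-agnostic methods that provide similar insight for models without such built-in transparency.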