4. Evaluation System of Interpretable Machine Learning

4.1 Definition

Several factors make the evaluation of interpretable machine learning a difficult topic. First, some of the models that need to be interpreted remain black boxes. Second, the demand from other domains for the ability to question, understand, and trust machine learning systems has increased significantly. Finally, evaluating interpretable machine learning models is an inherently subjective task, which makes it hard to define the most suitable metrics for assessing the quality of an explanation.
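To make the metric problem concrete, here is a minimal sketch of one commonly used proxy for explanation quality: fidelity, i.e. how closely an interpretable surrogate model reproduces a black box's predictions. The model choices and the name `fidelity` are illustrative assumptions, not part of the taxonomy this book builds on.

```python
# Sketch of a fidelity metric for a surrogate explanation (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# The black-box model we want to explain.
black_box = RandomForestClassifier(random_state=0).fit(X, y)
bb_pred = black_box.predict(X)

# An interpretable surrogate trained to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_pred)

# Fidelity: fraction of inputs on which the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == bb_pred).mean()
```

A high fidelity score only tells us the surrogate imitates the black box well; it says nothing about whether the resulting explanation is actually useful to a human, which is exactly why evaluation remains subjective.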

The evaluation system we created in this book is based on the logic we extracted from a well-accepted evaluation taxonomy. More work is needed in the future to strengthen the fairness and comprehensiveness of its structure.

