4.1 Definition
Evaluating interpretable machine learning is difficult for several reasons. First, many of the models that need to be interpreted remain black boxes. Second, demand from other domains for the ability to question, understand, and trust machine learning systems has increased significantly. Finally, evaluation is inherently subjective: it is hard to identify the most suitable metrics for assessing the quality of an explanation.
The evaluation system presented in this book is based on the logic we extracted from a well-accepted evaluation taxonomy. More work will be needed in the future to improve the fairness and comprehensiveness of this structure.