5.1 Why & How Does the User Customize Their Model?
The evaluation of interpretable machine learning is, and will always be, a comparatively subjective topic. From the type of dataset to the background knowledge of the domain the target value represents and the target community, all of these factors allow a great deal of flexibility. This is one reason why evaluating interpretability and explainability can be such a difficult problem.
The question therefore becomes: what makes a good interpretable notebook, and how does one create a well-interpreted machine learning model? With the use of an evaluation system, however, this question can be answered to some extent. Once the evaluation system is defined by a set of criteria and features, a good interpretable model can be built by fulfilling some of those particular features.
To simplify this question, we take advantage of the evaluation system we generated. Each keyword in the system can be viewed as a distinct feature of an interpretable machine learning model. For both the Model-Specific methods table and the Model-Agnostic methods table, we derive corresponding score tables that record the score of each method in each domain.
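To make the structure concrete, a score table can be represented as a mapping from each method to its per-keyword scores. This is only a sketch: the method names, keywords, and score values below are hypothetical placeholders, not the actual entries of the evaluation tables in this report.

```python
# Hypothetical score tables: method -> {keyword/feature: score}.
# All names and values are illustrative placeholders, not the
# actual scores from the Model-Specific / Model-Agnostic tables.
model_specific_scores = {
    "Decision Tree": {"fidelity": 5, "simplicity": 4, "stability": 3},
    "Linear Model":  {"fidelity": 4, "simplicity": 5, "stability": 4},
}

model_agnostic_scores = {
    "LIME": {"fidelity": 3, "simplicity": 4, "stability": 2},
    "SHAP": {"fidelity": 5, "simplicity": 3, "stability": 4},
}
```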
We provide selection for both the Model-Specific methods and the Model-Agnostic methods. Based on the individual features in the evaluation table, users can select the specific features they would like to achieve. From the users' preferred features and the score tables, we then generate two lists of suggested methods: one for Model-Specific methods and one for Model-Agnostic methods.
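Below is a minimal sketch of this selection step, assuming the score-table representation above. Each method is ranked by the sum of its scores on the user's preferred features, and the top candidates form the suggested list. The function name `suggest_methods` and the simple-summation ranking rule are assumptions for illustration, not the report's exact procedure.

```python
def suggest_methods(score_table, preferred_features, top_k=3):
    """Rank methods by their total score on the user's preferred features."""
    ranked = sorted(
        score_table.items(),
        key=lambda item: sum(item[1].get(f, 0) for f in preferred_features),
        reverse=True,
    )
    return [method for method, _ in ranked[:top_k]]

# One suggestion list per table: Model-Specific and Model-Agnostic.
preferred = ["fidelity", "stability"]
specific_suggestions = suggest_methods(model_specific_scores, preferred)
agnostic_suggestions = suggest_methods(model_agnostic_scores, preferred)
print(specific_suggestions)
print(agnostic_suggestions)
```

Running both tables through the same ranking keeps the two suggestion lists directly comparable, since they are scored over the same set of user-selected keywords.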