2 Comments

Excellent writing, Enrico, thank you!

One comment: trees and rules look interpretable by design, but they are NOT understandable in many real-world applications. The reasons are the depth of the trees, the number of distinct variables involved in them, and the mismatch between users' high-level concepts and the low-level features used in the explanations. We published a couple of papers on this matter recently:

- Re-interpreting Rules Interpretability, https://doi.org/10.1007/s41060-023-00398-5

- Visual Analytics for Human-Centered Machine Learning, https://doi.org/10.1109/MCG.2021.3130314

Let's discuss at IEEE VIS this year!

Best wishes,

Gennady

Author's reply:

I could not agree more! My point there is that rules and trees can be used as "surrogates" to describe existing models. We worked on a couple of papers that focus on this model surrogate idea, and I completely agree that they quickly become very hard to interpret. I'll read your papers to learn more about the subject. Thanks for sharing!
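(For readers following along, here is a minimal sketch of the surrogate idea, assuming scikit-learn and using a random forest as a stand-in for the black-box model; it is only an illustration, not the approach from the papers above.)

```python
# Minimal surrogate sketch: fit a shallow decision tree to mimic a
# black-box model's predictions (a random forest stands in for the black box).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

data = load_breast_cancer()
X, y = data.data, data.target

# The "black-box" model we want to describe.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate: a shallow tree trained on the black box's *predictions*,
# not on the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate reproduces the black box's behavior.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

Increasing max_depth raises fidelity but quickly makes the printed tree unreadable, which is exactly the point you raise.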

My next few posts will dive deeper into each of the three classes, focusing more on the visualization and interpretability aspects. There will be one on rules and trees, and your material will probably be very useful there.

Let's discuss at VIS!
