Timeline for Reasons that LIME and SHAP might not agree with intuition
Current License: CC BY-SA 4.0
6 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Jul 13, 2021 at 12:29 | comment added | Björn | | Sure, that's also a possibility. Some of my options are particular variations of wrong models that are particularly common, but all sorts of wrong models could lead to weird explanations (and predictions). |
| Jul 13, 2021 at 11:20 | comment added | user3494047 | | I think one thing that is missed in this answer (unless I misunderstood) is that perhaps the model that LIME and SHAP are trying to explain is just plain wrong. LIME and SHAP try to have features that make sense with the model, but perhaps the model is wrong. |
| Jan 21, 2020 at 19:40 | comment added | AmeySMahajan | | Thanks @Björn. I'm marking your answer as accepted, as I believe it covers the most relevant points here. Cheers |
| Jan 21, 2020 at 19:38 | vote accepted | AmeySMahajan | | |
| Jan 21, 2020 at 12:08 | comment added | usεr11852 | | +1 because of mentioning Caruana, an absolute ML unit. (And obviously because it is a good answer!) |
| Jan 21, 2020 at 11:47 | history: answered | Björn | CC BY-SA 4.0 | |