4 Significant Limitations of SHAP
- Published 9 Apr 2023
- SHAP is the most powerful Python package for understanding and debugging your machine learning models. Yet, it still has its limitations. Understanding these is critical to avoid incorrect conclusions when using the package. We explore the 4 most significant limitations of SHAP: issues with the package, feature dependencies, causal inference and human error.
*NOTE*: You will now get the XAI course for free if you sign up (not the SHAP course)
SHAP course: adataodyssey.com/courses/shap-with-python/
XAI course: adataodyssey.com/courses/xai-with-python/
Newsletter signup: mailchi.mp/40909011987b/signup
Read the companion article (no-paywall link): towardsdatascience.com/the-li...
Medium: / conorosullyds
Twitter: / conorosullyds
Mastodon: sigmoid.social/@conorosully
Website: adataodyssey.com/
I really enjoyed such a deep discussion about the clear distinction between correlation and causation!
Thanks Sarem! A very important concept when it comes to XAI. I am definitely guilty of jumping to causality conclusions without enough evidence.
Great video series. Don’t stop making them. Maybe take another app/tool/methodology and break it into parts like you did with SHAP. Very digestible.
Thanks! Planning some more videos soon
Amazing work, Conor! Keep them coming. These 6 mins have helped clarify so many topics!
Great to hear! I’m glad I could help.
best youtuber explaining SHAP I have found!
Thank you! I am here to help :)
Amazing video. Thank you so much.
I have one question, please: when explaining KernelSHAP, what do you mean by permuting values? And what do the grey circles in the graph at 2:28 represent? Does permuting refer to changing the order of the features (this is not clear in the graph at 2:28), or to replacing some feature values with random values?
Thanks in advance for your response.
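For anyone with the same question: in KernelSHAP, "permuting" does not mean reordering the features. It means that, for a given coalition, the features outside the coalition (the grey circles) have their values replaced with values drawn from a background dataset, while the coalition features keep the values of the instance being explained. A minimal numpy sketch of this idea (the variable names are my own, not from the video):

```python
import numpy as np

rng = np.random.default_rng(0)

# Background data: 100 samples, 3 features
background = rng.normal(size=(100, 3))

# The instance we want to explain
x = np.array([1.5, -2.0, 0.7])

# A coalition mask: True = feature kept from x,
# False = feature "permuted" (replaced with background values)
coalition = np.array([True, False, True])

# Build the perturbed samples: kept features come from x,
# excluded features keep their background values
perturbed = background.copy()
perturbed[:, coalition] = x[coalition]

# Every row keeps features 0 and 2 from x, while feature 1
# varies over the background distribution
assert np.all(perturbed[:, 0] == 1.5)
assert np.all(perturbed[:, 2] == 0.7)
```

The model is then evaluated on these perturbed samples, and the average prediction is used as the value of that coalition when solving for the SHAP values.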
Great video man. Thank you very much.
I’m glad you enjoyed it Aziz!
Good explanation of the topic, thank you sir
Thank you Jenil!
AMAZING WORK!
I really appreciate that!
Great explanation!
Thank you :)
Great video. You mentioned that KernelSHAP suffers from extrapolation when features are correlated, like other permutation-based methods. What about TreeSHAP with, e.g., XGBoost?
Hi Sasa, this is a great question. To be honest, I don't completely understand the TreeSHAP algorithm.
Looking into some other literature, it seems that TreeSHAP is not affected by correlations in the same way as KernelSHAP: "KernelSHAP ignores feature dependence. ... TreeSHAP solves this problem by explicitly modeling the conditional expected prediction." They go on to say: "While TreeSHAP solves the problem of extrapolating to unlikely data points, it does so by changing the value function and therefore slightly changes the game. TreeSHAP changes the value function by relying on the conditional expected prediction. With the change in the value function, features that have no influence on the prediction can get a TreeSHAP value different from zero." You can read more here: christophm.github.io/interpretable-ml-book/shap.html
@adataodyssey great, thanks for the answer
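The extrapolation issue discussed in this thread can be demonstrated with plain numpy. When KernelSHAP replaces one of two correlated features with independently sampled background values, many of the resulting points are combinations that never occur in the data, so the model is evaluated where it was never trained. A small sketch, with illustrative feature names (not from the video):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two strongly correlated features, e.g. house size (m^2) and room count:
# rooms is roughly size / 25 plus a little noise
size = rng.normal(100, 20, size=1000)
rooms = size / 25 + rng.normal(0, 0.2, size=1000)
X = np.column_stack([size, rooms])

# Marginal (KernelSHAP-style) perturbation: explain a large house by
# replacing its 'rooms' value with rooms sampled independently from X
x = np.array([150.0, 6.0])  # a large house with 6 rooms
perturbed_rooms = rng.choice(X[:, 1], size=500)
perturbed = np.column_stack([np.full(500, x[0]), perturbed_rooms])

# Count perturbed points far from the size/rooms relationship in the data,
# i.e. unrealistic inputs like a 150 m^2 house with ~4 rooms
frac_unlikely = np.mean(np.abs(perturbed[:, 1] - perturbed[:, 0] / 25) > 1)
print(f"fraction of unlikely perturbed points: {frac_unlikely:.2f}")
```

Most perturbed points fall well off the joint distribution, which is exactly the extrapolation problem: the SHAP values end up depending on model predictions in regions with no training data.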