4 Significant Limitations of SHAP

  • Published Apr 9, 2023
  • SHAP is the most powerful Python package for understanding and debugging your machine learning models. Yet, it still has its limitations. Understanding these is critical to avoid incorrect conclusions when using the package. We explore the 4 most significant limitations of SHAP: issues with the package, feature dependencies, causal inference and human error.
    *NOTE*: You will now get the XAI course for free if you sign up (not the SHAP course)
    SHAP course: adataodyssey.com/courses/shap...
    XAI course: adataodyssey.com/courses/xai-...
    Newsletter signup: mailchi.mp/40909011987b/signup
    Read the companion article (no-paywall link): towardsdatascience.com/the-li...
    Medium: / conorosullyds
    Twitter: / conorosullyds
    Mastodon: sigmoid.social/@conorosully
    Website: adataodyssey.com/

COMMENTS • 21

  • @adataodyssey
    @adataodyssey  3 months ago

    *NOTE*: You will now get the XAI course for free if you sign up (not the SHAP course)
    SHAP course: adataodyssey.com/courses/shap-with-python/
    XAI course: adataodyssey.com/courses/xai-with-python/
    Newsletter signup: mailchi.mp/40909011987b/signup

  • @saremish
    @saremish 6 months ago +2

    I really enjoyed such a deep discussion about the clear distinction between correlation and causation!

    • @adataodyssey
      @adataodyssey  6 months ago +1

      Thanks Sarem! A very important concept when it comes to XAI. I am definitely guilty of jumping to causality conclusions without enough evidence.

  • @Hoxle-87
    @Hoxle-87 11 months ago +6

    Great video series. Don’t stop making them. Maybe take another app/tool/methodology and break it into parts like you did with SHAP. Very digestible.

    • @adataodyssey
      @adataodyssey  11 months ago

      Thanks! Planning some more videos soon

  • @shrishchandrapandey801
    @shrishchandrapandey801 6 months ago

    Amazing work, Conor! Keep them coming. These 6 mins have helped clarify so many topics!

    • @adataodyssey
      @adataodyssey  6 months ago

      Great to hear! I’m glad I could help.

  • @yijunfu6808
    @yijunfu6808 11 months ago

    Best YouTuber explaining SHAP I have found!

    • @adataodyssey
      @adataodyssey  11 months ago

      Thank you! I am here to help :)

  • @NA-ug5eq
    @NA-ug5eq 1 day ago

    Amazing video. Thank you so much.
    I have one question, please: when explaining KernelSHAP, what do you mean by permuting values? What do the grey circles in the graph at 2:28 mean? Does permuting refer to changing the feature order (this is not clear in the graph at 2:28), or does it refer to replacing some feature values with random values?
    Thanks in advance for your response.
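
    For what it's worth, in KernelSHAP "permuting" refers to the second reading: features excluded from a coalition are replaced with values drawn from a background dataset; the feature order never changes. A minimal NumPy sketch of that masking step (the instance and background values here are made up for illustration, not taken from the video):

```python
import numpy as np

x = np.array([5.0, 3.0, 8.0])            # instance being explained
background = np.array([1.0, 1.0, 1.0])   # background/reference values

# A coalition: True = feature kept from the instance,
# False = feature "removed" (masked out).
coalition = np.array([True, False, True])

# Masking a feature means substituting a background value for it;
# the positions of the features are untouched.
sample = np.where(coalition, x, background)
print(sample)  # [5. 1. 8.]
```

    The model is then evaluated on many such masked samples to estimate each feature's contribution.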

  • @azizjedidi1180
    @azizjedidi1180 11 months ago

    Great video man. Thank you very much.

    • @adataodyssey
      @adataodyssey  11 months ago +1

      I’m glad you enjoyed it Aziz!

  • @jenilsaliya3769
    @jenilsaliya3769 1 year ago +1

    Good explanation on the topic,
    thank you sir

  • @cesarepiscopo2549
    @cesarepiscopo2549 5 months ago

    AMAZING WORK!

  • @AZ-ph7gg
    @AZ-ph7gg 11 months ago

    Great explanation!

  • @sasaglamocak2846
    @sasaglamocak2846 1 year ago

    Great video. You mentioned that KernelSHAP suffers from extrapolation if features are correlated, like other permutation based methods. What about TreeSHAP with e.g., XGBoost?

    • @adataodyssey
      @adataodyssey  1 year ago +3

      Hi Sasa, this is a great question. To be honest, I don't completely understand the TreeSHAP algorithm.
      Looking into some other literature, it seems like TreeSHAP is not affected by correlations in the same way as KernelSHAP. "KernelSHAP ignores feature dependence. ... TreeSHAP solves this problem by explicitly modeling the conditional expected prediction." They then go on to say: "While TreeSHAP solves the problem of extrapolating to unlikely data points, it does so by changing the value function and therefore slightly changes the game. TreeSHAP changes the value function by relying on the conditional expected prediction. With the change in the value function, features that have no influence on the prediction can get a TreeSHAP value different from zero." You can read more here: christophm.github.io/interpretable-ml-book/shap.html
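
      The extrapolation problem mentioned here can be shown with a quick NumPy sketch (illustrative only, not the SHAP package's internals): when two features are strongly correlated, marginal sampling of the kind KernelSHAP uses breaks that dependence and produces points far off the data manifold, which TreeSHAP's conditional value function avoids.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two strongly correlated features: x2 tracks x1 closely.
x1 = rng.normal(size=1000)
x2 = x1 + rng.normal(scale=0.1, size=1000)
X = np.column_stack([x1, x2])

# KernelSHAP-style marginal sampling: replace x2 with values drawn
# from the background data, ignoring its dependence on x1.
perm = X.copy()
perm[:, 1] = rng.permutation(X[:, 1])

# The dependence is destroyed, so many (x1, x2) pairs now lie in
# regions of feature space the model never saw during training.
print(np.corrcoef(X.T)[0, 1])     # close to 1
print(np.corrcoef(perm.T)[0, 1])  # close to 0
```

      A model evaluated on those unlikely pairs is extrapolating, which is why the resulting attributions can be unreliable for correlated features.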

    • @sasaglamocak2846
      @sasaglamocak2846 Рік тому +1

      @@adataodyssey great, thanks for the answer