Open the Black Box: an Introduction to Model Interpretability with LIME and SHAP - Kevin Lemagnen

  • Published 1 Dec 2024

COMMENTS • 3

  • @maheshmm2
    @maheshmm2 2 years ago +14

    3:33 - GitHub and Colab links to the code
    5:20 - Why is it important? Data bias
    12:42 - Explain like I'm 5
    14:19 - Introduction to interpretability (Jupyter code)
    15:38 - from sklearn.compose import ColumnTransformer (see the code sketch after this comment)
    20:55 - Train/test split
    21:41 - White-box models, logistic regression
    30:00 - Probability, score explained
    35:27 - Decision tree
    36:28 - LIME
    45:17 - LIME API
    46:00 - Random Forest
    60:23 - SHAP
    64:05 - SHAP API
    75:31 - Non-tabular data
    83:00 - Conclusion
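
A rough, self-contained sketch of the steps this index points to (ColumnTransformer preprocessing, a train/test split, a random forest, then the LIME and SHAP APIs). The DataFrame, column names and class labels below are made up for illustration and are not from the talk; the lime and shap calls follow those packages' public APIs, but the exact setup in Kevin Lemagnen's notebook may differ.

    import numpy as np
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    # Made-up data standing in for the talk's dataset.
    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "age": rng.integers(18, 80, n),
        "fare": rng.normal(30.0, 10.0, n),
        "sex": rng.choice(["male", "female"], n),
        "embarked": rng.choice(["S", "C", "Q"], n),
    })
    df["target"] = ((df["sex"] == "female") | (df["age"] < 30)).astype(int)

    # 15:38 - preprocess numeric and categorical columns with ColumnTransformer
    # (sparse_output needs sklearn >= 1.2; older versions use sparse=False).
    preprocess = ColumnTransformer([
        ("num", StandardScaler(), ["age", "fare"]),
        ("cat", OneHotEncoder(sparse_output=False, handle_unknown="ignore"),
         ["sex", "embarked"]),
    ])

    # 20:55 / 46:00 - train/test split and a random forest on the encoded features
    X_train, X_test, y_train, y_test = train_test_split(
        df.drop(columns="target"), df["target"], test_size=0.2, random_state=0)
    model = Pipeline([("prep", preprocess),
                      ("clf", RandomForestClassifier(n_estimators=100, random_state=0))])
    model.fit(X_train, y_train)

    prep, clf = model.named_steps["prep"], model.named_steps["clf"]
    X_train_enc = prep.transform(X_train)
    X_test_enc = prep.transform(X_test)
    feature_names = list(prep.get_feature_names_out())

    # 45:17 - LIME API: fit a local linear surrogate around one test row
    from lime.lime_tabular import LimeTabularExplainer
    lime_explainer = LimeTabularExplainer(
        X_train_enc, feature_names=feature_names,
        class_names=["class 0", "class 1"], mode="classification")
    exp = lime_explainer.explain_instance(X_test_enc[0], clf.predict_proba,
                                          num_features=6)
    print(exp.as_list())   # top (feature condition, weight) pairs for the explained class

    # 64:05 - SHAP API: additive attributions for tree models
    import shap
    shap_explainer = shap.TreeExplainer(clf)
    shap_values = shap_explainer.shap_values(X_test_enc)
    # per-class attributions; the return format (list of arrays vs. one 3-D array)
    # depends on the shap version

The point of routing everything through the fitted preprocessor is that LIME and SHAP both see the same numeric feature matrix the forest was trained on.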

  • @narotian
    @narotian 2 years ago +2

    I appreciate that, well explained.

  • @bryanparis7779
    @bryanparis7779 2 years ago

    55:58 According to LIME, do these blue feature contributions really sum up to the probability of 0.71 if we show all contributions? Similarly, are we sure the orange ones sum up to 0.29? I have examples that leave me confused about this...
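
A hedged note on this question: in LIME's default plot the 0.71 / 0.29 bars are the classifier's predicted probabilities, while the blue and orange bars are the positive and negative weights of the local linear surrogate for the explained class, so the two colors are not expected to sum to 0.71 and 0.29 separately. What the intercept plus the surrogate weights reconstructs is the surrogate's own local prediction (exp.local_pred in the lime package), which approximates, but generally does not equal, the classifier's probability. A quick way to check, reusing the hypothetical exp, clf and X_test_enc objects from the sketch above:

    label = 1                           # index of the class shown as 0.71 in the plot
    pairs = exp.as_list(label=label)    # only the top num_features weights are listed
    surrogate_sum = exp.intercept[label] + sum(w for _, w in pairs)

    print(surrogate_sum)                # intercept + displayed surrogate weights
    print(exp.local_pred)               # the local surrogate's own prediction
    print(clf.predict_proba(X_test_enc[:1])[0, label])  # the classifier's probability

By contrast, SHAP values are built to be additive: for shap's TreeExplainer, the base value plus the per-feature SHAP values reproduces the model's output for that instance and class exactly.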