3:33 : GitHub and Colab links to code
5:20 : Why is it important? Data bias
12:42 : Explain like I'm 5
14:19 : Introduction to interpretability (Jupyter code)
15:38 : from sklearn.compose import ColumnTransformer (see sketch below)
20:55 : Train/test split
21:41 : White-box models, logistic regression
30:00 : Probability vs. score explained
35:27 : Decision tree
36:28 : LIME
45:17 : LIME API (see sketch below)
46:00 : Random Forest
60:23 : SHAP
64:05 : SHAP API (see sketch below)
75:31 : Non-tabular data
83:00 : Conclusion
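
The import at 15:38 refers to sklearn's ColumnTransformer. A minimal sketch of how it is typically wired up (the column names here are made up for illustration, not taken from the video):

```python
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Apply different preprocessing to different subsets of columns:
preprocess = ColumnTransformer(
    transformers=[
        # scale numeric columns (hypothetical names)
        ("num", StandardScaler(), ["age", "income"]),
        # one-hot encode categorical columns (hypothetical names)
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["job", "city"]),
    ]
)
# preprocess.fit_transform(df) then feeds the model, e.g. inside a Pipeline.
```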
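For the LIME API segment at 45:17, a hedged sketch of the usual lime_tabular calls (the iris dataset and random forest here are stand-ins, not necessarily what the video uses):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
exp = explainer.explain_instance(
    data.data[0],            # the single row to explain
    model.predict_proba,     # the black-box probability function
    num_features=4,
)
print(exp.as_list())         # (feature condition, weight) pairs
```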
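And for the SHAP API at 64:05, a rough sketch using TreeExplainer (again on a stand-in model; the shape of the returned values differs across shap versions):

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)
# Classic shap returns a list with one (rows, features) array per class;
# newer versions may return one (rows, features, classes) array instead,
# in which case use shap_values[:, :, 1] below.
shap_values = explainer.shap_values(data.data[:100])
shap.summary_plot(shap_values[1], data.data[:100],
                  feature_names=list(data.feature_names))
```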
I appreciate that; well explained.
55:58 According to LIME, do the blue feature contributions really sum up to the probability of 0.71 if we show all of them? Similarly, are we sure the orange ones sum up to 0.29? I have examples that leave me confused about this....
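
For what it's worth, my understanding is that those bars are the coefficients of the local linear surrogate LIME fits, so together with the intercept they sum to the surrogate's local prediction, which only approximates the black-box 0.71. A sketch to check this yourself (dataset and model are stand-ins, not the ones from the video):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data, feature_names=list(data.feature_names), mode="classification"
)
# Ask for every feature so no contribution is hidden:
exp = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=data.data.shape[1]
)

label = 1                                    # class the explanation is for
weights = [w for _, w in exp.as_map()[label]]
print(exp.intercept[label] + sum(weights))   # intercept + bar weights
print(exp.local_pred)                        # the surrogate's local prediction
print(model.predict_proba(data.data[:1]))    # the black-box probability
```

The first two printed numbers should match closely; the third can differ, which would explain why the bars don't appear to sum to the displayed probability.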