How to interpret
- Published 5 Mar 2023
- How to interpret #shapley Summary Plot | #beeswarm Plot Interpretation | #ExplainableAI #XAI
Topics Covered:
1. Build an #xgboost classifier
2. Generate the xgboost explainer and SHAP values
3. Plot the Shapley summary plot
4. How to interpret the Shapley summary plot
5. Understanding feature importance using Shapley values
6. How does it work for categorical features?
If you find this video helpful, don't forget to like, share, and subscribe. This is how you can support me.
Connect with me:
LinkedIn: / ashutoshtripathiai
Instagram: / ashutoshtripathi_ai
Twitter: / ashutosh_ai
Website: ashutoshtripathi.com
If you want to message me directly, connect with me on LinkedIn and send a DM.
#ExplainableAI #FeatureImportance #shapley #xgboost #xai
If you face resolution or clarity issues in the video, please try increasing the video quality to 720p or higher from the settings menu. Thank you.
Awesome explanation. Waiting for the SHAP categorical variables video.
Best video till now. Sir, please continue the series and kindly make a playlist of it. Waiting for the explanation of categorical features.
Awesome video! Very helpful. Thank you for your insightful work.
Thank you
Thank you for such a wonderful explanation. It's the best video for understanding these plots. I have plotted these graphs for a multiclass problem (3 classes), and there too I am getting one threshold line at 0.0. How should I interpret this multiclass case? It would be of great help if you could reply. Thank you.
By far the best and easiest-to-understand explanation of the Bee Swarm plot. Kudos to your effort, Mr. Tripathi.
Thank you very much. Glad you liked it.
Nice in-depth explanation!
Thank you.
Sir, I watched this video and tried it on my own dataset; it worked very well and I got a lot of insights, so I would like to say thank you. Now I'm learning Explainable Boosting Machines (EBM) on my own, but I have one confusion: can we use EBM for multiclass classification problems, such as the IRIS data? There is no video on this topic anywhere on YouTube.
I will create a video on this and upload it. You will get notified. Thanks.
Sir, can we use SHAP to interpret feature contributions in glass-box models like LR and EBM as well?
SHAP works best with black-box models, but still give it a try with white-box or glass-box models and see the behaviour.
Sir, for the iris data, can you make a video on how LIME is used to explain the prediction result?
Yeah. Will upload soon.
@AshutoshTripathi_AI Your videos are on point and so informative. I did not find any video on YouTube about how to interpret multiclass classification using LIME, and I believe you can make this concept crystal clear soon.
Sir, if every dot is the feature's SHAP value for one specific instance, shouldn't every feature show the same number of dots? Then why do some features appear to have more instances than others?
Yes, all features show the same number of dots. Sometimes dots overlap, which is why we see fewer dots for a particular feature. You can verify this by reducing the dot size, or by plotting only 4 or 5 records and checking.
I have already shown the same with 5 records; you can watch the video at the 11:49 mark. Let me know if you have further queries.
I think you mean shapley values, not shapely values.
You are right, thanks for pointing that out.