How To Interpret The ML Model? Is Your Model Black Box? Lime Library
- Published 28 Nov 2020
- GitHub: github.com/krishnaik06/Lime-M...
In this video we will see how we can interpret ML models using the LIME library.
⭐ Kite is a free AI-powered coding assistant that will help you code faster and smarter. The Kite plugin integrates with all the top editors and IDEs to give you smart completions and documentation while you’re typing. I've been using Kite for a few months and I love it! www.kite.com/get-kite/?...
All Playlists On My Channel
Interview Playlist: • Machine Learning Inter...
Complete DL Playlist: • Complete Road Map To P...
Julia Playlist: • Tutorial 1- Introducti...
Complete ML Playlist : • Complete Machine Learn...
Complete NLP Playlist: • Natural Language Proce...
Docker End To End Implementation: • Docker End to End Impl...
Live stream Playlist: • Pytorch
Machine Learning Pipelines: • Docker End to End Impl...
Pytorch Playlist: • Pytorch
Feature Engineering : • Feature Engineering
Live Projects : • Live Projects
Kaggle competition : • Kaggle Competitions
Mongodb with Python : • MongoDb with Python
MySQL With Python : • MYSQL Database With Py...
Deployment Architectures: • Deployment Architectur...
Amazon sagemaker : • Amazon SageMaker
Please donate if you want to support the channel, through the GPay UPI ID below.
Gpay: krishnaik06@okicici
Telegram link: t.me/joinchat/N77M7xRvYUd403D...
Please join my channel as a member to get additional benefits like Data Science materials, live streams for members, and much more.
/ @krishnaik06
Please also subscribe to my other channel:
/ @krishnaikhindi
Connect with me here:
Twitter: / krishnaik06
Facebook: / krishnaik06
Instagram: / krishnaik06
#LimePackage
Very informative.
Adding to this, we can use SHAP and PDP plots for global interpretation (to get an overall view of the model), and ICE and LIME for local interpretation (to study individual instances); ALE plots are another global technique.
I used kNN to gather similar data points as input for the LIME interpretation, since it trains a local interpretable model.
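For anyone curious what "trains a local interpretable model" means mechanically, here is a minimal sketch of LIME's core idea using only numpy and scikit-learn, not the actual lime library; the perturbation scale, sample count, and kernel width are arbitrary choices made for illustration:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train a "black-box" model.
X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Instance we want to explain.
x0 = X[0]

# 1. Perturb around x0 with Gaussian noise.
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=X.std(axis=0), size=(500, X.shape[1]))

# 2. Weight perturbed samples by proximity to x0 (exponential kernel).
dist = np.linalg.norm(Z - x0, axis=1)
weights = np.exp(-(dist ** 2) / (2 * dist.std() ** 2))

# 3. Fit a weighted linear surrogate to the black-box probabilities.
target = black_box.predict_proba(Z)[:, y[0]]
surrogate = Ridge(alpha=1.0).fit(Z, target, sample_weight=weights)

# The surrogate's coefficients are the local explanation.
print(dict(zip(load_iris().feature_names, surrogate.coef_.round(3))))
```

The real library adds feature discretization, categorical handling, and feature selection on top of this weighted-regression core.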
@Krish Just in time! I was about to make a Model Validation notebook for when the target is continuous: plotting residuals as a line plot with points for continuous features and as box plots for discrete features, to check whether the model is stable or not.
I'll use this in the notebook as well. Thanks a lot!
By the way, a useful Jupyter shortcut: "A" inserts a cell above, "B" inserts a cell below.
Just what I needed! Thank you!
Great, sir. I really appreciate your enthusiasm.
This is Explainable AI. LIME, ELI5, and SHAP can all be used. But not every technique works with every model: LIME and SHAP are model-agnostic, while some other techniques are model-specific. Again, nice one from Krish.
Can you tell me about SHAP and LIME, and where I can learn about them? It would be of great help!
Yes, you can learn from here: ua-cam.com/video/VB9uV-x0gtg/v-deo.html
Great, very informative!
Pls make a video on SHAP
Thanks a lot your videos are helpful
We can use Grad-CAM or saliency maps for interpreting deep learning models.
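As a toy illustration of the saliency-map idea (the gradient of the output with respect to the input tells you which features the prediction is most sensitive to), here is the analytic gradient of a small hand-weighted logistic model in numpy. Real deep-learning saliency uses a framework's autodiff on image pixels; the weights and input here are made-up values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "model": logistic classifier with fixed, hypothetical weights.
w = np.array([0.8, -0.1, 2.0, 0.05])
b = -0.5

def predict(x):
    return sigmoid(w @ x + b)

def saliency(x):
    # Gradient of the output probability w.r.t. each input feature:
    # d sigmoid(w.x + b) / dx_i = p * (1 - p) * w_i
    p = predict(x)
    return np.abs(p * (1 - p) * w)

x = np.array([1.0, 0.5, 0.2, 1.5])
s = saliency(x)
print(s)                   # larger value = more influential feature
print(int(np.argmax(s)))   # index 2 dominates, since |w[2]| is largest
```

Grad-CAM extends this idea by taking gradients with respect to convolutional feature maps rather than raw pixels, giving coarser but more semantic heatmaps.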
How is the model interpretation offered by LIME different from the feature importance method provided by random forest?
Very informative.
Try SHAP, which is also very good for explainable AI.
Hello sir, are we assuming that the features are independent of each other (no collinearity) before passing them to LIME for interpretation?
Should I avoid installing packages in my conda base env?
Krish sir, please make videos on visualizing deep learning models through attention mechanisms and Grad-CAM.
While using LIME to interpret XGBoost, at the interpretor.explain_instance step I get the error below:
ValueError: feature_names mismatch
Any idea how to resolve this? I am on xgboost version 0.90.
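A common cause of this error is that the XGBoost model was trained on a pandas DataFrame (so it remembers the column names), while LIME passes raw numpy arrays to the prediction function. One workaround is to wrap the prediction function so it rebuilds a DataFrame with the training columns first. This sketch uses a stand-in model and hypothetical column names rather than a real XGBoost model:

```python
import numpy as np
import pandas as pd

# Hypothetical column names matching the DataFrame used for training.
feature_names = ["age", "income", "score"]

def make_predict_fn(model, columns):
    """Wrap a model trained on a DataFrame so it accepts the raw
    numpy arrays that LIME's explain_instance passes in."""
    def predict_fn(X):
        return model.predict_proba(pd.DataFrame(X, columns=columns))
    return predict_fn

# Stand-in for the trained XGBoost classifier (illustration only).
class DummyModel:
    def predict_proba(self, df):
        assert isinstance(df, pd.DataFrame)   # column names now line up
        p = 1.0 / (1.0 + np.exp(-df["score"].to_numpy()))
        return np.column_stack([1 - p, p])

predict_fn = make_predict_fn(DummyModel(), feature_names)
probs = predict_fn(np.array([[35, 50000.0, 0.4], [52, 80000.0, -1.2]]))
print(probs.shape)  # (2, 2)
```

You would then pass `predict_fn` to `explain_instance` instead of the model's own `predict_proba`.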
Will this work if we have 3 categories in the target variable?
Please make videos on drift analysis
Can I get a heart? ❤❤
This is a local interpreter; how do we explain the overall model?
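One pragmatic answer (roughly the idea behind SP-LIME) is to fit local surrogates for many instances and aggregate their absolute coefficients into a global picture. This is a hedged numpy/scikit-learn sketch of that aggregation, not the lime library's own submodular-pick implementation; the sample sizes and kernel are arbitrary:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
rng = np.random.default_rng(0)

def local_coefs(x0, cls):
    # Perturb around x0, weight by proximity, fit a weighted linear surrogate.
    Z = x0 + rng.normal(scale=X.std(axis=0), size=(200, X.shape[1]))
    d = np.linalg.norm(Z - x0, axis=1)
    w = np.exp(-(d ** 2) / (2 * d.std() ** 2))
    target = model.predict_proba(Z)[:, cls]
    return Ridge(alpha=1.0).fit(Z, target, sample_weight=w).coef_

# Average absolute local coefficients over a random sample of instances.
idx = rng.choice(len(X), size=30, replace=False)
global_importance = np.mean(
    [np.abs(local_coefs(X[i], y[i])) for i in idx], axis=0
)
print(global_importance.round(3))
```

Features with consistently large local coefficients across many instances are the ones the model relies on globally.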
Kindly put a video about Language Interpretable tool too
Please make a video about zero-shot learning. 🧠🧠🧠🧠
Can I use this for LSTMs or other complex models? As far as I am aware, LIME works on linear models, but LSTMs and other deep neural networks are not linear. Please correct me if I am wrong. Also, could you please make a video on LRP (layer-wise relevance propagation) or similar techniques?
LIME is model-agnostic, so it can be used with any model.
Can I have a heart?
♥