A Data Odyssey
PDPs and ICE Plots | Python Code | scikit-learn Package
Both Partial Dependence Plots (PDPs) and Individual Conditional Expectation (ICE) plots are popular explainable AI (XAI) methods. They visualise the relationships a machine learning model uses to make predictions. In this video, we will see how to apply the methods in Python using the scikit-learn package and its PartialDependenceDisplay and partial_dependence functions.
We will see how this makes it easy to create the plots listed below (a short code sketch follows the list):
- PDPs for individual features
- 2-dimensional PDPs
- Custom ICE Plots
- ICE Plots for categorical features
- ICE Plots for binary target variables
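For anyone following along without the video, a minimal sketch of the scikit-learn calls is shown below. It assumes a fitted `model` and a training DataFrame `X`; "feat_a" and "feat_b" are placeholder feature names, not columns from the video's dataset.

```python
# Minimal sketch of the scikit-learn PDP/ICE calls (not the video's exact code).
from sklearn.inspection import PartialDependenceDisplay, partial_dependence

# A 1D PDP, overlaid PDP + ICE curves for a second feature, and a 2D PDP
PartialDependenceDisplay.from_estimator(
    model, X,
    features=["feat_a", "feat_b", ("feat_a", "feat_b")],
    kind=["average", "both", "average"],  # "average" = PDP, "individual"/"both" = ICE
)

# Raw values, useful for building a custom ICE plot
res = partial_dependence(model, X, features=["feat_a"], kind="individual")
ice_lines = res["individual"]  # shape: (n_outputs, n_samples, n_grid_points)
grid = res["grid_values"]      # list of grid arrays ("values" on older scikit-learn)
```

For a binary target, the plotted quantity for classifiers exposing predict_proba is by default the predicted probability of the positive class.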
🚀 Free Course 🚀
Sign up here: mailchi.mp/40909011987b/signup
XAI course: adataodyssey.com/courses/xai-with-python/
SHAP course: adataodyssey.com/courses/shap-with-python/
🚀 Companion article with link to code (no-paywall link): 🚀
towardsdatascience.com/the-ultimate-guide-to-pdps-and-ice-plots-4182885662aa?sk=31087acdb2a0dbcf44fcce459b643773
🚀 Useful playlists 🚀
XAI: ua-cam.com/play/PLqDyyww9y-1SwNZ-6CmvfXDAOdLS7yUQ4.html
SHAP: ua-cam.com/play/PLqDyyww9y-1SJgMw92x90qPYpHgahDLIK.html
Algorithm fairness: ua-cam.com/play/PLqDyyww9y-1Q0zWbng6vUOG1p3oReE2xS.html
🚀 Get in touch 🚀
Medium: conorosullyds.medium.com/
Threads: www.threads.net/@conorosullyds
Twitter: conorosullyDS
Website: adataodyssey.com/
🚀 Chapters 🚀
00:00 Introduction
00:51 Application with scikit-learn
02:21 Applying PDPs
08:22 Custom ICE Plot
09:48 2D PDPs
10:54 Categorical features
11:47 Binary target variables
Views: 96

Videos

Partial Dependence (PDPs) and Individual Conditional Expectation (ICE) Plots | Intuition and Math
Views: 232 • 14 days ago
Both Partial Dependence Plots (PDPs) and Individual Conditional Expectation (ICE) plots are used to understand and explain machine learning models. PDPs can tell us whether the relationship between a model feature and the target variable is linear, non-linear or absent. Similarly, ICE plots are used to visualise interactions. Now, at first glance, these plots may look complicated. But you w...
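For reference, the quantity a PDP estimates is the model's prediction averaged over the data, with the feature subset S of interest held fixed. This is the standard textbook definition, not necessarily the exact notation used in the video:

```latex
\hat{f}_S(x_S) = \frac{1}{n} \sum_{i=1}^{n} \hat{f}\left(x_S,\, x_C^{(i)}\right)
```

Here C is the set of all other features and x_C^{(i)} are their values for instance i. Each ICE curve is one term of the sum traced over a grid of x_S values for a single instance; the PDP is the average of those curves.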
Permutation Feature Importance from Scratch | Explanation & Python Code
Views: 371 • 21 days ago
Feature importance scores are a collection of methods all used to answer one question: which machine learning model features have contributed the most to predictions in general? Amongst all these methods, permutation feature importance is the most popular. This is due to its intuitive calculation and because it can be applied to any machine learning model. Understanding PFI is also an importan...
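As a rough sketch of the idea (assuming a fitted `model`, a pandas DataFrame `X`, a target `y`, and a metric such as sklearn's r2_score; this is not the video's exact code):

```python
# From-scratch permutation feature importance (illustrative sketch only).
import numpy as np

def permutation_importance_scratch(model, X, y, score, n_repeats=5, seed=0):
    rng = np.random.default_rng(seed)
    baseline = score(y, model.predict(X))
    importances = {}
    for col in X.columns:
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Shuffle one column, breaking its relationship with the target
            X_perm[col] = rng.permutation(X_perm[col].values)
            drops.append(baseline - score(y, model.predict(X_perm)))
        importances[col] = float(np.mean(drops))  # average drop in performance
    return importances
```

scikit-learn also ships a ready-made version, sklearn.inspection.permutation_importance, which implements the same idea with more options.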
Model Agnostic Methods for XAI | Global v.s. Local | Permutation v.s. Surrogate Models
Views: 229 • 1 month ago
Model agnostic methods can be used with any model. In Explainable AI (XAI), this means we can use them to interpret models without looking at their inner workings. This gives us a powerful way to interpret and explain complex black-box machine learning models. We will elaborate on this definition. We will also discuss the taxonomy of model agnostic methods for interpretability. They can be classi...
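One of the families mentioned in the title is surrogate models. A minimal global-surrogate sketch, assuming a regression setting with a fitted black-box model `black_box` and training data `X` (both placeholders, not from the video):

```python
# Global surrogate sketch: approximate a black-box model with a shallow tree.
from sklearn.tree import DecisionTreeRegressor

surrogate = DecisionTreeRegressor(max_depth=3)
surrogate.fit(X, black_box.predict(X))  # fit to the black box's predictions, not the true target

# Fidelity: how well the surrogate mimics the black box (R^2 against its predictions)
fidelity = surrogate.score(X, black_box.predict(X))
print(f"Surrogate fidelity: {fidelity:.2f}")
```

Permutation-based methods, by contrast, probe the black box directly by perturbing its inputs, as in the permutation feature importance sketch above.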
8 Plots for Explaining Linear Regression | Residuals, Weight, Effect & SHAP
Views: 633 • 1 month ago
For data scientists, a regression summary might be all that's needed to understand a linear model. However, when explaining these models to a non-technical audience, it’s crucial to employ more digestible visual explanations. These 8 methods not only make linear regression more accessible but also enrich your analytical storytelling, making your findings resonate with any audience. We understan...
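As a small illustration of one of the plots named in the title, here is a rough sketch of a weight (coefficient) plot. It assumes `model` is a fitted sklearn LinearRegression and `X` is the (ideally standardised) feature DataFrame; neither comes from the video.

```python
# Sketch of a weight (coefficient) plot for a fitted linear regression.
import matplotlib.pyplot as plt

plt.barh(list(X.columns), model.coef_)
plt.axvline(0, color="grey", linewidth=1)
plt.xlabel("Coefficient (weight)")
plt.title("Weight plot")
plt.tight_layout()
plt.show()
```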
Feature Selection using Hierarchical Clustering | Python Tutorial
Views: 974 • 1 month ago
In this comprehensive Python tutorial, we delve into feature selection for machine learning with hierarchical clustering. We guide you through the essentials of partitioning features into cohesive groups to minimize redundancy in model training. This technique is particularly important as your dataset expands, offering a structured alternative to manual grouping. What you'll learn: - The import...
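A rough sketch of the general approach, assuming a pandas DataFrame `X` of numeric features (the correlation measure, linkage method and distance threshold are illustrative choices, not necessarily the tutorial's exact settings):

```python
# Sketch: group correlated features with hierarchical clustering, keep one per cluster.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

corr = np.abs(X.corr(method="spearman").values)
distance = 1.0 - corr                                # highly correlated -> small distance
Z = linkage(squareform(distance, checks=False), method="average")
clusters = fcluster(Z, t=0.3, criterion="distance")  # threshold is a judgement call

shortlist = {}
for col, cluster_id in zip(X.columns, clusters):
    shortlist.setdefault(cluster_id, col)            # keep the first feature in each group
selected = list(shortlist.values())
```

From here, feature importance scores can be used to pick the best representative from each group rather than simply the first one.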
8 Characteristics of a Good Machine Learning Feature | Predictive, Variety, Interpretability, Ethics
Views: 168 • 1 month ago
Feature selection is hard! So, I explain how you can use a combination of variable clustering and feature importance to help create a shortlist. I will also explain the key factors you need to consider when selecting features. The most important are predictive power and predictor variety. But there are also other considerations including data quality and availability, feature stability, interpr...
Interpretable Feature Engineering | How to Build Intuitive Machine Learning Features
Views: 363 • 1 month ago
There are many ways to capture underlying relationships in your data. Some will be easier to explain as they align with the intuition of your audience. So we should really be doing feature engineering not just for predictability but also for interpretability. We’re going to discuss how to reformulate features with the goal of interpretability. At the same time, we’re going to understand how to ...
Modelling Non-linear Relationships with Regression
Views: 358 • 2 months ago
This video makes the case for linear models. Its goal is to convince you that they should always be your first choice, especially if you care about model interpretability. This is because they are easier to explain, widely understood and accepted in many industries. Building them also requires you to think more critically about your problem and data. Most importantly, a well-structured linear ...
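A tiny illustration of the idea, assuming a DataFrame `X` with a numeric column "age" and a target `y` (placeholders, not the video's example):

```python
# Sketch: capture a non-linear relationship while keeping an interpretable linear model.
from sklearn.linear_model import LinearRegression

X_fe = X.copy()
X_fe["age_squared"] = X_fe["age"] ** 2  # explicit, explainable non-linear term
model = LinearRegression().fit(X_fe, y)
# The coefficients on "age" and "age_squared" can still be read and explained directly.
```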
Explaining Machine Learning to a Non-technical Audience
Views: 410 • 2 months ago
An important part of a data scientist’s job is to explain machine learning model predictions. Often, the person receiving the explanation will be non-technical. If you start talking about cost functions, hyperparameters or p-values you will be met with blank stares. We need to translate these technical concepts into layman’s terms. This process can be more challenging than building the model it...
Get more out of Explainable AI (XAI): 10 Tips
Views: 464 • 2 months ago
Explainable Artificial Intelligence (XAI), also known as Interpretable Machine Learning (IML), can explain complex machine learning models. But the methods are not a silver bullet. You can’t simply fire them at black-box models and expect reasonable explanations of their inner workings. Yet they can still provide incredible insight if used correctly. So, I give 10 tips for getting the most o...
The 6 Benefits of Explainable AI (XAI) | Improve accuracy, decrease harm and tell better stories
Views: 505 • 2 months ago
Explainable AI (XAI), also known as interpretable machine learning (IML), can help you understand and explain your model. This has many benefits. It can help decrease harm and increase trust in machine learning. You can also gain knowledge of your dataset and tell better stories about your results. You can even improve the accuracy of your models and performance in production. We will discuss t...
Introduction to Explainable AI (XAI) | Interpretable models, agnostic methods, counterfactuals
Views: 1.6K • 3 months ago
Artificial intelligence (AI) and machine learning (ML) impact our lives in many ways. From mundane tasks to critical decision-making processes, AI's role is becoming more central. As a result, the need for transparency and interpretability of these systems is growing. This is why we need the field of Explainable AI (XAI), also known as interpretable machine learning (IML). We will take a brief loo...
Data Science vs Science | Differences & Bridging the Gap
Views: 294 • 7 months ago
Join us on a journey as we delve into the world of data science and explore its relationship with traditional scientific methodology. In this video, we'll: 1. Uncover the Scientific Process: We break down the scientific method, the gold standard for understanding the natural world. 2. Examine Data Science's Distinctions: Discover where data science diverges from traditional scientific principle...
About the Channel and my Background | ML, XAI and Remote Sensing
Views: 747 • 7 months ago
Thanks for 1000 subscribers!! Taking this as an opportunity to talk about the future of the channel and my background as a Data Scientist. Excited to share content about machine learning, explainable AI (XAI) and remote sensing. 🚀 See some of my popular videos 🚀 SHAP with Python: ua-cam.com/video/L8_sVRhBDLU/v-deo.html Math behind SHAP: ua-cam.com/video/UJeu29wq7d0/v-deo.html Explainable vs Inte...
SHAP for Binary and Multiclass Target Variables | Code and Explanations for Classification Problems
Views: 7K • 8 months ago
Introduction to Algorithm Fairness | Causes, Measuring & Preventing Unfairness in Machine Learning
Views: 1.2K • 8 months ago
SHAP Violin and Heatmap Plots | Interpretations and New Insights
Views: 3.6K • 9 months ago
Correcting Unfairness in Machine Learning | Pre-processing, In-processing, Post-processing
Views: 614 • 9 months ago
Definitions of Fairness in Machine Learning | Equal Opportunity, Equalized Odds & Disparate Impact
Views: 1.9K • 9 months ago
Exploratory Fairness Analysis | Quantifying Unfairness in Data
Views: 718 • 9 months ago
5 Reasons for Unfair Models | Proxy Variables, Unbalanced Samples & Negative Feedback Loops
Views: 561 • 10 months ago
Feature Engineering with Image Data | Aims, Techniques & Limitations
Views: 678 • 11 months ago
Image Augmentation for Deep Learning | Benefits, Techniques & Best Practices
Views: 976 • 1 year ago
Interpretable vs Explainable Machine Learning
Views: 14K • 1 year ago
4 Significant Limitations of SHAP
Views: 8K • 1 year ago
Shapley Values for Machine Learning
Views: 10K • 1 year ago
The mathematics behind Shapley Values
Views: 18K • 1 year ago
SHAP with Python (Code and Explanations)
Views: 45K • 1 year ago
SHAP values for beginners | What they mean and their applications
Views: 35K • 1 year ago

COMMENTS

  • @mahsadehghan-ws1kn • 1 day ago

    Thank you so much for this awesome video. When I use this code in the #Train model section, I encounter this error. What is the solution? [17:50:59] C:\buildkite-agent\builds\buildkite-windows-cpu-autoscaling-group-i-0b3782d1791676daf-1\xgboost\xgboost-ci-windows\src\data\array_interface.h:492: Unicode-7 is not supported.

    • @adataodyssey • 15 hours ago

      There could be many things going wrong. You can try creating a Python environment and downloading the XGBoost package and only the other ones necessary to train the model.

  • @abdelbaki8625 • 5 days ago

    What is the article reference for this information? I need it urgently for my studies, please.

    • @adataodyssey • 3 days ago

      towardsdatascience.com/from-shapley-to-shap-understanding-the-math-e7155414213b?sk=329a1f042a0167162487f7bb3f0ffd46

  • @TheCsePower • 5 days ago

    Would be nice if the pdp had some kind of confidence interval that varied with the feature value.

    • @adataodyssey • 3 days ago

      That's a good idea! You might be able to use the std of the prediction around each point. It would be related to the ICE plot where a point would have a larger std if not all the individual lines follow the same trend.
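Following the suggestion in this thread, a rough sketch of such a band is the standard deviation across the ICE curves at each grid point. `model`, `X` and the feature name "feat_a" are placeholders:

```python
# Sketch: PDP curve with a band showing the spread of the ICE curves.
import matplotlib.pyplot as plt
from sklearn.inspection import partial_dependence

res = partial_dependence(model, X, features=["feat_a"], kind="individual")
ice = res["individual"][0]    # shape: (n_samples, n_grid_points)
grid = res["grid_values"][0]  # "values" instead of "grid_values" on older scikit-learn

pdp, std = ice.mean(axis=0), ice.std(axis=0)
plt.plot(grid, pdp)
plt.fill_between(grid, pdp - std, pdp + std, alpha=0.3)
plt.xlabel("feat_a")
plt.ylabel("Partial dependence")
plt.show()
```

Note the band reflects heterogeneity across instances (interactions) rather than a formal confidence interval.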

  • @abdelbaki8625 • 6 days ago

    I don't understand

  • @v-ba • 6 days ago

    Great explanation, thank you very much

  • @shubhanshisinghms7745 • 7 days ago

    Can you make a video on how recruitment decisions are made?

    • @adataodyssey • 3 days ago

      Do you mean how automated decisions are made or decisions for data scientists in general?

  • @youmustbenewhereguy • 12 days ago

    How do you know which image manipulation parameters will be robust to whatever data the model will face in the future?

    • @adataodyssey • 3 days ago

      This is a difficult question to answer as it will depend on your problem. In general, you will need a robust dataset that includes images taken under all conditions for which the model is expected to operate. Then you can evaluate the models trained using different feature engineering methods on this dataset.

  • @felicebugge • 22 days ago

    Really useful, thank you

  • @slimeminem7402 • 24 days ago

    Personally, I don't think the distinction is necessary.

    • @adataodyssey • 23 days ago

      I agree :) But I did think it was important when I first got into XAI.

  • @graziaedumabagraham5346 • 24 days ago

    Great content.

  • @graziaedumabagraham5346 • 24 days ago

    Great content

  • @seanjohn6956 • 26 days ago

    Just stick to the explanations, no need for the jarring ad-libs.

  • @yelancho • 29 days ago

    Appreciate it a lot, Prof Odyssey! Shapley values are now a much clearer concept in my mind!

    • @adataodyssey • 28 days ago

      Thanks Ye! I'm glad you found it useful :)

  • @youtubeuser4878 • 1 month ago

    Hello. Thanks for the tutorial. Regarding your XAI and SHAP courses, is there an order in which we should take them? Should we take XAI before SHAP, or vice versa? Thanks

    • @adataodyssey • 1 month ago

      No problem! It is better to take XAI first, then SHAP. XAI covers more of the basics in the field and other useful model agnostic methods. But the SHAP course still covers some basics, so it is not necessary to do the entire XAI course (or even any of it) if all you care about is learning SHAP :)

    • @youtubeuser4878 • 1 month ago

      @adataodyssey Awesome. Thank you.

  • @innocentjoseph9084 • 1 month ago

    Excellent explanation, just what I needed. Thank you.

    • @adataodyssey • 1 month ago

      I’m glad you found it useful, Innocent :)

  • @shazajmal9695 • 1 month ago

    Thanks Bruh! Great content! Would be happy if you upload a video comparing SHAP with LIME and Integrated Gradients. It's a hot topic right now in data science interviews.

    • @adataodyssey • 1 month ago

      Thanks for the suggestion! Would this be w.r.t. computer vision models and deep learning?

  • @franciscoletelier5598 • 1 month ago

    Great! Always clear

  • @dantedt3931 • 1 month ago

    This is awesome!

  • @kalebomb5018 • 1 month ago

    Thanks for the content on XAI and particularly SHAP, it's given me a good overview before I jump into the details. I have a sci-fi book recommendation for you: Hyperion and The Fall of Hyperion by Dan Simmons =) The first book is told from the perspective of 7 characters as they visit/revisit the planet of Hyperion that they've had dealings with in the past. Hyperion is a fringe planet in the Hegemony of Man, not connected via Farcaster, and thus a visit incurs significant time dilation. On the planet are artefacts from another intelligent force: the Time Tombs, a location with whacky time reversal effects, a 3 meter tall metallic creature covered in spikes known as the Shrike (which also has time manipulation abilities), and more. Identified as the only significant anomaly in the AI faction's predictions, everything seems to be converging on Hyperion as the Time Tombs open... Genuinely incredible read

    • @adataodyssey • 1 month ago

      Thanks! I actually just finished a book so this is good timing :)

  • @ShivSingh-zv1xw • 1 month ago

    I have recently joined your course on eXplainable Artificial Intelligence (XAI) and I am interested in applying the concepts of interpretability to image data while ensuring that the model's accuracy is preserved. Please do create some videos on that topic.

    • @adataodyssey • 1 month ago

      You're in luck! The next course I want to create will be XAI for computer vision. So expect to see some content soon.

  • @ShivSingh-zv1xw • 1 month ago

    I have recently joined your course on eXplainable Artificial Intelligence (XAI) and I am interested in applying the concepts of interpretability to image data while ensuring that the model's accuracy is preserved. Please do create some videos on that topic. Thank you!

  • @karthikeyapervela3230 • 1 month ago

    Thanks, I was recently reading a post on LinkedIn about how to eliminate highly correlated features with hierarchical clustering, but it was not clear. This is explained much better.

    • @adataodyssey • 1 month ago

      Thanks Karthikeya! I'm glad you found it useful. I have another video coming out tomorrow about explaining linear models.

  • @mulusewwondieyaltaye4937 • 1 month ago

    I can't access the SHAP Python course. Could you please give me access?

    • @adataodyssey • 1 month ago

      Hi Mulusew, the SHAP course is no longer free. But you will now get free access to my XAI course if you sign up to the newsletter

  • @arjendeniz6828 • 1 month ago

    Thank you so much! I was stuck on a hierarchical analysis as I did not know that I needed to transpose my dataframe. Great video!

    • @adataodyssey • 1 month ago

      I’m glad you found this useful ☺️

  • @rodrigoccamargos • 1 month ago

    This is the best channel so far for XAI content. Keep going!

    • @adataodyssey • 1 month ago

      Thank you! I appreciate that :D

  • @rajeshkalakoti2434 • 1 month ago

    Can you give an example of how to plot heatmaps for a PyTorch model?

    • @adataodyssey • 1 month ago

      I will keep this in mind. I am planning to do a few tutorials using different packages --- scikit-learn, CatBoost, PyTorch, etc.

  • @satk4211 • 1 month ago

    Excellent video ❤❤❤❤❤❤

    • @adataodyssey • 1 month ago

      Thank you ☺️ I’m glad it could help

  • @dance__break4155 • 2 months ago

    What's the color of your eyes?

  • @silver_soul98 • 2 months ago

    Bro that was a nice explanation. Thanks so much.

    • @adataodyssey • 2 months ago

      No problem :) I’m glad it was useful

  • @Hoxle-87 • 2 months ago

    Excellent! You gave me an idea 💡 Great job!

    • @adataodyssey • 2 months ago

      Thanks Jose! I'm glad I could help

  • @cyberlando • 2 months ago

    Where is the link for the code for the insurance model?

    • @adataodyssey • 2 months ago

      github.com/a-data-odyssey/XAI-tutorial/blob/main/src/intro/human_friendly_explanations.ipynb

  • @miguelgarciaortegon • 2 months ago

    Great explanation, thank you!

    • @adataodyssey • 2 months ago

      I'm glad you found it useful Miguel :)

  • @umarkhan-hu7yt • 2 months ago

    Dear Odyssey, you are doing great. Keep going and keep hitting hard on all the XAI methods for us laymen.

  • @jav65 • 2 months ago

    Why do I get a feeling that you are an AI?

    • @adataodyssey • 2 months ago

      haha I'm still learning to be natural on camera :) be kind!

    • @jav65 • 2 months ago

      I was just kidding... you have done an excellent presentation of the topic, thank you so much for this video

    • @adataodyssey • 2 months ago

      @jav65 No problem Jav! More on the way :)

  • @HarrisBaig • 2 months ago

    give this man a gold medal

    • @adataodyssey • 2 months ago

      Thank you Harris. I take it you enjoyed the video :)

  • @arelyayala4409 • 2 months ago

    I just found your channel after reading your posts on Medium. I am currently working with satellite images, and I want to express my admiration for your excellent work. I have a question regarding the method you present in the article "Removing Clouds from Landsat Satellite Images with Python." Would it be feasible to implement it using Google Earth Engine (GEE) for image acquisition and other related processes?

  • @shotclock5424 • 2 months ago

    This is the best way to explain explanations 😁 I am interested in seeing a video of yours with more complex models, like deep neural networks on signal data, and how we can use SHAP on them. Great work!

    • @adataodyssey • 2 months ago

      Thank you! I will keep that in mind

  • @ogmarekianko6072 • 2 months ago

    Excellent content, keep it up!

  • @fouried96 • 2 months ago

    Love to see a fellow South African in this line of work!

    • @adataodyssey • 2 months ago

      Howzit! Will keep the videos coming :)

    • @fouried96 • 2 months ago

      @adataodyssey Sweet! I followed you on LinkedIn for any other posts outside of YouTube. I was just curious, how does Ireland's grading system work for a masters? I see you have a 1.1. I have no idea what that means, having only studied in SA lol :P

    • @adataodyssey • 2 months ago

      @fouried96 That's 75% or above. They don't distinguish beyond that. The Irish are not so big on grading :D

    • @fouried96 • 2 months ago

      @adataodyssey Congrats! I am busy following this SHAP series. I'm looking to find the best features for this Kaggle comp, a multiclass classification problem where I'm using XGBoost. I was wondering, are you on Kaggle?

  • @MrKeastman • 2 months ago

    Love this topic. Subscribed. I build evaluation frameworks for ML models in industry. Most ML scientists I deal with are trying to replicate the "science" they see in academic ML research. Explaining how it works isn't much of a priority for stakeholders or the scientists until something goes wrong. Hence people's acceptance of a "black box". Saying to both groups that they should be more "scientific" will probably not go that well, even if you explain to them what "true" science is. Part of the reason is that the word "science" itself is at this point pretty stigmatized due to fraud and replication issues across science (good channel -> www.youtube.com/@PeteJudo1). I find fear-mongering more effective, plus showing an outline of what a robust framework looks like.

    • @adataodyssey • 2 months ago

      Thanks Kyler! By fear-mongering do you mean "we must follow best principles else we will lose a lot of money!"?