Explainable AI explained! | #2 By-design interpretable models with Microsoft's InterpretML

  • Published 27 Sep 2024

COMMENTS • 30

  • @nintishia • 3 years ago • +8

    The notion of interpretable models is explained well. Thanks for demonstrating InterpretML as well as for introducing EBMs.

    • @DeepFindr • 3 years ago

      Thanks for the feedback!

  • @leksandradubovik6291 • a year ago • +2

    This series is genius; I hope you land a professorship one day.

  • @jeffnc • 2 years ago • +1

    Thanks so much for this series, it has been very informative and I like all of the examples and code you go through. I've now subscribed :)

    • @DeepFindr • 2 years ago

      Happy that you liked it! Thanks for the sub!

  • @qiguosun129 • 2 years ago • +7

    Great lectures providing intuition for my research methodology. Thank you so much.

  • @wexwexexort • 6 months ago • +1

    Job with children, lol.

  • @heejuneAhn • 11 months ago

    The logistic regression function is 1/(1 + exp(-y)), not 1/(1 + exp(+y)), and the impact df(y)/dx is not simply exp(beta).
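The commenter's two points can be checked numerically. A minimal sketch in plain Python (the variable names are illustrative, not from the video):

```python
import math

def sigmoid(y):
    # Logistic function: note the minus sign in the exponent.
    return 1.0 / (1.0 + math.exp(-y))

def sigmoid_grad(y):
    # The derivative of the sigmoid is s * (1 - s), not simply exp(beta).
    s = sigmoid(y)
    return s * (1.0 - s)

# The sigmoid is increasing and maps 0 to 0.5; with exp(+y) it would decrease.
assert sigmoid(0) == 0.5
assert sigmoid(2) > sigmoid(0) > sigmoid(-2)

# Cross-check the closed-form derivative against a numerical difference.
eps = 1e-6
numeric = (sigmoid(1 + eps) - sigmoid(1 - eps)) / (2 * eps)
assert abs(numeric - sigmoid_grad(1)) < 1e-6
```

With `exp(+y)` instead of `exp(-y)` the first assertion would fail, which is exactly the sign error the comment points out.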

  • @dinushachathuranga7657 • 6 months ago

    Thanks a lot❤

  • @mfaramarzi8333 • a year ago

    Thanks for this very useful series on explainable AI. Is there a way to extract the scores as an array?

  • @雷奇-w7b • 2 years ago • +1

    The sigmoid function is wrong. It should be 1/(1 + e^(-y)). Maybe the "-" was lost in this video.

    • @DeepFindr • 2 years ago • +1

      Yes, you are right! It got lost :) Thanks for pointing it out!

  • @beautyisinmind2163 • a year ago

    What is the mean absolute score in the overall feature importance?

  • @НикитаБуров-ъ6р • 6 months ago

    What magic that '# %%' cell marker is, thank you!
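For context: a `# %%` comment line turns a plain `.py` script into interactive, individually runnable cells in editors such as VS Code (with the Python extension) and PyCharm, while the file still runs top-to-bottom as an ordinary script. A minimal sketch:

```python
# %% Load data (each "# %%" line starts a new runnable cell)
numbers = [1, 2, 3, 4]

# %% Compute a result; this cell can be re-run on its own in the editor
total = sum(numbers)
print(total)
```

Because the markers are just comments, the same file works unchanged outside the editor, e.g. via `python script.py`.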

  • @ehsannamjoo8837 • 9 months ago

    Thanks for the great videos. Please add a requirements.txt or a README file to the repository; I need the Python and package versions.

  • @felixkraus1219 • 2 years ago

    In the logistic regression example, the dummy variable trap must be avoided. One would have to take k-1, not k dummies per category (with k categorical values) into the model. Or is this handled automatically in interpret.glassbox?
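The dummy-variable trap the commenter mentions can be sketched without any library: with k dummies per category the dummy columns always sum to 1 and thus duplicate a constant intercept column, while dropping one level (k-1 encoding) removes the exact collinearity. A minimal sketch (the category names are made up; whether interpret.glassbox handles this automatically is the commenter's open question):

```python
categories = ["red", "green", "blue"]          # k = 3 levels
rows = ["green", "red", "blue", "green"]       # toy data

# Full one-hot encoding: k dummy columns per row.
full = [[1 if r == c else 0 for c in categories] for r in rows]
# Every row sums to 1, i.e. the dummies are perfectly collinear
# with a constant intercept column -> the dummy variable trap.
assert all(sum(row) == 1 for row in full)

# k-1 encoding: drop the first level ("red" becomes the baseline).
reduced = [[1 if r == c else 0 for c in categories[1:]] for r in rows]
# Row sums now vary (0 marks the baseline level), so no exact collinearity.
assert {sum(row) for row in reduced} == {0, 1}
```

In pandas this is typically the `drop_first=True` option of `get_dummies`; scikit-learn's `OneHotEncoder` offers `drop="first"` for the same purpose.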

  • @danli1702 • 11 months ago

    How is logistic regression machine learning? Isn't it just old-school statistical fitting? 😁

    • @DeepFindr • 11 months ago

      Well a neural network is also just a bunch of logistic regressions ;-) but yeah you're right

  • @marya_haji • 3 years ago

    Can we access the code shown in the videos?

    • @DeepFindr • 3 years ago • +3

      Yes :) I've uploaded it to GitHub.

  • @البداية-ذ1ذ • 3 years ago

    Hello, I was wondering about the overall mean feature importance. Couldn't I get the same thing by using a feature correlation matrix to figure out the relation between the results and the inputs? I found both to be equal; correct me if I am wrong, please.

    • @DeepFindr • 3 years ago • +1

      Hi :) Yes, that's true in some cases. But classical correlations only capture linear relationships; what if there is a non-linear relationship between an input and the output?
      You are right, though: correlation can also be seen as a simple form of explainability.
      Also, you have to distinguish between what is in the data (we use correlation for that) and what the model learns (XAI). So we use explainable AI rather for model evaluation.
      And finally, if you have more complex input such as images, text, or graphs, calculating correlations doesn't necessarily make sense (for example, each pixel value with the output).
      I hope that answers the question :)

    • @البداية-ذ1ذ • 3 years ago

      Amazing points that you highlight, thanks!
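The point about correlation missing non-linear relationships can be shown with a tiny example: y = x² on a range symmetric around zero is perfectly determined by x, yet its Pearson correlation with x is (essentially) zero. A minimal sketch in plain Python:

```python
def pearson(xs, ys):
    # Textbook Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

xs = [i / 10 for i in range(-50, 51)]   # symmetric around 0
ys = [x ** 2 for x in xs]               # fully determined by x

# A perfect functional dependence, yet near-zero linear correlation.
assert abs(pearson(xs, ys)) < 1e-9
```

This is why a feature can matter a great deal to a model while showing up as ~0 in a correlation matrix, which is the gap that model-based importance scores aim to close.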

  • @Aften_ved • 2 months ago

    1:00

  • @allaalzoy2010a • 2 years ago • +1

    Thanks for your video. Can we assume that the final prediction (0 or 1) is the sum of the values of the orange and blue bars? Also, what does it mean if the values of the negative bars equal those of the blue bars? My guess is that the model is confused for that instance and its prediction is random. Does that sound correct? Thanks

  • @sRahsa • a year ago

    If we increase xJ by one unit, the odds will be multiplied by exp(betaJ), if I'm not wrong.
    My impression from the video was that it would be added, but then I consulted the book, did the math, and saw the multiplication.
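The multiplicative effect the commenter describes can be verified numerically: in logistic regression, a one-unit increase in x_j multiplies the odds p/(1-p) by exp(beta_j); nothing is added on the probability scale. A minimal sketch (the coefficient values are made up for illustration):

```python
import math

beta0, beta_j = 0.3, 0.7          # illustrative coefficients

def prob(x):
    # Logistic model: p = sigmoid(beta0 + beta_j * x)
    y = beta0 + beta_j * x
    return 1.0 / (1.0 + math.exp(-y))

def odds(x):
    p = prob(x)
    return p / (1.0 - p)

# A one-unit increase in x multiplies the odds by exp(beta_j) ...
assert abs(odds(2.0) / odds(1.0) - math.exp(beta_j)) < 1e-9
# ... whereas the probability change is non-linear and is NOT beta_j.
assert abs((prob(2.0) - prob(1.0)) - beta_j) > 0.1
```

Equivalently, the coefficient beta_j is additive only on the log-odds (logit) scale, which is why exponentiated coefficients are read as odds ratios.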

  • @nimsarafernando8084 • a year ago

    🎯 Key Takeaways for quick navigation:
    00:00 🎥 Introduction to Explainable AI (XAI) and its significance in understanding machine learning models.
    01:12 📈 Growing interest in Explainable AI due to the need for transparency, validation, and user experience.
    02:09 ⚖️ Trade-off between complex non-linear models and interpretable models; the challenge of understanding neural networks.
    03:33 🧠 Model-agnostic vs. model-specific methods; distinction between global and local explanations.
    04:56 📊 Different types of explanations: feature importance, correlation plots, data points, surrogates.
    05:37 📚 Upcoming topics in the series: interpretable machine learning models, LIME, SHAP, counterfactual explanations, and layer-wise relevance propagation.
    06:46 📺 Independence of video topics, introduction to a practical example, and mention of causal reasoning in the context of counterfactuals.
    Made with HARPA AI

  • @zeno.x-edit02 • a year ago

    Can we use EBM for multi-class classification?