Explainable AI explained! | #1 Introduction

  • Published Sep 27, 2024

COMMENTS • 25

  • @silviaz.3887
    @silviaz.3887 3 years ago +6

    This is a great series. Eye opening indeed. Thanks. Look forward to new videos on other ML subjects.

  • @sohailpatel7549
    @sohailpatel7549 11 months ago

    Thanks for this video and playlist. It's really hard to get this info in such a simple manner.

  • @phurbayenden7698
    @phurbayenden7698 2 years ago +1

    Such a great video! Thank you for condensing it and also pointing out directions where we can venture further.

  • @ninjanape
    @ninjanape 1 year ago

    Thank you very much for making this playlist - absolute champion!

  • @thegimel
    @thegimel 3 years ago

    Great video on a very important subject. Thanks!

  • @Anirudh-cf3oc
    @Anirudh-cf3oc 2 years ago

    Thank you for the amazing explanation

  • @chenqu773
    @chenqu773 3 years ago

    This is a great tutorial! Many thanks!

  • @ongsooliew
    @ongsooliew 8 months ago

    Good Stuff!

  • @rottaca
    @rottaca 3 years ago

    Amazing 😍

  • @rajdeepdas283
    @rajdeepdas283 3 years ago

    woowwwww!!

  • @touhidulislamchayan6896
    @touhidulislamchayan6896 3 years ago

    Helpful

  • @tilkesh
    @tilkesh 1 year ago

    Thx

  • @HarrisBaig
    @HarrisBaig 1 year ago

    @3:00 you talk about the psychological question of what makes a good explanation. I am thinking of doing my thesis around this topic, Human-Centered AI or user-friendly AI. Can you direct me to some sources that would help me understand these topics?

  • @البداية-ذ1ذ
    @البداية-ذ1ذ 3 years ago

    Hi, I am not familiar with this topic; it is quite new for me, but I like your way of presenting, as usual. My question: is this something the ML/data science person who builds the model should normally know in order to analyze it, or could someone work in this field separately, just testing and analyzing the model?

    • @DeepFindr
      @DeepFindr  3 years ago

      Hi and thanks! It depends :) In my experience, the person who builds a model also works on interpreting it. This is to make sure the model works as intended - basically it's a verification step after developing the algorithm. But I can imagine that other people also work on the XAI tasks, especially in larger companies where larger models are used

    • @DeepFindr
      @DeepFindr  3 years ago +1

      By the way: more and more companies and frameworks have also started to include interpretability tools. E.g. TensorFlow has the What-If Tool, and Google has Explainable AI for Google Cloud.

    • @DeepFindr
      @DeepFindr  3 years ago +1

      cloud.google.com/explainable-ai here for example

  • @rivershi8273
    @rivershi8273 2 years ago

    How can I get in touch with you to ask you some questions?

    • @DeepFindr
      @DeepFindr  2 years ago

      deepfindr@gmail.com :)

    • @rivershi8273
      @rivershi8273 2 years ago

      @@DeepFindr Thanks, I've sent my questions to your email

  • @keshav1989
    @keshav1989 3 years ago +1

    Hi, this series is extraordinarily good. Very well explained. Thank you.

  • @martinwohlan4891
    @martinwohlan4891 2 years ago

    Hi, first of all thanks for the nice video! I have a question: is the base value given by the SHAP explainer the frequency of a certain class (stroke in this case) in the data the model was trained on?

    • @DeepFindr
      @DeepFindr  2 years ago

      Hi! Thanks :)
      Yes, exactly - the base value is calculated as the average model output over the training dataset, as far as I know.
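      The relationship described in this reply can be sketched in a few lines of plain Python. This is a minimal illustration with hypothetical numbers (not the shap library itself), assuming the common convention that the base value is the mean model output over the background/training data:

      ```python
      # Hypothetical model outputs (e.g. predicted stroke probability)
      # for every sample in the training set:
      train_predictions = [0.05, 0.10, 0.02, 0.40, 0.08]

      # The base value (called expected_value in the shap library) is the
      # average model output over the background/training data:
      base_value = sum(train_predictions) / len(train_predictions)
      print(round(base_value, 2))  # 0.13

      # SHAP's additivity property: for a single instance, the per-feature
      # SHAP values sum to (model output for that instance - base value).
      instance_prediction = 0.40
      shap_values = [0.20, 0.05, 0.02]  # hypothetical per-feature attributions
      assert abs(sum(shap_values) - (instance_prediction - base_value)) < 1e-9
      ```

      So for a classifier the base value is not the raw class frequency per se, but the average of whatever the model outputs (probabilities, log-odds, etc.) over the background data - for a calibrated probability model those two can coincide.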

  • @dheerajvarma4821
    @dheerajvarma4821 2 years ago

    Great explanation and a great series

  • @geoffbenstead6461
    @geoffbenstead6461 3 years ago

    Great intro and series to XAI - thanks