The Math Behind Bayesian Classifiers Clearly Explained!

  • Published 3 Jul 2024
  • In this video, I've explained the math behind Bayes classifiers with an example. I've also covered the Naive Bayes model. (A minimal code sketch of the decision rule follows the links below.)
    #machinelearning #datascience
    For more videos please subscribe -
    bit.ly/normalizedNERD
    Support me if you can ❤️
    www.paypal.com/paypalme2/suji04
    www.buymeacoffee.com/normaliz...
    The math behind GANs -
    • The Math Behind Genera...
    Source code -
    github.com/Suji04/NormalizedN...
    3blue1brown -
    / @3blue1brown
    Facebook -
    / nerdywits
    Instagram -
    / normalizednerd
    Twitter -
    / normalized_nerd
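
    A minimal sketch (not the repository's source code) of the decision rule the video derives: naive Bayes predicts argmax_y P(Y=y) * prod_i P(X_i=x_i | Y=y), with each factor estimated by counting. The toy data and names below are illustrative assumptions.

    from collections import Counter, defaultdict

    def fit(X, y):
        """Estimate class priors and per-feature value counts from labeled data."""
        priors = {c: n / len(y) for c, n in Counter(y).items()}
        counts = defaultdict(Counter)  # (feature index, class) -> value counts
        for xs, c in zip(X, y):
            for i, v in enumerate(xs):
                counts[(i, c)][v] += 1
        return priors, counts

    def predict(xs, priors, counts):
        """Return the class maximizing prior * product of estimated conditionals."""
        best, best_score = None, -1.0
        for c, prior in priors.items():
            score = prior
            for i, v in enumerate(xs):
                n = sum(counts[(i, c)].values())
                score *= counts[(i, c)][v] / n if n else 0.0
            if score > best_score:
                best, best_score = c, score
        return best

    # Toy data: two binary features, two classes.
    X = [(0, 1), (1, 1), (0, 0), (1, 0)]
    y = [1, 1, 0, 0]
    priors, counts = fit(X, y)
    print(predict((0, 1), priors, counts))  # -> 1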

COMMENTS • 83

  • @BrianAmedee
    @BrianAmedee 3 years ago +48

    'Clearly Explained' - and it actually was. Thanks man

  • @hayleyH997
    @hayleyH997 2 months ago +1

    How did he manage to explain something that a 1-hr lecture couldn't?! Thanks mate

  • @pradyumnabada5118
    @pradyumnabada5118 1 year ago +9

    Dude.. I lost count of the videos I watched to understand this, but after seeing your video the struggle finally ended. Thank you so much!

  • @jaster_mereel7657
    @jaster_mereel7657 3 years ago +28

    This was a very clear explanation indeed. Thank you!

  • @bluestar2253
    @bluestar2253 3 years ago +20

    One of the best explanations I've ever seen!

  • @guangruli4486
    @guangruli4486 2 years ago +3

    Very clearly explained, thank you!

  • @sye9522
    @sye9522 2 months ago

    HUGE thanks for perfectly delivering the whole concept in one video bro!!

  • @sopegue
    @sopegue 2 years ago

    It was clearly explained, as mentioned in the title. Thanks a bunch!!!

  • @jefersondavidgalloaristiza3410
    @jefersondavidgalloaristiza3410 7 months ago

    Very nice explanation and perfect illustrations!!

  • @EduAidClassroom
    @EduAidClassroom 2 years ago +2

    LOVED IT!!!
    Awesome Explanation! Can't thank you enough...

  • @parisaghanad8042
    @parisaghanad8042 2 years ago

    That was great! I'm really glad that I found your channel. Thanks a lot 👍👍

  • @dannysammy8972
    @dannysammy8972 2 years ago +2

    Yes, this was actually well explained. Thank you :)

  • @leolei9352
    @leolei9352 2 years ago

    Very clear explanation!

  • @sobana653
    @sobana653 1 year ago

    Nicely explained!

  • @nikolai228
    @nikolai228 4 months ago

    Amazing video. thanks.

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 year ago

    This is really well explained.

  • @sayonsom
    @sayonsom 1 year ago +1

    Great explanation :)

  • @thebjjtroll6778
    @thebjjtroll6778 2 years ago

    Amazing teaching skills

  • @swethanandyala
    @swethanandyala 2 years ago

    very nice explanation thank you so much

  • @aurorasart9458
    @aurorasart9458 2 years ago +1

    Thank you very much for your work! Nice explanation!

  • @atulyadav9712
    @atulyadav9712 2 years ago

    Great explanation

  • @dzmitryk9658
    @dzmitryk9658 2 years ago

    Awesome! Thank you.

  • @vojinivkovic9533
    @vojinivkovic9533 2 years ago

    great explanation

  • @arielalvarez88
    @arielalvarez88 3 years ago +1

    Really good work, congrats

  • @fmt2586
    @fmt2586 2 years ago

    hey, thanks man, very clear explanation.😀😀

  • @dpaul3447
    @dpaul3447 1 year ago +1

    Thank you so much man!!

  • @hasben0
    @hasben0 1 year ago

    Well done👊👊

  • @mehditavakoli2492
    @mehditavakoli2492 1 year ago

    Thank you!

  • @daniilsukhov3068
    @daniilsukhov3068 3 years ago

    bro, best explanation I could find

  • @miusukamadoto6805
    @miusukamadoto6805 1 year ago

    Thank you very much for the video. Clearly explained indeed; the only part I couldn't completely get was the discretization.

  • @telusukondifirstuu9221
    @telusukondifirstuu9221 2 years ago

    I love this explanation 😍🥰😘
    Thanks a lot ❤

  • @adityaprasad3356
    @adityaprasad3356 1 year ago +1

    very helpful🥺🥺

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago

    Great explanation.

  • @SarahGhiyasi
    @SarahGhiyasi 1 year ago

    Thank u it was great.

  • @sumedha1051
    @sumedha1051 1 year ago

    love this!

  • @imadeit6587
    @imadeit6587 3 years ago +1

    I appreciate your work

  • @MrDaniel560
    @MrDaniel560 1 year ago

    HELPFUL!!!!

  • @aditya.singh9
    @aditya.singh9 3 years ago

    truly amazing

  • @nickgannon7466
    @nickgannon7466 2 years ago

    well done

  • @AnasHawasli
    @AnasHawasli 4 months ago

    Great video man, great.
    Here is a sub

  • @radoyapanic998
    @radoyapanic998 2 years ago +1

    In the last part of the video you said we can fit a known distribution to a continuous set of data. However, you then wrote that the probabilities can be calculated by taking the product of the pdf evaluated at different values of the feature and label. The pdf does not provide probabilities, though; it needs to be integrated to give the probability of an event. This part of the video seems imprecise.
    However, the video in general was great. Thanks.
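
    The comment above is fair: a pdf value is a density, not a probability. For classification, though, only the comparison between classes matters, and the shared normalizer P(X=x) cancels, so density values suffice for the argmax. A rough sketch under a Gaussian class-conditional assumption (the priors, means, and stds below are made up):

    from scipy.stats import norm

    def class_score(x, prior, means, stds):
        """Prior times the product of per-feature Gaussian density values."""
        score = prior
        for xi, m, s in zip(x, means, stds):
            score *= norm.pdf(xi, loc=m, scale=s)  # a density, not a probability
        return score

    x = [0.2, 1.7]
    s0 = class_score(x, 0.5, means=[0.0, 2.0], stds=[1.0, 1.0])
    s1 = class_score(x, 0.5, means=[1.0, 0.0], stds=[1.0, 1.0])
    # The shared P(X=x) would divide both scores equally, so it cannot change the winner.
    print(0 if s0 > s1 else 1)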

  • @xiaoyongguo1133
    @xiaoyongguo1133 5 months ago

    9:37 you made the conclusion based on P(X=[0,2] | Y); I think the correct way is to calculate P(Y | X=[0,2]). In case P(Y=1) is very small, the answer can be Y=0.
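
    A small numeric illustration of this comment's point, with made-up numbers: the likelihood alone favors Y=1, but after weighting by a small prior P(Y=1) the posterior still picks Y=0.

    # Hypothetical numbers, chosen to show the prior flipping the decision.
    lik = {0: 0.10, 1: 0.30}    # P(X=[0,2] | Y=y): likelihood alone favors Y=1
    prior = {0: 0.95, 1: 0.05}  # P(Y=y): but Y=1 is rare

    unnorm = {c: lik[c] * prior[c] for c in (0, 1)}
    total = sum(unnorm.values())
    posterior = {c: p / total for c, p in unnorm.items()}  # P(Y=y | X=[0,2])
    print(max(posterior, key=posterior.get))  # -> 0, despite P(X|Y=1) > P(X|Y=0)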

  • @DANstudiosable
    @DANstudiosable 4 years ago +1

    Well explained, a quick revision for Naive Bayes. I forgot why it was called Naive until I watched this video 😂😂

  • @zouhir2010
    @zouhir2010 3 years ago

    thumbs up
    thanks

  • @lucasqwert1
    @lucasqwert1 10 months ago

    In the last part, at minute 11: what is the function f used to fit a known distribution? Thank you for answering!
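
    One plausible reading (an assumption; the video's exact function isn't reproduced here): "fitting a known distribution" typically means picking a parametric family, e.g. a Gaussian, and estimating its parameters per class by maximum likelihood, for instance with scipy.stats.norm.fit:

    import numpy as np
    from scipy.stats import norm

    # Hypothetical values of one continuous feature within one class.
    feature_values = np.array([1.2, 0.8, 1.5, 1.1, 0.9])

    mu, sigma = norm.fit(feature_values)       # maximum likelihood: sample mean and std
    print(mu, sigma)
    print(norm.pdf(1.0, loc=mu, scale=sigma))  # density value used in the class score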

  • @muhammadzubairbaloch3224
    @muhammadzubairbaloch3224 4 years ago +3

    Sir, please post more lectures.
    I am seeing your lectures after too many days.
    Please make some advanced NLP and CV lectures, or AI lectures. Thanks

    • @NormalizedNerd
      @NormalizedNerd  4 years ago +1

      I will try my best to upload more frequently.

  • @sayantansadhu6380
    @sayantansadhu6380 4 years ago +2

    It was like a revision for class 12 probability 😁😁

  • @quanghuynh1570
    @quanghuynh1570 11 months ago

    you saved me

  • @mahirjain8898
    @mahirjain8898 7 months ago +1

    so goood

  • @PritishMishra
    @PritishMishra 3 years ago +1

    If I search for any ML algorithm, I first check your channel to see if you have made a video on it... You are my first preference for ML/DL algorithm explanations. Just a request: please make videos on deep learning algorithms too, like CNN, RNN & LSTM "from scratch". It will really help people who want to become practitioners in AI, like me.

    • @NormalizedNerd
      @NormalizedNerd  3 years ago +1

      Thank you so much ❤
      Writing CNNs and RNNs from scratch is pretty hectic... maybe some day I'll try.

    • @PritishMishra
      @PritishMishra 3 years ago

      @@NormalizedNerd Waiting... you are our only hope who can teach us the mathematics of ML with cool animations. That's why I requested you! Thanks.

  • @high_fly_bird
    @high_fly_bird 1 year ago

    The explanation is so cool! But it would be even cooler if you added some examples with continuous features and fitting a distribution; this part wasn't so clear...

  • @aymericalixe1310
    @aymericalixe1310 3 years ago +4

    Maybe I'm wrong, but I think the hypothesis is not that X1 and X2 are independent but that X1 and X2 are conditionally independent. It was very clear otherwise, thank you!

    • @NormalizedNerd
      @NormalizedNerd  3 years ago

      In naive Bayes, every feature is treated as an independent feature; that's why it's called naive.

    • @chitranghosal879
      @chitranghosal879 11 months ago

      I think the hypothesis is that you assume each feature to be (w.r.t. other features)
      1) globally independent (in the global sample space)
      2) conditionally independent w.r.t. the occurrence of each class label (under the subset sample space where the particular class event has occurred)
      If these assumptions are not met, then it does not seem possible to build the mathematics, because as far as I can see,
      if events A and B are independent, that does not naturally imply conditional independence between events (A|C) and (B|C).
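
      A tiny numeric check of the thread's main point, with made-up probabilities: naive Bayes factorizes the class-conditional as P(x1, x2 | y) = P(x1 | y) * P(x2 | y) (conditional independence). The features need not be marginally independent, and in this sketch they are not:

      import itertools

      # Hypothetical joint P(x1, x2, y) built so that X1 and X2 are independent given Y.
      p = {}
      for y, py in ((0, 0.5), (1, 0.5)):
          p1 = 0.9 if y else 0.1  # P(X1=1 | y)
          p2 = 0.9 if y else 0.1  # P(X2=1 | y)
          for x1, x2 in itertools.product((0, 1), repeat=2):
              a = p1 if x1 else 1 - p1
              b = p2 if x2 else 1 - p2
              p[(x1, x2, y)] = py * a * b  # P(y) * P(x1|y) * P(x2|y)

      # Marginally, X1 and X2 are NOT independent in this construction:
      p_x1 = sum(v for (x1, _, _), v in p.items() if x1 == 1)
      p_x2 = sum(v for (_, x2, _), v in p.items() if x2 == 1)
      p_both = sum(v for (x1, x2, _), v in p.items() if x1 == 1 and x2 == 1)
      print(p_both, p_x1 * p_x2)  # 0.41 vs 0.25 -> marginally dependent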

  • @Fuktron13
    @Fuktron13 2 years ago

    I wish you were my professor

  • @kunalsoni7681
    @kunalsoni7681 10 months ago

    Nice ⭐⭐⭐⭐⭐

  • @signature445
    @signature445 3 years ago +1

    Sir, is it like this:
    the Bayesian classifier deals with conditional probability?
    the naïve Bayes classifier deals with joint probability?
    Thanks in advance.....

  • @Ilham-lj3me
    @Ilham-lj3me 1 year ago

    And how about Gaussian NB?

  • @plumSlayer
    @plumSlayer 1 year ago

    You areee amazing. I love your Indian Bengali accent (just a guess hehe, make me a voice analyzer if I am right XD)

  • @harshitdtu7479
    @harshitdtu7479 2 months ago

    10:37

  • @10xGarden
    @10xGarden 4 years ago +1

    3b1b's bro is here

  • @pushandeb187
    @pushandeb187 9 months ago

    liked that

  • @abdulkarim.jamal.kanaan
    @abdulkarim.jamal.kanaan 3 years ago

    Hello people from the future! :D

  • @anon_148
    @anon_148 2 years ago

    independent moment

  • @mahedihassanrafin7493
    @mahedihassanrafin7493 9 months ago

    just quit confusing people

  • @davidmurphy563
    @davidmurphy563 8 months ago

    Ok, I've given up on the video after 45 secs. You said "stated clearly"; if you hadn't, I'd have kept watching.
    You point to an array of features called X. What are they? Are they features of the array itself (its size / rank / dimension?), are they features of the thing the array is describing (measurements in a house?), or a list of possible attributes (the ingredients on a pizza?) Then you introduce a label. So what, is this like a Python dictionary?
    Plus, I've no idea what sort of issue we're supposed to be tackling. Is it probability? Is it rationality with limited knowledge? I only guess that because I've heard of Bayes before.
    Instead you launch into calculations when I have not the first idea what you're calculating. Why would I listen to that?
    Tell you what, I'll give it another 30 secs. If there's no illustrative example / clear explanation of what the hell you're covering, I'm gone.

    • @davidmurphy563
      @davidmurphy563 8 months ago

      Nope, 30 secs later and it's absolute horseshit.