Naïve Bayes Classifier - Fun and Easy Machine Learning

  • Published 9 Jan 2025

COMMENTS • 225

  • @Augmented_AI
    @Augmented_AI  3 years ago +4

    ⭐ If you enjoy my work, I'd really appreciate a Coffee😎☕ - augmentedstartups.info/BuyMeCoffee

    • @sunway1374
      @sunway1374 2 years ago

      5:41 Hi. I understand the Bayes formula leads to the final two numbers in green. But can we explain why the sum of the two probabilities is not equal to 1? Is there a non-zero intersection of P(yes|X) and P(no|X)? If yes, what does it even mean? Thanks!

  • @christopherchan5357
    @christopherchan5357 5 years ago +2

    My professor spent several hours on this and I had no idea what he was talking about; I watched your 12-minute video and fully understood it. Thank you for guiding me through my assignment. I was struggling until I watched your video.

  • @teckyify
    @teckyify 5 years ago +9

    The most important point for NB is that it can be trained incrementally as new evidence comes in. That is a giant drawback of other classifiers, which have to be retrained on the whole data set.
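
    A minimal sketch of that incremental idea, using scikit-learn's MultinomialNB, whose partial_fit method accepts new mini-batches without refitting on old data (the toy counts below are invented purely for illustration):

        import numpy as np
        from sklearn.naive_bayes import MultinomialNB

        clf = MultinomialNB()

        # First batch of evidence: rows are feature-count vectors, y the class labels.
        X1 = np.array([[2, 1, 0], [0, 1, 3]])
        y1 = np.array([0, 1])
        clf.partial_fit(X1, y1, classes=np.array([0, 1]))  # classes required on first call

        # New evidence arrives later; its counts are folded in incrementally.
        X2 = np.array([[1, 0, 2]])
        y2 = np.array([1])
        clf.partial_fit(X2, y2)

        print(clf.predict(np.array([[2, 0, 0]])))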

  • @apoorvasrini2196
    @apoorvasrini2196 6 years ago +8

    This video was so, so useful to me. I was breaking my head over a bad video from my university course, and after watching this it all became so simple. Keep up the good work!!

    • @Augmented_AI
      @Augmented_AI  6 years ago

      Thank you so much, it means a lot to me :). I really appreciate it.

  • @groovytau
    @groovytau 6 years ago +1

    Nice video; apparently I understand it better from you than from my teacher. The fact that you use illustrations helps me a lot to visualize the idea and better understand how it works.

  • @GauravKumar-vu9td
    @GauravKumar-vu9td 6 years ago +1

    Watched for 20 seconds and I knew I had to subscribe immediately if I wanted to increase my knowledge at all! Thanks man! Fantastic video for people like me who find it hard to understand from a textbook.

  • @ankitshah008
    @ankitshah008 5 years ago +4

    A lot of effort has been put into creating such a nice explanatory video. Thanks a lot for making it so easy to understand.

    • @Augmented_AI
      @Augmented_AI  5 years ago

      I'm really grateful for your comment☺️ thank you so much.

  • @syedshahab8471
    @syedshahab8471 3 years ago +1

    What an amazing video. If the education system is to be changed, I would very much like it to become like this.
    Enjoyed every second of it. Thanks!

  • @uzKantHarrison
    @uzKantHarrison 5 years ago +1

    Not my favorite type of educational video, but I still liked it because it was extremely easy to understand and quite informative.

  • @usscork
    @usscork 6 years ago +133

    Good tutorial, but I'm fairly sure you made a mistake when calculating P(X) to normalize.
    The value should have been the sum of your initial two equations: 0.0053 + 0.0206 = 0.0259.
    Then dividing 0.0053/0.0259 = 20.5% for Play = Yes
    against 0.0206/0.0259 = 79.5% for Play = No,
    and these probabilities collectively add up to 100%, or 1.
    In your example, you have the probabilities 0.2424 + 0.9421, which is >1 and is just wrong.
    Otherwise, as I said... a good and easy to follow tutorial... so thank you.

    • @ishansoni9819
      @ishansoni9819 6 years ago +6

      This should be a pinned comment :). The normalisation isn't done correctly in this tutorial.

    • @PierLim
      @PierLim 6 years ago +7

      Thanks for this correction. Actually, there isn't a need here to calculate the denominator, as we are only classifying: 0.0206 > 0.0053 already shows that we should not play the game. I suppose it is done for completeness. I agree, a nicely done tutorial with great production values.

    • @barcode628
      @barcode628 6 years ago +2

      It should definitely be a pinned comment; there should even be an annotation on the video or something of that sort... I did it the way he does on my assignment paper and didn't get any points for the exercise for exactly that reason. On the other hand, now I certainly won't make the same mistake in the exam.

    • @werbungschrott4050
      @werbungschrott4050 6 years ago

      Thanks!

    • @barcode628
      @barcode628 6 years ago +1

      +AYUSH RASTOGI Dividing by P(X) (also often referred to as the "evidence") is meant to normalize the values. So yes, after normalization they have to add up to 1. It is true that this normalization can be ignored if it is a constant and you only want to classify an observation, but seeing as that is not how the Naïve Bayes classifier originally works, and that this is a teaching video, he should probably apply the algorithm correctly.
      Also, instead of just not dividing by the evidence at all (which is, as said, what some people do to avoid unnecessary effort), he uses a completely wrong value for P(X). So I guess it's safe to say that this is actually a mistake and not just "saving time".
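
    A quick numeric check of the correction in this thread, using the two unnormalized scores quoted from the video (0.0053 for Yes, 0.0206 for No):

        # Unnormalized posteriors: P(X|C) * P(C) for each class.
        score_yes = 0.0053
        score_no = 0.0206

        # The evidence P(X) is the sum of these scores over all classes,
        # so the normalized posteriors are guaranteed to sum to 1.
        evidence = score_yes + score_no
        p_yes = score_yes / evidence  # ~0.205
        p_no = score_no / evidence    # ~0.795
        print(p_yes, p_no, p_yes + p_no)  # the last value is exactly 1.0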

  • @asadulhaqmshani4737
    @asadulhaqmshani4737 5 years ago +3

    To beginners (like myself): I suggest you watch this video several times if you don't understand it at first. Also, learn about this concept from another source and then come back to this video; it will help you understand more.
    Anyway, this is a great video, thanks!

  • @wajay2006
    @wajay2006 7 years ago

    Best tutorial on Naïve Bayes. Easy to understand.

  • @lizravenwood5317
    @lizravenwood5317 4 years ago +2

    This is the BEST explanation of NB I've ever seen.

    • @Augmented_AI
      @Augmented_AI  4 years ago +1

      Thank you so much. I really appreciate it 😊

  • @FlvckoJr
    @FlvckoJr 4 years ago +2

    wooow, you literally rescued my life 😂😂😂 THANK YOU SO MUCH SIR

  • @krishnakanjee6239
    @krishnakanjee6239 7 years ago +7

    Such an awesome video! You made it look so easy. And your video itself is fun to watch. Thanks!

  • @tajrianbintajahid7561
    @tajrianbintajahid7561 5 years ago

    Best video ever for naive Bayes

  • @kundansahuji
    @kundansahuji 6 years ago

    Awesome video. Finely explained using a numerical example.

  • @nishkaarora6343
    @nishkaarora6343 5 years ago +1

    This is straight fire, I love this video. This is how all of ML should be taught. Kudos!

  • @ephremtadesse3195
    @ephremtadesse3195 7 years ago +12

    It's a nice tutorial; you made it easy to quickly grasp the idea. Thank you!

    • @Augmented_AI
      @Augmented_AI  7 years ago

      +Ephrem Tadesse thank you, I'm glad it was easy to grasp. I appreciate it. :)

  • @jamesturban6944
    @jamesturban6944 4 years ago

    Great video; this guy is amazing at machine learning.

  • @saqibcs
    @saqibcs 5 years ago

    Thank you man. Just watched it before my exam.

  • @kaviramsamy3708
    @kaviramsamy3708 7 years ago

    Best tutorial I've seen for Naïve Bayes. Thanks

  • @SamuelLawson
    @SamuelLawson 5 years ago +1

    Funny thing - I used a Naive Bayes library in Python that attempts to guess whether a statement is positive or negative and gave it two very similar sentences:
    "That is a dog."
    "That is a cat."
    The sentence with 'dog' came back as 67% positive, while the sentence with 'cat' was reported as 58% negative.
    It seems Thomas Bayes preferred dogs! :-D
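
    The commenter doesn't name the library, but one way to run this kind of experiment is TextBlob's NaiveBayesAnalyzer, which is trained on a movie-review corpus; that training domain is likely why short neutral sentences get skewed scores:

        from textblob import TextBlob
        from textblob.sentiments import NaiveBayesAnalyzer

        # NaiveBayesAnalyzer trains on NLTK's movie_reviews corpus the first
        # time it is used (the corpus must be downloaded via nltk beforehand).
        analyzer = NaiveBayesAnalyzer()

        for text in ("That is a dog.", "That is a cat."):
            # .sentiment is a namedtuple: (classification, p_pos, p_neg)
            print(text, TextBlob(text, analyzer=analyzer).sentiment)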

  • @haroldfelipezuluagagrisale3875
    @haroldfelipezuluagagrisale3875 4 years ago

    Great channel for educational videos, the best, very interesting!!

  • @kitsadda
    @kitsadda 7 years ago

    Clearly explained. Looking forward to more such videos.

    • @Augmented_AI
      @Augmented_AI  7 years ago

      +Gopala Krishna thanks Gopala, I will be uploading every week. Please subscribe to see more =)

  • @weicao4101
    @weicao4101 5 years ago

    Respect from China. This tutorial is more useful than the whole hour my professor spent!

  • @relaxingminds3530
    @relaxingminds3530 5 years ago +1

    Really good and a very easy way to teach, thank you so much.

  • @azula203
    @azula203 7 years ago

    Great work, easy to understand, and kept me interested throughout the video

  • @TheExcessivemhz
    @TheExcessivemhz 6 years ago +1

    Indeed it was easy and fun to learn ML. Thanks!

  • @slava-keshkov
    @slava-keshkov 6 years ago +1

    Hi, 7:40: "We can view the probability that we play golf given that it is sunny, P(Yes | Sunny), as equal to the probability that it is sunny given a yes, P(Sunny | Yes), times the probability of it being sunny, P(Sunny), divided by the probability of a yes, P(Yes)." Given the theorem, shouldn't it be P(Yes|Sunny) = P(Sunny|Yes) * P(Yes) / P(Sunny)?
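
    The commenter's form is the standard statement of Bayes' theorem. A quick sanity check, assuming the classic 14-row play-golf table that the video's other numbers imply (9 Yes days out of 14, 5 Sunny days of which 2 are Yes):

        p_yes = 9 / 14             # P(Yes)
        p_sunny = 5 / 14           # P(Sunny)
        p_sunny_given_yes = 2 / 9  # P(Sunny|Yes)

        # Bayes' theorem: P(Yes|Sunny) = P(Sunny|Yes) * P(Yes) / P(Sunny)
        p_yes_given_sunny = p_sunny_given_yes * p_yes / p_sunny
        print(p_yes_given_sunny)   # 0.4, i.e. 2 of the 5 sunny days are Yes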

  • @renammartinez
    @renammartinez 4 years ago +1

    I assume I need a lot of prior knowledge, because I really didn't understand a thing (since I'm looking this up for college, I assume it's downhill from here). Tips on where to begin would be appreciated.

  • @dr.mehdipoorsorkh752
    @dr.mehdipoorsorkh752 6 years ago

    Fantastic presentation. Many thanks.

  • @yhx89757
    @yhx89757 7 years ago

    Awesome! Super easy to understand. Thanks for making this video!

  • @linwyatt9302
    @linwyatt9302 7 years ago

    This is the best tutorial I've seen on YouTube. Please keep uploading new videos!

    • @Augmented_AI
      @Augmented_AI  7 years ago

      +Lin Wyatt I'm glad you're enjoying it :) will keep uploading

  • @mingchengao
    @mingchengao 6 years ago +20

    Nice video. But I have a question: shouldn't P(Play=Yes|X) + P(Play=No|X) be 1?

    • @Joseph_Beck
      @Joseph_Beck 6 years ago +6

      Yes, it should. There is a mistake in the video on that step.

    • @Jack-dx7qb
      @Jack-dx7qb 6 years ago +1

      totally agree

    • @krutznutz1215
      @krutznutz1215 6 years ago

      I was wondering why it was not 1....

    • @dstwo6539
      @dstwo6539 6 years ago +2

      Theoretically it should, but in this case it doesn't have to, because the assumption that the X features are independent is often not the case. In other words, the statement P(X1, X2, ..., Xn|y) = P(X1|y) * P(X2|y) * ... * P(Xn|y) is usually not true. P(y|X1, X2, ..., Xn) + P(y'|X1, X2, ..., Xn) = 1 only holds when you DON'T break P(X1, X2, ..., Xn|y) into P(X1|y) * P(X2|y) * ... * P(Xn|y) when applying Bayes' theorem. Naive Bayes assumes this factorization is true, and in the cases where it is not (most of the time), P(y|X1, X2, ..., Xn) + P(y'|X1, X2, ..., Xn) = 1 no longer holds. So there's really nothing wrong in the video.

    • @saadahmad485
      @saadahmad485 5 years ago

      @@dstwo6539 Nice explanation, thanks!
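
    A concrete way to see where the >1 result comes from, again assuming the classic 14-row play-golf table the video's numbers imply: the video normalized by a P(X) built as a product of marginal feature probabilities, which is not the same as the sum of the two joint scores, so the posteriors need not sum to 1:

        # Evidence as the video computed it: product of marginals for
        # X = (Sunny, Cool, High, Strong).
        p_x_marginals = (5/14) * (4/14) * (7/14) * (6/14)
        print(p_x_marginals)                       # ~0.0219

        print(0.0053 / p_x_marginals)              # ~0.2424, the video's "Yes"
        print(0.0206 / p_x_marginals)              # ~0.9421, the video's "No"
        print((0.0053 + 0.0206) / p_x_marginals)   # ~1.18, hence the sum > 1

        # Evidence that actually normalizes: sum over classes of P(X|C)*P(C).
        print(0.0053 / 0.0259, 0.0206 / 0.0259)    # ~0.205 and ~0.795, summing to 1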

  • @benphua
    @benphua 5 years ago +2

    Thank you so much for this amazing video!

    • @Augmented_AI
      @Augmented_AI  5 years ago +1

      I'm glad that I could help 😊.

  • @mailanbazhagan
    @mailanbazhagan 7 years ago +1

    That was an awesome learning experience with your stuff.... : )

    • @Augmented_AI
      @Augmented_AI  7 years ago

      +M Anbazhagan I'm really glad you enjoyed it :). You can check out my playlist for other fun and easy machine learning tutorials.

  • @rodrik1
    @rodrik1 6 years ago

    Very nice explanation! Thank you so much for the video.

    • @Augmented_AI
      @Augmented_AI  6 years ago

      Thank you, I'm really glad you enjoyed it :)

  • @techwellness6142
    @techwellness6142 5 years ago

    At 3:30, why are you taking only those 5 conditional probabilities for Play=Yes? For example, why didn't you take P(Outlook=Overcast | Play=Yes) = 4/9? Please help.

  • @TheVivek1978
    @TheVivek1978 7 years ago

    Excellent explanations with examples, pros/cons and applicability! Covered it all!

    • @Augmented_AI
      @Augmented_AI  7 years ago

      +Vivek Kumaresan thank you so much, I really appreciate it :)

  • @rampetajraji
    @rampetajraji 7 years ago +3

    Awesome tutorial... this video made my day!

  • @udal100
    @udal100 6 years ago

    YouTube's best Naive Bayes explanation video... love it

    • @Augmented_AI
      @Augmented_AI  6 years ago

      Thank you, I really appreciate the comment :)

  • @kodieswaria528
    @kodieswaria528 7 years ago

    Excellent explanation. Thank you.

  • @lexispanks7189
    @lexispanks7189 4 years ago

    Correct me if I'm wrong, but the C in your function could be confused with the classes 'yes' and 'no'. I've seen some other examples that use C to denote yes and no.

  • @mkgamesartvisuals
    @mkgamesartvisuals 7 years ago

    Your paintings are super cool!

  • @duongminhnguyet7308
    @duongminhnguyet7308 3 years ago

    Why do you choose Outlook = "Sunny", Temperature = "Cool", Humidity = "High", Wind = "Strong" when there is no such row in the table? Or is it a golfing condition?

  • @NKG_Creations
    @NKG_Creations 5 years ago +1

    Excellent, well explained. Thank you, sir.

  • @pascalbercker7487
    @pascalbercker7487 3 years ago +1

    Great video, though I would tone the music down just a tad. The content is superb!

  • @danielbassett7933
    @danielbassett7933 5 years ago

    Brilliant video!

  • @EmapMe
    @EmapMe 5 years ago +1

    Wow, such a fun video!

  • @sajidhasan1161
    @sajidhasan1161 5 years ago

    So easily understood. Thanks!

  • @alexisnlavergne
    @alexisnlavergne 4 years ago +1

    This video made me subscribe to the channel!
    But also, what next? I'd love to know the next step to start mastering this classifier. I'm starting to program some, but is there any good resource to help me debug myself?

  • @danielacarrapico2034
    @danielacarrapico2034 5 years ago +1

    Thank you! It helped a lot

  • @vivekchoudhary8745
    @vivekchoudhary8745 5 years ago

    Best explanation; the thing that sets it apart is that there is both theory and a well-defined numerical example on the data.
    Keep going.

  • @aspdeepak4yt
    @aspdeepak4yt 6 years ago

    Great explanation!!

  • @akino.3192
    @akino.3192 6 years ago +1

    I am fairly new to Naive Bayes; however, shouldn't 'P(Outlook = sunny | Play = Yes)' be interpreted as "the probability that the outlook is sunny given that we play"? And not the other way around?

  • @balamurugann1461
    @balamurugann1461 4 years ago +1

    Thanks for it. Much appreciated.

  • @chloekimball536
    @chloekimball536 6 years ago

    Liked, subscribed, commented. This has to be the best explanation of the naive Bayes classifier ever. Thank you, sir. And cheers! But please clarify why 0.2424 + 0.9421 turns out to be >1.

  • @vaisakhv9916
    @vaisakhv9916 5 years ago +43

    Please decrease the volume of the background music.

    • @Kyodu
      @Kyodu 5 years ago +8

      Or remove it completely; it feels like a yoga tutorial. Although I like your style, I could not watch more than a few minutes.

  • @timobohnstedt5143
    @timobohnstedt5143 5 years ago

    Handy video, and perfect for understanding the mathematical principles we learned in class much better. Thank you :)

  • @mostafanakhaei2487
    @mostafanakhaei2487 5 years ago

    Brilliant. How did you make the slides? It was fun.

  • @Skyfox94
    @Skyfox94 4 years ago

    I think this was a very good overview of how this algorithm works, however given the length of the video I'm assuming a lot of things have been left out. It works nicely as a "primer" - a jumping-off point for people who want to get an idea of how it works.

  • @rajatietl
    @rajatietl 5 years ago +2

    Best video for the intuition behind the ML algorithm.

  • @junaid5388
    @junaid5388 7 years ago +1

    What a nice presentation!!

    • @Augmented_AI
      @Augmented_AI  7 years ago

      +junaid shahid thank you Junaid. I really appreciate it. :)

  • @sembutininverse
    @sembutininverse 4 years ago +1

    thank you 🙏🏻🙏🏻🙏🏻🙏🏻🙏🏻🙏🏻

  • @sasankv9919
    @sasankv9919 6 years ago +1

    Subbed. Thanks for the excellent tutorial.

  • @urjadamodar4093
    @urjadamodar4093 5 years ago +1

    At 3:43: the higher the probability of yes, the higher the probability we can play. Then why did they select the options where the probability is lower, if we want higher chances of playing?

  • @bellicose2009
    @bellicose2009 7 years ago +5

    Brilliant, well presented!

  • @allall02
    @allall02 6 years ago +1

    Thank you, great video!

  • @sandeepranote9711
    @sandeepranote9711 5 years ago +1

    Loved the video. So easy to understand and everything explained beautifully! Thanks a lot for creating this video! :)

  • @jayananuranga5485
    @jayananuranga5485 4 years ago +1

    Very important and easy to learn.

  • @karanshukla6889
    @karanshukla6889 5 years ago +4

    Excellent video, I understood everything.

  • @victorlima8018
    @victorlima8018 6 years ago

    Great video, helped me a lot.

    • @Augmented_AI
      @Augmented_AI  6 years ago

      I'm really glad that it helped 😀

  • @renzocoppola4664
    @renzocoppola4664 6 years ago +7

    Shouldn't it add up to 1? Even though normalization isn't necessary.

    • @Filmsuper95
      @Filmsuper95 5 years ago +1

      Because it assumes independence for all features, even if this is not entirely the case.

    • @moyube7475
      @moyube7475 5 years ago

      Yeah, almost right. Normalization would have shown that P(Play=No|X) = 0.7954 > P(Play=Yes|X) = 0.2046

  • @nazimrazali2773
    @nazimrazali2773 5 years ago

    Thank you. Very easy to understand and learn. Hoping you can make a video on Bayesian networks too, since you already covered naive Bayes and Bayes' theorem :)

  • @kamleshbhalui
    @kamleshbhalui 6 years ago +1

    Nicely Explained!

  • @vijanth
    @vijanth 4 years ago +1

    Really good. In a class of his own.

  • @ssshukla26
    @ssshukla26 4 years ago +1

    Excellent.

  • @abeltan2168
    @abeltan2168 5 years ago

    5:02 Why is the final probability equal to the product of all those probabilities? I'm confused.

    • @vannatrin2303
      @vannatrin2303 5 years ago

      Abel Tan: it is taken as proportional to the probability, as far as I can guess.

  • @a_level_math704
    @a_level_math704 4 years ago +1

    commendable efforts brother! :)

    • @Augmented_AI
      @Augmented_AI  4 years ago

      Thanks man I really appreciate it 😊👍😁

  • @binhnguyenam2546
    @binhnguyenam2546 4 years ago

    There are some mistakes when calculating P(Ci|X) = P(Play=Yes|X) = P(X|Play=Yes)*P(Play=Yes), because we were calculating only one case (Ci -> Play=Yes) for each attribute value. The missing situations include:
    + P(Overcast|Play=Yes), P(Rainy|Play=Yes), P(Hot|Play=Yes)...
    And the same applies to (Ci -> Play=No).
    After calculating P(X|Ci), we calculate P(Ci|X) = P(X|Ci)*P(Ci).
    These probabilities satisfy P(Play=Yes|X) + P(Play=No|X) = 1. In some situations, when X is specified by a condition, this sum will not be 1 or 100%.

  • @ereniacastellan4587
    @ereniacastellan4587 6 years ago

    Thank you, great explanation, it helped me at the last minute.

  • @jasonperhaps
    @jasonperhaps 5 years ago +1

    Gotta learn Bayes.

  • @anubhavsrivastava850
    @anubhavsrivastava850 4 years ago

    I am confused: if we take Outlook = Overcast, then what will the equation be for No? Because it has a zero count, which will make the whole product zero... ???
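
    This is the classic zero-frequency problem; the usual fix is Laplace (add-one) smoothing. A minimal sketch, assuming Overcast never occurs with Play=No in the table (0 of the 5 No rows) and that Outlook has 3 possible values:

        count_overcast_no = 0   # Overcast rows among Play=No
        count_no = 5            # total Play=No rows
        n_outlook_values = 3    # Sunny, Overcast, Rainy

        # Add-one smoothing: add 1 to every count and the number of feature
        # values to the denominator, so no likelihood is ever exactly 0.
        p = (count_overcast_no + 1) / (count_no + n_outlook_values)
        print(p)  # 0.125 instead of 0.0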

  • @elliuslurenzpino2005
    @elliuslurenzpino2005 6 years ago +1

    I'm new to machine learning, and I would just like to know if the Bayes classifier is a non-linear algorithm. Thanks :D

  • @sindhuchinniah4363
    @sindhuchinniah4363 6 years ago

    Good video! Thanks.

  • @kamranshaik7049
    @kamranshaik7049 7 years ago +7

    The best video

    • @Augmented_AI
      @Augmented_AI  7 years ago +1

      +kamran shaik thank you so much. I'm really glad you enjoyed this video. :) I really appreciate it.

  • @aashwinbhushal3467
    @aashwinbhushal3467 4 years ago

    How did you get that formula to calculate P(X)?

  • @naganathavanthianchellam3669
    @naganathavanthianchellam3669 6 years ago +1

    Excellent tutorial!!!! Keep up the great work!!!

    • @Augmented_AI
      @Augmented_AI  6 years ago

      Thank you so much. It means a lot to me :)

  • @luanacs37
    @luanacs37 3 years ago

    Thank you! It helped a lot 🇧🇷

  • @numidian19
    @numidian19 4 years ago

    Great tutorial, thank you. Could you please tell me the name of the software you used to make this tutorial?

  • @SapnaGupta-vv5fr
    @SapnaGupta-vv5fr 5 years ago +1

    Thank you... you make it possible to learn while having fun.

  • @engr.alidawoodi6461
    @engr.alidawoodi6461 5 years ago +2

    It's really cool.

  • @Leo-wt9yd
    @Leo-wt9yd 4 years ago

    Sorry, but at 7:50, shouldn't it be (the probability we play given it's sunny) = (the probability it's sunny given we play) * (the probability we play) / (the probability it's sunny), if the formula is P(A|C) = P(C|A)*P(A)/P(C)?

  • @grigolperadze7732
    @grigolperadze7732 6 years ago +19

    P > 1 is such a fundamental mistake :D please change it.

  • @soumachatterjee9399
    @soumachatterjee9399 5 years ago

    What if a column contains only numerical values? How do you make predictions on that?
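
    For purely numerical columns, the common variant is Gaussian naive Bayes, which models each feature per class with a normal distribution. A minimal sketch with scikit-learn (the toy numbers are invented for illustration):

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        # e.g. temperature and humidity as raw numbers instead of categories
        X = np.array([[21.0, 65.0], [27.0, 90.0], [19.0, 70.0], [30.0, 85.0]])
        y = np.array(["yes", "no", "yes", "no"])

        clf = GaussianNB().fit(X, y)  # fits per-class mean/variance for each feature
        print(clf.predict(np.array([[22.0, 70.0]])))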

  • @AZI3623
    @AZI3623 6 years ago

    Can you tell me which algorithm is best for data mining in CRM?

  • @hocvienitcu
    @hocvienitcu 7 years ago

    Thumbs up. Easy to understand.

    • @Augmented_AI
      @Augmented_AI  7 years ago

      +Vinh T. Nguyen thank you so much, glad you enjoyed it. :)

  • @soufianebenkhaldoun7765
    @soufianebenkhaldoun7765 6 years ago

    Simple and clear tutorial, thank you!

  • @rfajrikahadnisputra9183
    @rfajrikahadnisputra9183 7 years ago +4

    Wind Strong = TRUE ??