NAIVE BAYES CLASSIFICATION ALGORITHM SOLVED NUMERICAL

  • Published Nov 5, 2024

COMMENTS • 84

  • @mohammadrachman1344
    @mohammadrachman1344 1 year ago +1

    There are a lot of videos talking about naive Bayes, but this is the one and only explanation of naive Bayes with numeric data. Thank you.

  • @recordingsp7833
    @recordingsp7833 5 years ago +11

    Best explanation on YouTube for solving a numerical on the naive Bayes classification algorithm.

  • @radhaingle4664
    @radhaingle4664 5 years ago +12

    Thank you very much for the help... One of the best videos on YouTube for this numerical on the naive Bayes classification algorithm. It cleared up the concept. Thanks again.

    • @jasperarian1789
      @jasperarian1789 3 years ago

      You all probably don't give a shit, but does any of you know a trick to log back into an Instagram account?
      I somehow forgot my login password. I would love any tricks you can offer me.

    • @tushar3549
      @tushar3549 2 years ago

      Please visit my channel and subscribe; I will make tutorials like this for beginners, and your support will help me.

  • @vaddadisairahul2956
    @vaddadisairahul2956 4 years ago +9

    The formula for the Normal/Gaussian distribution is wrong; the exponent should be exp(-(A_i - mu_ij)^2 / (2 * sigma_ij^2)). Please stick to the standard formula and don't make your own changes.
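
For reference, a minimal sketch of the standard Gaussian class-conditional density used for a continuous attribute in Gaussian naive Bayes; the mean and variance passed in below are placeholder statistics, not values taken from the video:

    import math

    def gaussian_likelihood(x, mean, var):
        """N(x; mean, var), used as P(attribute = x | class) for a continuous attribute."""
        return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

    # Placeholder usage: density of observing x = 120 under an assumed class mean and variance.
    print(gaussian_likelihood(120.0, mean=110.0, var=2975.0))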

  • @dkrecord6265
    @dkrecord6265 4 years ago +1

    Thank you very much, now I can solve any type of problem on naive Bayes classification 😊... Your videos are really very helpful... Keep uploading more videos...

  • @AmanSingh-js3ho
    @AmanSingh-js3ho 5 years ago +6

    Thank you bro, now I can solve any type of problem on naive Bayes classification 😊

  • @prashantgore4725
    @prashantgore4725 5 years ago +1

    That's a clear and crisp explanation... thanks for uploading... it made things easy for me... thanks, Mr. Yogesh Murumkar.

  • @abhishekkale3589
    @abhishekkale3589 4 years ago +4

    This is perfect!!! 10 out of 10... amazing simplicity!!! It really helped me a lot... thank you!!!

    • @adharshrajtony5316
      @adharshrajtony5316 1 year ago

      Tq

  • @akshaysahastrabuddhe1558
    @akshaysahastrabuddhe1558 5 years ago +1

    That's simply an awesome explanation... thanks a lot for uploading... the best video for solving a numerical on the naive Bayes classification algorithm.

  • @saurabhsarage6554
    @saurabhsarage6554 5 years ago +5

    Thank you sir....... Simple explanation.....

  • @shamaligunje5959
    @shamaligunje5959 2 years ago

    Best explanation! I understood everything. It will surely help me in my exams. Thank you so much!!

  • @radhaingle4664
    @radhaingle4664 5 years ago +9

    Really appreciated... Best

  • @summ4628
    @summ4628 4 years ago +3

    Excellent

  • @sganguly01
    @sganguly01 3 years ago +1

    This is the best explanation of N.B. out of the many that I have seen! Great job.

  • @wussboi
    @wussboi 3 years ago

    Many years!!! Wish you good health!! Saved my life...

  • @suyashjadhav3756
    @suyashjadhav3756 4 years ago +3

    P(Income=120k|Class=Yes) = 0.145 not 1.2*(10^-9)
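
For what it's worth: if the video follows the usual textbook version of this dataset, where Taxable Income given Class = Yes has sample mean 90 and sample variance 25 (an assumption; the table is not reproduced in this thread), the Gaussian density at 120 is exp(-(120 - 90)^2 / (2 * 25)) / sqrt(2 * pi * 25) = e^(-18) / 12.53 ≈ 1.2 * 10^-9, which agrees with the video's figure rather than 0.145 under those statistics.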

  • @omkaravghade9732
    @omkaravghade9732 1 year ago

    very nice explanation sir

  • @bharatsoft8886
    @bharatsoft8886 5 years ago +6

    awesome.....

  • @adityapatil1245
    @adityapatil1245 1 year ago +1

    In the calculation of the normal distribution for “yes”, where did the 10^(-9) come from?

  • @tejaswinimudholkar1383
    @tejaswinimudholkar1383 5 years ago +8

    Thank you sir...

  • @ellenamori1549
    @ellenamori1549 2 years ago

    Thank you for such a great lecture. It helps a lot to understand my course material.

  • @TechSupport807
    @TechSupport807 1 year ago

    Very good explanation, but the frequency table was not calculated!
    Our class teacher taught us with discrete values only, not with a continuous attribute like salary.
    By the way, a very good explanation 🤎

  • @mayankdhiman8892
    @mayankdhiman8892 3 years ago

    You are our only support...

  • @01_mayuriadhao75
    @01_mayuriadhao75 4 years ago +2

    Awesome man.... excellent

  • @eathealthywithumang
    @eathealthywithumang 3 years ago +1

    Thank you so much sir!!! Great explanation

  • @erompanhale24
    @erompanhale24 5 years ago +3

    Sir, please upload a video on impurity measures (Gini index and entropy).

  • @aashishpokharel9895
    @aashishpokharel9895 1 year ago

    Why has the formula for the Normal distribution been modified here? Isn't the formula (the normalizing factor) * e^(-(X - u)^2 / (2 * sigma^2))?

  • @sudhadevi3211
    @sudhadevi3211 4 years ago +2

    Good explanation!!

  • @ra4019
    @ra4019 2 years ago

    What will happen if one of the values does not appear in the samples, like taxable income = 80k?
    Will it be undefined, or can we still solve it?

  • @zeynepsenel3265
    @zeynepsenel3265 2 years ago

    Great explanation! Thank you.

  • @mastercomputersciencesubje3181
    @mastercomputersciencesubje3181 4 years ago

    neatly explained

  • @sagarmarri213
    @sagarmarri213 3 years ago

    Awesome explanation bro.

  • @muhammadrafianazim2468
    @muhammadrafianazim2468 2 years ago

    Thanks sir, my homework is 100% the same as this. Hahaha, that makes my day easier.

  • @umamaheswariyarlagadda9033
    @umamaheswariyarlagadda9033 3 years ago

    Hi,
    Laplace smoothing should be done to avoid the zero-probability problem. I would appreciate it if you could make a video on Laplace smoothing in naive Bayes with a solved example (see the sketch after this thread).
    Nevertheless, the video is really helpful.

    • @bluestar2253
      @bluestar2253 2 years ago

      That's correct. Laplace smoothing should have been used in this example.
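
A minimal sketch of Laplace (add-one) smoothing for a categorical attribute, with made-up counts purely for illustration (they are not taken from the video's table):

    def smoothed_conditional(count_value_and_class, count_class, num_attr_values, alpha=1.0):
        """P(attribute value | class) with Laplace smoothing: add alpha to every count."""
        return (count_value_and_class + alpha) / (count_class + alpha * num_attr_values)

    # Hypothetical case: a value never observed with this class (count 0), the class seen
    # 3 times, and an attribute with 2 possible values -> 0.2 instead of an exact zero.
    print(smoothed_conditional(0, 3, 2))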

  • @lewishoanglong1610
    @lewishoanglong1610 3 years ago

    Your explanation is pretty good! Thank you.

  • @nikhily8646
    @nikhily8646 2 years ago +1

    Huge thanks to u 🙏🙏🙏🙏🙏

  • @biradardhanraj9008
    @biradardhanraj9008 4 years ago

    clearly explained ...thank you so much

  • @misterbean3368
    @misterbean3368 4 years ago

    What if the income given in the instance is not 120k but some other value that is not present in the taxable income column, like 150k?

  • @rohithchintu1120
    @rohithchintu1120 5 years ago +1

    can you do a video on Maximal Frequent Item Set, Closed Frequent Item Set in data mining?
    It will be really helpful to me.

  • @AWSFan
    @AWSFan 4 years ago

    Awesome Explanation!

  • @harshkumaryadav3285
    @harshkumaryadav3285 2 years ago

    Sir,
    How do you calculate the exponential term?
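
For example, the exponential factor e^(-z) in the density can be evaluated with any scientific calculator or in Python; the exponent used below is only an illustration, not necessarily the one from the video:

    import math
    print(math.exp(-18))  # ≈ 1.523e-08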

  • @shahzodamirov3762
    @shahzodamirov3762 2 years ago

    Perfect explanation, thanks a lot !

  • @minalpatil5671
    @minalpatil5671 3 years ago

    can you show how to construct a decision tree for the same dataset?

  • @nikitasinha8181
    @nikitasinha8181 2 years ago

    Thank you so much sir

  • @kirans9983
    @kirans9983 4 years ago +1

    Shouldn't it be P(Class=No|X) ∝ P(No) * P(Refund=No|Class=No) * P(Married|Class=No) * P(Income=120k|Class=No)? You missed out P(No).
    Similarly, you missed P(Yes) for the Class=Yes score.
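
For context, a sketch of how the comparison is usually written in naive Bayes, in generic symbols rather than the video's exact numbers:

    P(Class = c | X)  ∝  P(Class = c) * Π_i P(X_i | Class = c)

so the priors P(Class = Yes) and P(Class = No) do belong in the two scores being compared; they can only be dropped when the two priors happen to be equal.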

  • @bhaktiawasarkar7387
    @bhaktiawasarkar7387 4 years ago

    At the end, which value do we consider: the maximum or the minimum?

  • @techienomadiso8970
    @techienomadiso8970 1 year ago

    How is P(A|D) = P(D|A) * P(A) / P(D) related to P(X|D) = P(A|D) * P(B|D) * P(C|D)?
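
A brief sketch of how the two formulas fit together, reading D as the class and X = (A, B, C) as the attribute vector (generic symbols, not the video's exact notation):

    P(D | X) = P(X | D) * P(D) / P(X)             (Bayes' theorem)
    P(X | D) = P(A | D) * P(B | D) * P(C | D)     (the "naive" conditional-independence assumption)

Bayes' theorem turns the class-conditional likelihood into a posterior over the class, and the naive step is approximating that likelihood by the product of the per-attribute conditionals.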

  • @nathiya597
    @nathiya597 4 years ago +1

    Sir, I want an ID3 problem solved on the same dataset.

  • @k.kaushikreddy1792
    @k.kaushikreddy1792 4 years ago

    super clarity

  • @ali-r2n7t
    @ali-r2n7t 3 years ago

    Thank you so much. I need a decision tree with a continuous attribute.

  • @jamesfernandez5893
    @jamesfernandez5893 4 years ago

    Excellent!!!!

  • @swethakuchana2093
    @swethakuchana2093 4 years ago

    How should we take the X value? 🤔

  • @sevalpatel1997
    @sevalpatel1997 3 years ago

    You are supposed to handle zero probability error in the solution.

  • @pratikjaybhaye1292
    @pratikjaybhaye1292 1 year ago

    A godly man 🍻

  • @VarunHCS
    @VarunHCS 2 years ago

    thanks a lot man

  • @bluestar2253
    @bluestar2253 2 years ago

    Dude, it's not right to have a conditional probability of zero in the multiplication! That's why we use Laplace smoothing. Read up on it!

  • @hidayathidayat4469
    @hidayathidayat4469 4 years ago

    Thank you, it helps me.

  • @kishan6984
    @kishan6984 2 years ago

    0.0072 is not the answer at 22:02.
    I got 0.03928 for the normalization.

    • @NathanLAlvares
      @NathanLAlvares 1 year ago

      Nope, it's correct. Check your calculation again.

    • @kishan6984
      @kishan6984 1 year ago

      @NathanLAlvares I didn't understand the calculation that was done... can you do it step by step?
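
A step-by-step check, assuming the video follows the common textbook version of this dataset in which Taxable Income given Class = No has sample mean 110 and sample variance 2975 (these statistics are an assumption; the table is not reproduced in this thread):

    import math

    mean, var, x = 110.0, 2975.0, 120.0      # assumed class-No statistics and the query value
    exponent = -(x - mean) ** 2 / (2 * var)  # -(120 - 110)^2 / (2 * 2975) ≈ -0.0168
    density = math.exp(exponent) / math.sqrt(2 * math.pi * var)
    print(round(density, 4))                 # ≈ 0.0072 under these assumed statistics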

  • @raghavddps2
    @raghavddps2 4 years ago

    The formula for the normal distribution was wrong

  • @dipanshusharma869
    @dipanshusharma869 5 years ago

    Sir, I think you have calculated the conditional probability wrongly.
    And thanks for making the video.

  • @ashutoshsarode4191
    @ashutoshsarode4191 4 years ago +1

    Kuch...
    Jata long nhi hay

  • @shrenikjadhav9982
    @shrenikjadhav9982 11 months ago

    This is all wrong

    • @rdata8269
      @rdata8269 11 months ago +1

      Don't be jealous, man... he is doing a good job.

  • @rajeshmurumkar1638
    @rajeshmurumkar1638 4 years ago +1

    Great job. You deserve more views