He is one of the best data science teachers. Lots of respect.
I think this is the best channel to study data science algorithms...
11:48 The last three probabilities were written by mistake; they should be 0.063 + 0.063 + 0.027. Don't be confused.
Yeah.
Yes, there is a mistake in the calculation. And if every base model's accuracy is below 51%, the ensemble gets worse: if each is 0.3, the majority-vote accuracy is 3 × 0.063 + 0.027 = 0.216, which is worse than all three models.
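The 0.216 figure above can be checked by enumerating all outcomes for three independent models with accuracy 0.3 each (a minimal sketch; "independent" is the assumption the video's proof relies on):

```python
from itertools import product

p = 0.3  # accuracy of each of the three independent base models

ensemble_acc = 0.0
for outcome in product([True, False], repeat=3):  # True = model correct
    if sum(outcome) >= 2:  # majority vote is correct
        prob = 1.0
        for correct in outcome:
            prob *= p if correct else (1 - p)
        ensemble_acc += prob

print(round(ensemble_acc, 3))  # 0.216 -- worse than any single model
```

This is just 3 × (0.3² × 0.7) + 0.3³, i.e. the probability that at least two of the three weak models are right.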
I think learning from you is way better than any paid course out there.
Thank You Sir.
I think the answer to the question you asked at 5:04 is the wisdom of the crowd!
But you have to prove it mathematically, because ML is all about maths.
I never knew Hindi could be such a powerful tool in this data science journey.
I am from the northern part of India, but job opportunities brought me down south, and I have been living here for a decade now. I think I am fairly good at English, but learning in your native mother tongue is a different feeling. It makes you more powerful and lets you dive deeper. And having a teacher like Nitish ji is magical.
Taking the individual accuracies as 0.8, 0.7, and 0.6 gives 0.788 as the ensemble accuracy. So the ensemble is not guaranteed to have the highest accuracy. I currently don't know the reason for that.
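The 0.788 number can be reproduced by summing the probability of every outcome in which at least two of the three (assumed independent) models are correct, a sketch of the same majority-vote calculation from the video with unequal accuracies:

```python
from itertools import product

accs = [0.8, 0.7, 0.6]  # accuracies of three independent base models

ensemble_acc = 0.0
for outcome in product([True, False], repeat=3):  # True = model correct
    if sum(outcome) >= 2:  # at least two correct => the vote is correct
        prob = 1.0
        for p, correct in zip(accs, outcome):
            prob *= p if correct else (1 - p)
        ensemble_acc += prob

print(round(ensemble_acc, 3))  # 0.788 -- below the best model's 0.8
```

The reason it falls below 0.8 is that the weaker models (0.7 and 0.6) can outvote the strong one; the guarantee in the video assumes all base models are equally accurate and above 0.5.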
I can't believe I completed 84 videos with notes within 14 days. Just a few days more ❤❤
Wanted to thank you before starting the lecture. Thanks a lot, sir.
You mean a lot to me. Always praying for your well-being.
Wonderful lecture.
Thanks for your lecture.
You are a great teacher. The best.
Great video!
It is not always guaranteed that the collective accuracy is higher than the individual ones. If the models within the ensemble make the same errors or share the same weaknesses, the ensemble may not provide significant improvement.
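A toy sketch with made-up numbers illustrates the point above: three models that are each 70% accurate but wrong on the same 30 points gain nothing from majority voting.

```python
n = 100
shared_errors = set(range(30))  # all three models fail on these points

votes_correct = 0
for i in range(n):
    model_correct = [i not in shared_errors] * 3  # identical weaknesses
    if sum(model_correct) >= 2:                   # majority vote
        votes_correct += 1

print(votes_correct / n)  # 0.7 -- no better than any single model
```

With perfectly correlated errors, the vote just reproduces each model's mistakes; diversity among the base models is what lets the ensemble beat them.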
Thank you so much, sir. Your teaching is miraculous.
Next-level explanation, sir.
I really enjoyed that, sir ji.
12:14 Adding up all the probabilities doesn't give 1.
It will. Try again.
@@TheAtulsachan1234 Nope
@123arskas 11:48 The last three probabilities were written by mistake; they should be 0.063 + 0.063 + 0.027. Don't be confused.
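With the corrected terms, the probabilities do sum to 1. A quick check, assuming each of the three base models has accuracy 0.7 as in the video's example:

```python
from math import comb

p = 0.7  # assumed accuracy of each of the three base models

# Probability of exactly k models being correct, for k = 3, 2, 1, 0.
# Note each single "one model correct" outcome is 0.7 * 0.3 * 0.3 = 0.063.
terms = [comb(3, k) * p**k * (1 - p)**(3 - k) for k in (3, 2, 1, 0)]

print([round(t, 3) for t in terms])  # [0.343, 0.441, 0.189, 0.027]
print(round(sum(terms), 3))          # 1.0 -- the full distribution sums to 1
```

The 0.189 row is the three 0.063 outcomes combined; with the mistaken values on the board the sum indeed falls short of 1, which is how the error was spotted.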
Can anyone tell me how the accuracies were calculated here?
Loved the explanation! :)
Which ensemble algorithms perform well on multiclass classification problems?
Boosting is the prodigy child of every ML project, but you can play around with things and get your best result. ML is subjective and varies from dataset to dataset.
Thank you.
But isn't it the case that the points the first classifier predicted wrong are actually difficult points, so all of our models are more likely to get those same points wrong? Just thinking. The mathematical proof is only valid if we treat the models as independent.
No, the points the first classifier predicted wrong are only difficult for that specific model, and sir already said we should use different types of models. So what is difficult for one model might not be difficult for another. What you are saying would be true only if you used the same type of model.
Sir, but the probability example looks like a boosting ensemble.
You're a god.
best
Please improve your audio quality.
Finished watching.
Finished note-making.
The probabilities don't all add up to 1. Also, the point raised by @vishnupsharma50 is valid: the proof given by Nitish Sir doesn't hold here.