0:06-Ensemble Techniques
0:30-Bagging is also called bootstrap aggregation
1:40-In Bagging we divide the data into multiple samples (on the basis of row sampling with replacement), one for each ML model (base learner), which is known as Bootstrap; then the output that is in the majority after running all the models is taken (voting classifier), which is known as Aggregation
2:53-Row Sampling with replacement
4:02-
4:50-Bootstrap
5:14-Aggregation
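The Bootstrap and Aggregation steps from the 1:40 note can be sketched in plain Python. This is a toy example of my own, not from the video: the decision-stump base learner, the dataset, and all names are made up purely for illustration.

```python
import random
from collections import Counter

# Toy 1-D dataset: x in 0..9, class 0 below 5, class 1 from 5 upward.
data = [(x, 0) for x in range(5)] + [(x, 1) for x in range(5, 10)]

def bootstrap_sample(rows, rng):
    # Row sampling with replacement: same size as the original; rows may repeat.
    return [rng.choice(rows) for _ in rows]

def fit_stump(rows):
    # Base learner: a decision stump that splits at the mean x of its sample.
    threshold = sum(x for x, _ in rows) / len(rows)
    return lambda x: 1 if x > threshold else 0

def bag(rows, n_models, seed=0):
    # Bootstrap step: fit one base learner per resampled dataset.
    rng = random.Random(seed)
    return [fit_stump(bootstrap_sample(rows, rng)) for _ in range(n_models)]

def predict(models, x):
    # Aggregation step: majority vote across the base learners.
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

models = bag(data, n_models=5)
print(predict(models, 2), predict(models, 8))  # expect 0 and 1
```

Each stump sees a slightly different resample, so the thresholds differ a little, and the vote smooths out any single bad sample.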
The best explanation of Bagging. I logged in just to write this comment down. Keep it up. Thanks a lot!
2:35 m
Sitting in my MSc AI class in London and watching him because he is just better!
You people again proved that YouTube is the best learning platform. Thank you so much, sir, for letting me be one of your YouTube students❤
just take a moment and appreciate the brilliance of this guy! Once again saved me from reading countless pages...
This is just like the "America's Got Talent"/"India's Got Talent" shows, where we have a participant (as the data) performing his/her act, there are 4-5 judges (as the models), and the participant's selection is based on the judges' reviews.
It's one way of perceiving it that makes it easier to learn.
The way you explained the concepts is very easy and understandable. Keep doing the same. Thanks a lot.
You are a god for the One day exam preparation students 🙏
You have made it super clear. It shows your time investment in the subject. As the saying goes, "If you can't explain it to a six-year-old, then you don't understand it yourself" (Albert Einstein).
The video explains bagging extremely clearly. Thanks for the upload!
I respect your simplicity, and you've earned a thumbs up.
I think if I watch all your playlists I will definitely be confident that I will learn a lot
PS: m > n is how it is; the same sample is taken multiple times.
Because we are doing the selection with replacement. Example: suppose you have a bag with 10 red balls, and you draw 5 balls at a time, but with replacement. After the first draw you put all 5 balls back in the bag, so for the next draw you again have 10 balls in total. That's why.
You explained everything in very simple language... I always watch your videos for machine learning... thank you!
Krish, these are the kind of videos I was looking for. The way you teach is very easy to understand. Thanks for your videos.
Please keep making videos. Your videos are easy to understand and clear up the concepts!
I am doing Master's in USA, thank you for this explanation.
So easy to understand !!!! Thanks.. Greetings from Brazil
Wow, you're an amazing teacher. Well done, I have understood the concepts thanks to you. Thank you very much.
Thanks for explaining in simple words.
Finally I got the concept. Hats off, sir.
Very well explained. I came here because my University professor totally messed up the explanation of this simple technique!
One video and that's it, concept clear.
Thanks a lot sir
You're just amazing; you have the simplest and clearest explanations.
You are a champ!! What an explanation. Thank you so much, sir.
great explanation in simple language!
best explanation ever. You are really good at explaining things. Keep it up. looking forward to more detailed machine learning model videos. Thank you
Thank you so much, sir, for a wonderful explanation. The concept of Bootstrap Aggregation is very clearly and nicely told. Your channel is awesome, great videos.
This is wonderful. Thank you for such a simple and easy understanding explanation sir.
Well explained. Thank you. I was getting confused with the textual explanation.
Best lecture on bootstrapping
This is actually a really nice explanation. Keep it up.
You're so good at explaining !
sir, you are soo good at simplifying and explaining the complex topics
Superb video, Krish, once again. Thanks.
You are the only one that explained this right, thank you very much!
So to say:
a) a fit will be done with the sampled data for each model,
b) there will be n such models to be fit,
c) a prediction will also be made by each model,
d) the results of each model's prediction will be averaged if it is a regression problem, and
e) a vote shall be taken if it is classification.
How does voting go when we have a test set of data? In classification, every model would predict all the cases in the test set; how does voting take place then?
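On the voting question: the vote happens independently per test row. Every base learner predicts every case in the test set, and the majority label for each case wins. A tiny sketch of how I understand it (the three hard-coded prediction lists are purely made up for illustration):

```python
from collections import Counter

# Predictions from three pretend base learners for test rows r0, r1, r2.
model_preds = [
    [0, 1, 1],  # model 1
    [0, 1, 0],  # model 2
    [1, 1, 1],  # model 3
]

# Majority vote, computed column-wise, i.e. once per test row.
voted = [Counter(col).most_common(1)[0][0] for col in zip(*model_preds)]
print(voted)  # → [0, 1, 1]
```

So r0 gets votes (0, 0, 1) and is labelled 0, while r1 and r2 are labelled 1; for regression you would replace the vote with a per-row average.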
Greatly and deeply explained. God bless, and thanks a lot.
Very well and simply explained, Krish.
Sir are you sure in this example data m
Awesome tutorial on BAGGING
Great Explanation .. Thanks Krish
Thank you sir for easy explanation.
This video was very informative, very well explained. Please keep helping students who need help. Be good and take care.
Loved it!!! 💙💙💙
thank you so much for your hard work! this is by far the best explanation I could grab my head around! keep up your good work!!
thank you very much for very good explanation Krish , wish the best for you
Got a clear idea. One small suggestion: it would be nice if you explained the concepts based on a sample dataset consisting of a few rows and columns, for explanation's sake. That would give a more detailed understanding.
Great explaining! I am in a Data Science Master's program, in a data mining class.
You have a great channel man, keep up the good work.
Patrice O'Neal fan?
Excellent Krish.
amazing content, thank you !
Such a good explanation!
Thank you for the explanation. It is not necessary that m should be less than n. It can be equal as well.
You are absolutely right, bro. In bagging we do not subset the training data into smaller chunks and train each tree on a different chunk. Rather, if we have a sample of size N, we are still feeding each tree a training set of size N (unless specified otherwise). But instead of the original training data, we take a random sample of size N with replacement. For example, if our training data was [1, 2, 3, 4, 5, 6], then we might give one of our trees the list [1, 2, 2, 3, 6, 6]. Notice that both lists are of length six and that "2" and "6" are both repeated in the randomly selected training data we give to our tree (because we sample with replacement).
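The [1, 2, 2, 3, 6, 6] example above is easy to reproduce. A minimal sketch with Python's stdlib; the variable names are mine, and the actual values drawn depend on the seed:

```python
import random

rng = random.Random(0)
training = [1, 2, 3, 4, 5, 6]

# Draw a bootstrap sample: same length N as the original, with replacement,
# so some rows typically repeat and others get left out entirely.
boot = [rng.choice(training) for _ in range(len(training))]
print(boot)
```

On average each bootstrap sample contains only about 63% of the distinct original rows; the rest of its slots are filled by repeats.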
Great explanation
yes, very nicely explained. you are very clear, thank you! :)
It was an amazing explanation! Thank you a lot.
Amazing video!
Perfect explanation!
Your explanation is awesome ❣️ and thank you so much for making these videos for us.
I request you to provide your full machine learning notes, so it would be easier for us ❣️✨ and possible to get a high score in machine learning ❣️
Thank you once again ❣️✨
Thanks a lot, your session was helpful.
Indian guys literally save the rest of the world and do the explanation job better than my teacher !
Amazing explanation!
Nice explanation. Thank you!
Awesome krish
I love your videos thank you
you are awesome!! Thanks a lot.
Very clear ❤
thank you so much ❤
Thanks Krish
This is good explanation
Gem of a tutorial!!!
Nice explanation
Nice explanation
Excellent!!!
Thanks A Lot Sir!!
really nice video thank u
would you please link which playlist is this video a part of?
Thanks ❤
Thanks a lot, Mr. Naik.
I have two questions.
The first one is: can we use neural networks instead of decision trees?
The second one is: should m be greater than, equal to, or less than n?
Good explanation. Are these m1, m2, m3 models the same classifier or different classifiers?
Thank you ♥️
When the base learners, i.e. the different models, are created, will the models only be ones that are compatible with solving a binary problem, as you said to consider the model as binary regression?
Subscribed... for your dedication.
What is the difference between bagging, boosting, voting, and stacking? And which of these methods can Random Forest and XGB each be related to?
Really helpful
Hi,
I am from a non-technical background and work in a BPO in a backend process not related to any technology, simply cut-copy-paste.
But I am looking forward to SAS analytics.
Please suggest which techniques I need to learn, and also share some video links.
I don't get why we wouldn't train all the models with all the available training data. If you separate the data randomly, the original distribution will be affected, and models will be randomly good or bad depending on how lucky they were to get a close distribution in the split.
With the whole data only one model can be created, bro. Only if we choose a different sample from the dataset each time will our models give slightly different results, and then we combine the results of the different models.
You said the sampling is of rows, meaning the samples differ in their rows. Does that mean each subset of the original dataset has the same columns as the original dataset?
damn this is an excellent explanation
Thanks for all your good videos. Your explanations are very good. But you don't tell us when it is used and why. What is the goal?
Best video
Could you teach us how to calculate the uncertainty of a regression model for each test data set?
Krish, explain how to reduce error for regression and classification models. Thanks.
Go through the EDA and feature engineering parts; improving them may reduce the error.
Understand the mathematics behind the models
should it not be n >= m instead of n>m ?
Very good explanation. A doubt, if someone could clarify it would be helpful: for the testing data set, will each model also be given different test data?
What if we use a regression technique? Will it still use majority voting, or will it take the average of all the base models?
Sir, you are simply too much!
Sir, when you say at 3:09 row sampling with replacement, do you mean that some values can be repeated across the different D'm? So why not call it row sampling with repetition?
"with replacement" is a statistical lingo for the same thing you mentioned, specifically in terms of probability. Hope this clarifies.