Good tutorial. My thoughts below (hope it adds to someone's understanding):
We perform cross validation to make sure the model has a good accuracy rate and can be used for prediction on unseen/new (test) data. To do so, we split our dataset properly into train and test data, for example 80% for training and 20% for testing the model. This can be performed using train_test_split or K-fold cross validation (K-fold is mostly used to avoid under- and overfitting problems).
A model is considered good when it gives high accuracy on training as well as testing data. Good accuracy on test data means the model will also predict well on new or unseen data, i.e. data that was not included in the training set.
Good accuracy also means that the values predicted by the model will be very close to the actual values.
Bias will be low and variance will be high when the model performs well on the training data but poorly on the test data. High variance means the model cannot generalize to new or unseen data. (This is the case of overfitting.)
If the model performs poorly (less accurate, cannot generalize) on both training data and test data, it has high bias and high variance. (This is the case of underfitting.)
If the model performs well on both test and training data (predictions are close to the actual values for unseen data, so accuracy is high), then bias will be low and variance will also be low.
The best model must have low bias (a low error rate on training data) and low variance (it can generalize and has a low error rate on new or test data).
(This is the case of a best-fit model.) So always aim for low bias and low variance in your models.
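The split-and-evaluate workflow described above can be sketched in a few lines of plain Python. The helper names `split_80_20` and `kfold_indices` are made up for illustration; in practice you would reach for scikit-learn's `train_test_split` and `KFold`:

```python
# Minimal sketch of an 80/20 split and K-fold index generation in pure Python.
# Helper names are illustrative, not from any library.
import random

def split_80_20(data, seed=0):
    """Shuffle and split a dataset into 80% train / 20% test."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    cut = int(0.8 * len(data))
    train = [data[i] for i in idx[:cut]]
    test = [data[i] for i in idx[cut:]]
    return train, test

def kfold_indices(n, k=5):
    """Yield (train_idx, test_idx) pairs for K-fold cross validation."""
    fold_size = n // k
    idx = list(range(n))
    for f in range(k):
        test_idx = idx[f * fold_size:(f + 1) * fold_size]
        train_idx = idx[:f * fold_size] + idx[(f + 1) * fold_size:]
        yield train_idx, test_idx

data = list(range(100))
train, test = split_80_20(data)
print(len(train), len(test))  # 80 20

for tr, te in kfold_indices(len(data), k=5):
    assert len(te) == 20 and len(tr) == 80
```

Each K-fold iteration trains on four folds and validates on the held-out fifth, so every sample is used for validation exactly once.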
Wonderful summary!
You should probably create articles coz you are good at summarising concepts!
If you have one please do share!
Great
Very well written 👍🏻
Thanks for sharing
👍🏻 Consider writing blogs
Really very nice and well written. After watching the video, going through your summary stamps it on our brains. Thanks to both of you for your efforts.
This video needs to be watched again and again. Machine learning is nothing but a proper understanding of overfitting and underfitting. Watching it for the second time. Thanks Krish.
Agreed!
This is what they asked me in my OLA interview, and the interviewer went into great depth on this topic alone. It's pretty fundamental to ML. Sad to report they rejected me though.
@@batman9937 Hi man, please share what other questions they asked.
@@ashishbomble8547 Buy the book "Ace the Data Science Interview" by Kevin Huo and Nick Singh.
Hi Krish, thanks for the explanation. At 6:02 it should be high bias and low variance in the case of underfitting.
Yes exactly i was looking for this comment
Amazing video by Krish. Thanks for pointing out this. @Krish Naik please make a note of this
yess!!!
yess
Exactly! I searched for this comment :)
At 06:08 it is said that with underfitted data, the model has high bias and high variance. To my understanding, this information is not correct.
Variance is the complexity of a model that can capture the internal distribution of the data points in the training set. When variance is high, the model will be fitted to most (even all) of the training data points. It will result in high training accuracy and low test accuracy.
So in summary:
When the model is overfitted: low bias and high variance
When the model is underfitted: high bias and low variance
Bias: the INABILITY of the model to fit the training data
Variance: the complexity of the model which helps it fit the training data
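A toy experiment in pure Python matches this summary. All data here is made up for illustration: a model that memorizes the training points gets near-zero training error but much worse test error (low bias, high variance), while a model that ignores the inputs entirely errs similarly, and badly, on both sets (high bias, low variance). This is only a sketch under those assumptions, not a rigorous decomposition:

```python
# Toy comparison of an underfit model (predict the training mean everywhere)
# against an overfit model (memorize the training set, 1-nearest-neighbour).
import random

rng = random.Random(42)

def make_data(n):
    # underlying process: y = 3x + Gaussian noise (made up for illustration)
    xs = [rng.uniform(0, 10) for _ in range(n)]
    return [(x, 3 * x + rng.gauss(0, 1)) for x in xs]

train, test = make_data(50), make_data(50)

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# Underfit: ignore x entirely, always predict the mean of the training ys.
mean_y = sum(y for _, y in train) / len(train)
underfit = lambda x: mean_y

# Overfit: return the y of the nearest training x (pure memorization).
overfit = lambda x: min(train, key=lambda p: abs(p[0] - x))[1]

print(mse(underfit, train), mse(underfit, test))  # both large, and similar
print(mse(overfit, train), mse(overfit, test))    # train is 0, test is larger
```

The memorizer's training error is exactly zero (each training point is its own nearest neighbour), yet its test error is not, which is precisely the train/test gap that signals high variance.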
yes bro, you are correct
I also have the same doubt. @Krish Naik sir, please have a look at it.
But an underfitted model is supposed to have low accuracy on the training data too, no? Confusing!
Have I learned the wrong definition of bias and variance from Krish sir's explanation? Now I am confused 😑
@prachi... Not at all, the concept is the same in the end.
For XGBoost the answer can't be simple, but when dealing with high bias you do better feature engineering and decrease regularization; so in XGBoost we increase the depth of each tree and use other techniques to minimize the loss. So you can conclude that if proper parameters are defined (including regularization etc.), it will yield low bias and low variance.
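As a rough sketch of that tuning trade-off: the parameter names below come from XGBoost's public API, but the values are illustrative placeholders only, not tuned recommendations:

```python
# Illustrative only: which XGBoost knobs address high bias vs high variance.
# Parameter names follow XGBoost's API; values are placeholders, not tuned.
reduce_bias = {
    "max_depth": 8,         # deeper trees -> more capacity, lower bias
    "n_estimators": 500,    # more boosting rounds
    "reg_lambda": 0.1,      # weaker L2 regularization
}
reduce_variance = {
    "max_depth": 3,         # shallower trees -> less overfitting
    "learning_rate": 0.05,  # smaller step per boosting round
    "reg_lambda": 10.0,     # stronger L2 regularization
    "subsample": 0.8,       # row subsampling adds randomness
}
```

In practice you would search over these jointly (for example with cross validation) rather than pushing all the knobs in one direction.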
This was my biggest doubt and you clarified it in so easy terms. Thank you so much Krish.
At 6:10 you made it all clear to me in just 2 lines!! Thank you for this video :)
Krish, your videos hit the nail on the head. You explained the meaning of bias and variance. Thanks a lot!
Can't express my gratitude enough ! Thank you for explaining it so well
Underfitting: High bias and low variance
Overfitting: Low bias and high variance
Generalized model: Low bias and low variance
Bias: Error from training data
Variance: Error from testing data
@Krish Please confirm
I am confused ...
Does it mean that an underfitted model has high accuracy on testing data?
Underfitting: High bias and HIGH variance
@@videoinfluencers3415 I mean an underfitted model has low accuracy on both testing and training data, and the difference between the training accuracy and test accuracy is very small; that's why we get low variance and high bias in underfitted models.
You are correct bro, I checked on Wikipedia and some other sources too.
@Krish Please Confirm.
If it makes it any clear for other learners, here's my explanation...
BIAS is the simplifying assumptions made by a model to make the target function (the underlying function that the ML model is trying to learn) easier to learn.
VARIANCE refers to the changes to the estimate of the target function that occur if the dataset is changed when implementing the model.
Considering the linear model in the example, it makes the assumption that the input and output are related linearly, causing the target function to underfit and hence giving a HIGH BIAS ERROR.
But the same model, when used with similar test data, will give quite similar results, hence giving a LOW VARIANCE ERROR.
I hope this clears the doubt.
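This definition of variance (how much the learned function moves when the dataset changes) can be simulated in plain Python. The setup below is entirely made up for illustration: we refit a straight line and a 1-nearest-neighbour memorizer on 200 fresh training sets drawn from the same noisy linear process, then compare the spread of their predictions at a single query point:

```python
# Simulate "variance = change in the estimate when the dataset changes":
# refit two models on many datasets and measure the spread of predictions.
import random
import statistics

rng = random.Random(0)

def draw_dataset(n=20):
    # underlying process: y = 3x + Gaussian noise (illustrative)
    xs = [rng.uniform(0, 10) for _ in range(n)]
    return [(x, 3 * x + rng.gauss(0, 5)) for x in xs]

def fit_line(data):
    # ordinary least squares for y = a + b*x, written out by hand
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    b = sum((x - mx) * (y - my) for x, y in data) / sum((x - mx) ** 2 for x, _ in data)
    a = my - b * mx
    return lambda q: a + b * q

def fit_1nn(data):
    # memorize the training set; predict the y of the nearest training x
    return lambda q: min(data, key=lambda p: abs(p[0] - q))[1]

line_preds, nn_preds = [], []
for _ in range(200):
    d = draw_dataset()
    line_preds.append(fit_line(d)(5.0))
    nn_preds.append(fit_1nn(d)(5.0))

# the line's prediction barely moves between datasets (low variance);
# the memorizer's prediction jumps around (high variance)
print(statistics.stdev(line_preds), statistics.stdev(nn_preds))
```

The line's coefficients barely change between draws because it averages over all 20 points, while the memorizer's prediction inherits the full noise of whichever single point happens to be nearest.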
Beautifully explained.
But in underfitting, the model shows high bias and low variance, not high variance.
Yes, you are right... I made a minor mistake.
@@krishnaik06 But then sir, you said bias is the error, and in underfitting the training data error is low... so should it be low bias?
@@namansinghal3685 When the model has high bias, it misses out on certain observations, so the model will be underfit.
@@namansinghal3685 In the case of underfitting, the training error is high, not low.
@@krishnaik06 You should pin this comment
Krish sir, I wholeheartedly hope God blesses you. You are doing a great job, and thanks for INEURON, it made my life easy.
Thank you very much sir for your clear explanation of bias, variance, underfitting and overfitting across many parameters.
You can't get a clearer explanation than this, hats off mate
I have been trying to understand this concept for a long time... but never knew it was this simple 😀 Thank you Krish for this amazingly simple explanation.
Very thorough and good explanation! Thank you.
Side note: I would like to point out that at 2:12 the degree of the polynomial is still 2 (it's still a quadratic function).
Providing this info makes you a great teacher... the way you explain it, everything goes straight into the brain.
This is an awesome video - was fully confused earlier - this video made it all clear !! Thanks a lot sir !!
Very succinct explanation of the very fundamental ML concept. Thank you for the video!
Brother, you are spot on. What I couldn't understand easily even after paying 2.80 lakhs in fees, you explained in 16 minutes. Kudos.. amazing work dear, all the very best.
Sir, after watching this video my confusion between bias and variance was cleared in one go. Awesome explanation.
The way of explanation is wow.
XGBoost should have low bias & low variance !
Not really, it will depend on how you tune the hyperparameters of the model. For this reason it is important to tune a model in order to find a compromise that ensures low bias (the capacity of the model to fit a theoretical function) and low variance (the capacity of the model to generalize).
One video, all clear content... thanks bro, it was really a nice session. You truly belong to the low bias and low variance humans. Keep posting such clear ML videos.
The most clear and precise information 🎉 thank you sir❤
One of the best explanations of bias and variance w.r.t. overfitting and underfitting...
What an excellent explanation on bias and variance. I finally understood both terms. Thank you so much for the video and keep up the good work!
You are really great sir... your explanation is crystal clear.
Very important discussion on important words in ML. Thanks. Easy explanation on hard words.
After watching this video my doubt is clear, this really helps. Thanks for giving your precious time...
Great, I learnt a lot by watching your entire playlist.
Excellent tutorial. Better than the IIT professors who are teaching machine learning.
Krish, you are a master in statistics and machine learning
Beautifully explained. My concepts on overfitting and underfitting models are now clear. 👍 Thanks 🍻
You have God-gifted talent to teach. You are a gem!!!!
I agree with your sentiment. He has such understanding, breaking concepts down in a comprehensible manner.
Excellent teaching
Thank you very much for the simple and proper explanation...
Bias is the error on training data; variance is the error on test data. Thanks for simplifying.
Krish, thank you so much. This is the best data science channel I have ever seen. Great efforts, Krish. Thanks again.
6:00 Small correction in your video.
Underfitting - High Bias & Low Variance
Overfitting - Low Bias & High Variance
XGBoost has the property of low bias and high variance, however it can be regularised and turned into low bias and low variance. Useful video indeed.
"Bias is in the training dataset and variance is in the testing dataset" - this line cost me a LinkedIn machine learning job.
This video is great, but one thing I want to correct: bias and variance work in an inversely proportional manner. If we get high variance, bias will be low; if bias is high, variance will be low. So in overfitting it's high variance/low bias, and in underfitting it's high bias/low variance.
To be the best, a model should have low bias/low variance.
You explained it so well sir. I was struggling with these terms, but after watching your video my concept of bias, variance, underfitting and overfitting is crystal clear. Thank you!
You made my work easy with this explanation. Thanks.
Thanks Krish, I had scoured the net, but this explanation was great. Good memory hook! Thanks for this.
Very useful lecture, it helped me a lot to understand this topic in a simple and easy way. Please keep going.
Great sir, I got it. Thanks for your effort.
Please make a video on some mathematical terminology like gradient descent etc. You are really doing a great job.
It was a really good video and it cleared all the doubts I had.
TBH, the best video on YouTube about bias and variance.
Today I got clarity on this topic. Thank you sir.
Please note that underfitting occurs when we have HIGH BIAS and LOW VARIANCE... apart from that error, this video is an excellent one. Thanks.
In underfitting, the model performs poorly on test data as well, so why does it have low variance, if variance = test error?
As per my understanding, variance does not actually mean the test error, but the change in test error when the test data is modified. Because in underfitting the model is so generalized, even if we change the test data greatly, we still get more or less the same test error. Somebody correct me if I'm wrong.
Brilliantly explained !! Thank you !!
On the last graph you show, Error vs Degree Of Polynomials, you mixed the curves. The red one is for the training dataset whereas the blue is for the test dataset.
Thank you so much for clearly explaining this. I have tried so hard to get PhDs to explain this to me... and never got a clear answer.
Ultimate discussion, and an ultimate person discussing it.
Wow, awesome, great work done in one single video. Insightful.
Very good video, the easiest video for understanding the logic of bias & variance.
This guy is really great... Thank you so much for the effort you put in for us.
Hi... your explanation of the topic is awesome. Just curious: why do you say bias means training error and variance means test error? Is there any intuitive explanation or mathematical derivation for that?
Excellent explanation. Krish, in the same video you gave the example of XGBoost, i.e. each model learns from the previous decision tree and builds on it subsequently.
You are so awesome, I love your teaching.
Well articulated, thank you Krish
Superbly explained.. it connected the dots for me. Thank you.
XGBoost will reduce the bias as well as the variance by training subsequent models and by splitting the data. It will help us reduce underfitting.
Best Explanation on Bias and Variance!
I really was in great need of such an excellent explanation of Bias and variance. great help!
Perfectly explained sir.
Thank You so much Krish Sir..!!
Thanks Mr. Krish for your excellent explanation, now I can clearly understand bias and variance :D
For XGBoost: low bias and high variance at the start; by the end, low variance and low bias. (Extreme Gradient Boosting.)
Then what is the difference between random forest and XGBoost? What is the need for XGBoost when we can solve the problem using random forest?
@@Prajwal_KV Regularization is there in XGBoost.
I really love his in-depth intuition videos... compared to his plethora of other videos!
Hi @Krish,
I read the following in a resource:
"Bias refers to the gap between the value predicted by your model and the actual value of the data. In the case of high bias, your predictions are likely to be skewed in a particular direction away from the actual values. Variance describes how scattered your predicted values are in relation to each other."
This doesn't imply that bias is the training data error and variance is the test data error. Am I missing any point here? Please elaborate.
Hi Devasheeesh,
Variance occurs when the model performs well on the dataset it was trained on, but does not do well on a dataset it was not trained on, like a test or validation dataset. Variance tells us how scattered the predicted values are from the actual values. For easier understanding of the concept, we can take it as the test or validation data error.
Bias is how far the predicted values are from the actual values. If the average predicted values are far off from the actual values, then the bias is high.
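A tiny numeric illustration of that reply, with made-up numbers: one set of predictions is consistently offset from the actual value (high bias, low scatter), the other is centred on it but widely scattered (low bias, high variance):

```python
# Bias as the gap between the average prediction and the actual value;
# "variance" here shown as the scatter of the predictions themselves.
# All numbers are made up for illustration.
import statistics

actual = 10.0
preds_biased = [7.1, 6.9, 7.0, 7.2, 6.8]       # consistently low: high bias, low scatter
preds_scattered = [4.0, 16.0, 9.0, 13.0, 8.0]  # centred near 10 but spread: low bias, high scatter

bias1 = abs(statistics.mean(preds_biased) - actual)
bias2 = abs(statistics.mean(preds_scattered) - actual)
spread1 = statistics.stdev(preds_biased)
spread2 = statistics.stdev(preds_scattered)

print(bias1, spread1)  # large bias, small spread
print(bias2, spread2)  # near-zero bias, large spread
```

Both failure modes hurt accuracy, which is why the goal stated throughout this thread is low bias and low variance together.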
XGBoost uses Lasso and Ridge regularization to prevent overfitting (low bias and high variance).
Best... sir, please make more videos like this, I mean on the board... it's easier to understand this way.
Firstly, I thank Krish for making really informative videos. For model 2 in the classification problem: if the error between the training and the test set is not drastically different, it may not be a result of high bias and high variance. In a real-world scenario, we are unlikely to get very high accuracy, i.e. >90%. I would consider a high bias and high variance problem to be, for example: training accuracy 75% and test accuracy 65%. What do you think?
It depends on the problem statement and domain of your data. If the data is clean and abundant, you can get a high accuracy above 90% too, for both test and train. It all depends on domain understanding.
Thanks a lot for the wonderful explanation
Insanely good video. Also this has amazing energy!
Thanks for this. Amazing explanation.
Love watching your videos.. You explain very well.
How do you know so much??? You talk about machine learning as if you were born embedded with all that knowledge! God bless you with more knowledge and intelligence so that you will share it with more people!
Man, even though I am studying AI in my college, this is easier to understand. Thanks man..
Very good. Revised my concepts perfectly 🔥🔥
You nailed it man ! Great work ! Respect your time and effort!
You make one of the best tech videos on youtube !!!!
brilliant video!!!!! explained everything to the point.
XGBoost - low bias and low variance
My understanding is that if the model has high bias then it is underfit, irrespective of variance (high or low).
For an underfit model (having high bias, i.e. high train error): if the test error is close to the train error then it is low variance; if the test error is larger than the train error then it is high variance.
Variance doesn't play a role in saying whether a model is underfit or not...
Thanks for revising these important concepts
Watched it once again for better clarity. Thanks.
Thank you so much bro ! So clear !!!
Thanks so much Sir. Very valuable information.
Awesome video, thank you so much for these wonderful explanations, they are much needed!
Great explanation. Thank you so much!
Good pedagogy and easy explanation. Thanks a lot
The best explanation among all YouTube channels 👏. I love the way you always keep things simple. Glad to have found your channel, sir.
Very well explained. Thanks
Clear explanation. @krish sir thanks for making this video
Superb job sir... it's the easiest explanation I've seen on this topic... hope you'll upload a video on gradient descent too :)
I agree with you.