Correction:
13:39 I meant to put "Negative Log-Likelihood" instead of "Likelihood".
A lot of people ask about 15:34 and how we are supposed to do Cross Validation with only one data point. At this point I was just trying to keep the example simple; if, in practice, you don't have enough data for cross validation, then you can't fit a line with ridge regression. However, it is much more common that you might have 500 variables but only 400 observations - in this case you have enough data for cross validation and can fit a line with Ridge Regression, but since there are more variables than observations, you can't do ordinary least squares.
ALSO, a lot of people ask why lambda can't be negative. Remember, the goal of lambda is not to give us the optimal fit, but to prevent overfitting. If a positive value for lambda does not improve the situation, then the optimal value for lambda (discovered via cross validation) will be 0, and the line will fit no worse than the Ordinary Least Squares Line.
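(If you want to see what that cross validation might look like concretely, here is a minimal sketch of the idea - the data, the candidate lambda values, and the helper functions are all made up for illustration, not taken from the video. The key point is that lambda = 0 is always one of the candidates, so the cross-validated choice can never do worse than Ordinary Least Squares on the held-out points.)

```python
import numpy as np

def fit_ridge_1d(x, y, lam):
    """Fit y = intercept + slope * x with an L2 penalty on the slope only.

    With the data centered, SSR + lam * slope**2 is minimized by
    slope = sum(xc * yc) / (sum(xc**2) + lam); the intercept is not penalized.
    """
    xbar, ybar = x.mean(), y.mean()
    xc, yc = x - xbar, y - ybar
    slope = (xc * yc).sum() / ((xc ** 2).sum() + lam)
    intercept = ybar - slope * xbar
    return intercept, slope

def loo_cv_error(x, y, lam):
    """Leave-one-out cross validation: mean squared error on each held-out point."""
    errors = []
    for i in range(len(x)):
        keep = np.arange(len(x)) != i
        b0, b1 = fit_ridge_1d(x[keep], y[keep], lam)
        errors.append((y[i] - (b0 + b1 * x[i])) ** 2)
    return np.mean(errors)

# made-up training data (think weight vs. size)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=8)
y = 1.0 + 0.8 * x + rng.normal(0, 2.0, size=8)

# lambda = 0 is Ordinary Least Squares, so the best cross-validated lambda
# can never make things worse than the OLS fit
candidates = [0, 0.1, 1, 10, 100]
best_lam = min(candidates, key=lambda lam: loo_cv_error(x, y, lam))
print("cross-validated lambda:", best_lam)
```

In practice a library routine like scikit-learn's RidgeCV automates this kind of search over candidate penalties.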
Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/
@VINAY MALLU To repeat what I wrote in the comment you replied to: A lot of people ask why lambda can't be negative. Remember, the goal of lambda is not to give us the optimal fit, but to prevent overfitting. If a positive value for lambda does not improve the situation, then the optimal value for lambda (discovered via cross validation) will be 0, and the line will fit no worse than the Ordinary Least Squares Line.
@VINAY MALLU The larger the dataset, the less likely you are to overfit the data. So in some sense, regularization becomes less important. However, Lasso (L1) regularization is still helpful for removing extra variables regardless of the size of the dataset. And even with very large datasets, ML algorithms that depend on weak learners benefit from regularization.
@@statquest Coming back to Vinay's question: In the counterexample he gives, a negative lambda would not achieve a better fit to the training data, but would prevent overfitting (in that case, overfitting to a more shallow slope). I really liked the video and found most of it very intuitive, but the fact that ridge regression favours a more shallow slope is not. With a large set of predictors, it's easy to see that enforcing sparsity may provide better out-of-sample predictions in practice. But with a single predictor, the prior assumption that 'the observed data tend to overestimate the influence of the predictor' seems no more justified than its opposite would be. In other words: under OLS assumptions, the distribution of OLS fitted slopes will be symmetrically centered on the 'true' slope. But the example was really helpful to understand that ridge regression doesn't work that way and instead biases the fit towards the intercept.
Is there an intuitive explanation, why the intercept beta 0 is not included in the regularization process?
@@cosworthpower5147 The goal is to reduce sensitivity to the parameters. The y-axis intercept does not depend on any of the parameters, so there's no reason to shrink it. Instead, as the other parameters go to 0, the intercept goes to the mean y-axis value.
I am a machine learning engineer at a large, global tech company with a decade of experience in industry and a computer science graduate student. Your channel has helped me immensely in learning new concepts for work and job interviews, and your videos are so enjoyable to watch. They make learning feel effortless! Thank you so much!!
Wow! Thank you very much! :)
can you give me a job plz?
@Son Of Rabat , some people (like me) might have skipped the "simple stuff" to jump right into the complex stuff because it gives better results. For example, I was introduced to ML by working with image classification and object detection right away, where deep learning is king. I studied backpropagation, gradient descent, etc, but never heard of Ridge Regression, for example, until recently. Now I'm trying to collect the pieces I left behind.
(I also always sucked with the theoretical parts. As long as the evaluation metrics were good, it was fine... And it kind of worked for me, for some time. I'm now trying to change that, and deepen my theoretical knowledge.)
Today, I also work for a global tech company (as a Data Scientist). Not for a decade though. 😅
@@LucasPossatti same for me. I work as DS at large tech company, but still learn a lot from SQ
This channel is by far the best at explaining mathematical concepts related to machine learning. I'm in a machine learning class at my university and go to every class lecture. I leave not having understood an hour and fifteen minutes of lecture. I immediately pull up this channel and watch a video on the same concept and "BAM". It makes sense.
BAM! :)
After watching dozens of StatQuest videos, I finally know when to say 'BAM!'
Bam! :)
🤣🤣🤣🤣🤣🤣🤣🤣
plz tell me when to say BAM
I'm still unable to understand 😭
I had to build a ML model to help me predict the proper times to say ‘BAM!’
BAM!
Explaining things at this complexity at this level of simplicity is a real skill! Awesome channel!
Thank you! :)
I have no words to express how good this lecture is.
Thanks!
Professors in general teach Ridge Regression with many complicated equations and notations. You made this topic very clear and easy to understand. Thank u very much again.
Thanks! :)
Only Statquest can make someone emotional while learning statistics. The ease with which the concepts are flowing flawlessly into my brain makes me teary. Thank you so much 🥺❣
Thanks!
wow!!
The way you go through the logic step by step makes you a good teacher. In many of my research occasions they just say "adjust your alpha higher or lower until you don't overfit / underfit" but I don't even know what am I looking at. Bless you.
Happy to help! :)
I've spent so much time trying to read and understand what EXACTLY is ridge regression. This video made it much easier to understand. Thank you so much for simplifying this complex concept!
Bam! :)
I don't know how my stat teacher can make something this easy to understand that complicated. Everytime I can't understand what he's talking about in the class I know that I have to turn to StatQuest. Thank you for what you're doing.
BAM! :)
YOU ARE THOUSANDS OF TIMES BETTER THAN MY PROF...CLEAR & SIMPLE. THANKSSSSS
Thanks! :)
I am brand-new to statistics, and I'm in school to be a data scientist. so many times, I lose the plot watching lectures from my professors who have the Curse of Knowledge. I end up spending hours watching your videos and they help so much, I just don't even have words! I've recommended your channel to all my classmates--and I mentioned it so much, my professor is considering adding your channel to recommended materials for next semester! you are a shining light of joy in a jargon-filled sea of confusion.
Thank you so much and good luck with your coursework! :)
I study data science too at a uni and his videos are helping me stay afloat in my statistical learning course. Not all heroes were capes and he's truly one of them!
@Linda Wallberg @Josh Sherfey @Lucas Possatti I don't see why we even use lambda, it doesn't seem to change anything 🤔, i'd understand if it were a value between 0-1 but not any value >= 0. Can someone please explain? Multiplying lambda (a scalar) by slope² should only scale it in a parallel direction, right? We basically just take any smaller arbitrary slope (introduce bias) and that's all.
@@shivanit148 No, we don't take an arbitrary smaller slope. We find the one slope that minimizes the SSR + penalty.
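(To make the reply above concrete, here's a tiny illustration of my own, with made-up numbers: for a fixed dataset, try a grid of candidate slopes and, for each lambda, keep the slope that minimizes SSR + lambda * slope². Lambda doesn't just rescale the penalty; a bigger lambda moves the winning slope closer to 0.)

```python
import numpy as np

# made-up data; the ordinary least squares slope is about 1.26
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.5, 2.4, 4.2, 5.1])

slopes = np.linspace(0, 2, 2001)              # candidate slopes
intercepts = y.mean() - slopes * x.mean()     # best intercept for each slope (not penalized)

for lam in [0, 1, 5, 20]:
    fits = intercepts[:, None] + slopes[:, None] * x[None, :]
    ssr = ((y[None, :] - fits) ** 2).sum(axis=1)   # sum of squared residuals per candidate slope
    best_slope = slopes[np.argmin(ssr + lam * slopes ** 2)]
    print(f"lambda = {lam:>2}: best slope = {best_slope:.3f}")
```

With lambda = 0 you recover the ordinary least squares slope; as lambda grows, the optimal slope shrinks toward 0, which is the bias that buys the lower variance.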
I have a big data economics exam tomorrow and you literally just saved my life. I don't always understand what my professor is trying to explain, but you did it super clearly. Actual life saver
How was the exam?
I’m more concerned that this “literally saved their life”.
Whenever I feel some concept in ML, DS is not easily understood, I come to this channel because you explain it in a simple way with good examples.
Thank you!
I've taken 4 machine learning courses and always wondered what ridge regression was, because I've heard it several times, but I was never taught it. I never realized it was just adding the regularization parameter! Awesome! Thank you so much.
Hooray! I'm glad the video helped clear up a long standing mystery. As you've noticed, a lot of machine learning is about giving old things new names - which makes it a lot easier to understand than we might think at first.
I came to know about this channel 2 hours ago. Simple and Outstanding explanation. My aim is to watch each and every video.
Loving your style of teaching.
From India.
Thank you very much! :)
I came here to learn about ridge regression only to realize it's L2 regularization. Aside from this, StatQuest is simply amazing. I use it to brush up on theory before interviews.
It's true - I'm not sure why we call it Ridge Regression and not L2. Or the other way around. And, on top of that, why not pick a name that is easy to remember, like "Squared Regularization".
Incredibly clear explanation. I'm using your Machine Learning videos to study for my midterm for sure. It's so nice to know that these concepts aren't above my head after all.
Nice!! Good luck on your mid terms!
Level of simplicity on this channel is just BAM!!!
Thank you! :)
You made learning this complicated topic (for me) a lot more fun than from reading from a textbook or from my own lecturer. Very entertaining too... Well done!
Glad it was helpful!
Amazing video, I have read many articles and watched many videos to understand the idea behind Ridge & Lasso Regression, and you finally explained it in the simplest way. Many thanks for your effort.
Glad it was helpful!
You have that ability to explain difficult topics in a very simple way, this is amazing! Thank you so much
Thank you! :)
People like you, makes world a better place … thanks for being you ...
Thank you!!! :)
your explanations are insane... they're so easy to understand and literally capture the essence of the topic without being overly complicated! i've bingewatched so many of your videos ever since chancing upon your channel last night - i specially love the little jingles you add in at the start of your videos, they really add such a fun and personal touch~ thank you so so soo much, your channel has really helped me immensely!!!
Wow, thank you!
Yes same for me!
Clearly explained is an understatement, it is the saturated BAM!!!
Thank you very much! :)
This is my first video and I am so impressed by how you explain things!!! It is like my buddy from college will explain it to me in plain words. You rock StatQuest, I am a follower from now on!! Thank you
Awesome! Thank you!
I don't like to comment often, but dude, what a waste of such a gifted talent for statistics. You should be making millions of dollars working at an ibank or whatever. Thank you very much for your video. For a guy like me who just wants to enter the data science field, you help us achieve more than you would expect.
Thank you so much 😀!
You just spoon feed my brain with your clear explanation, thanks man!
BAM!
Greetings from Ukraine, Josh! I'd like to say thanks to you: even though we are in a difficult situation here, your videos on machine learning techniques always help me comprehend topics in this field... I am grateful to you! Thank you so much!!!
Wow! I can't imagine trying to learn ML in your situation, but I'm happy that I can help in some way.
Josh, I have been practicing data science for the last 4 years and have used Ridge regression as well. But now I am feeling embarrassed after watching this explanation, because before the video I only had half-baked knowledge. You deserve a lot of accolades my friend :)
Awesome! I'm glad the videos are helpful. :)
I looked out for 3-4 videos before this. But this one was the best in term of explanation and very easily understood. Thanks!
Glad it was helpful!
I love your videos. They are so easy to follow and understand complicated concepts and procedures! Thanks for sharing all of the brilliant ideas!
Awesome! Thank you! :)
You are a real man, when you said it is clearly explained, it is clearly explained.
Mohamed from Syria
Thank you! :)
BAM! The concepts are presents in the clearest way ever.
Thank you! :)
Didn't even realize this StatQuest video is super long until you mentioned it, truly enjoy the way you explain, thanks))))))))
Hooray! I'm glad you liked it. :)
The lecture was at a whole different level.....thank you for such amazing content dear Josh
Thanks!
Thank you, Josh, you made the ML and stat easy and enjoyable. Hands down better than most stat prof.
Thank you very much! :)
your channel deserves more recognition, Keep up the good work
Thank you! :)
If there's a Nobel prize for good stats teachers on yt... give this guy one...
Thanks!
My lecturer explained this by just putting the equation in front of us on the slides. The maths is easy but I didn't understand the point or intuition behind adding a penalty. Now I do. Thank you.
I'm glad the video was helpful. :)
Man, love the sarcasm in your voice and the concise / crisp explanation of your concepts! DOUBLE BAMMMM!
Glad you liked it!
Thank you, Josh, for another fun StatQuest! I really enjoyed learning the use and benefits of Ridge Regression!
BAM!
Can't say how much I love you!! God please make sure this channel is always here❤
Thank you! :)
Probably the most sensible explanation available on youtube..and yes...BAM!! ;)
Wow, thanks!
I love you Stat quest. Your videos are better than any other stats resource I have come across, and I am actually understanding things now, which will help me do my job better. Please never stop making these excellent videos...
Thank you so much! And thank you for your support! I hope to make videos for the rest of my days (which I hope are many!). :)
Your channel is a god send!
Thank you! :)
That's so true!
then type quadruple bam
I study financial Technology at Imperial College Business School; I must say your content made the "Big Data in Finance" module damn easier to understand
Hooray! I'm glad my videos are helpful! :)
wow. seriously better explained than lectures from my professor in the data science department
Thanks! :)
i know, right?
Wow, you are my personal Lifesaver. Didnt understand the concepts of Ridge Regression in any other source
Glad I could help! :)
Josh, even though I have just started Machine Learning and Data Science at my French Engineering "Grande Ecole", watching your videos has replaced most of the teachers I have met in my life. Great BAM my friend and thank you, just keep it up! You got a rare gift
Thank you so much! I'm so happy to hear that my videos are helpful! :)
StatQuest with Josh Starmer Even French people rely on you and are looking forward to studying your next videos ;)
Hooray!
lol you must have really bad profs then, which school is it?
Thanks for this awesome explanation. This is the first time I really understood how ridge regression works.
Hooray!!!! :)
This is incredibly helpful!! I will be watching many of your videos to supplement my stats/data science studies :) Thank you!
Glad it was helpful!
I just keep coming back to you Josh! Thanks for your clear explanation.
Glad to hear it!
Thank you so much. You made this so much easier to understand than my professor. Really appreciate it
You're welcome! I'm glad to hear that the video was helpful. :)
Dear josh, when i get a job, i'll buy an entire album, thanks for all these videos, they are super helpful for me to understand. I was not able to understand the purpose of regularization until i watched this video; i was always confused about why we add a penalty to the error. got a load off my mind, again thanks a lot !
Thank you! :)
Just Brilliant!! Josh Starmer - You are a genius!
Thank you! :)
I am an aspiring data scientist.. Right now I feel lost with all the math, stats, machine learning and programming... I have been watching a lot of YouTube videos and I came across your channel! I simply love it! I plan to watch all the videos. And let me just say I love the jokes and the silly songs
Thank you so much and good luck on your journey to learning Data Science! BAM! :)
@@statquest Thank you! :D
Thanks Josh! You’re absolutely the best 💪🏻
Thank you very much! :)
Really appreciate your videos. They are valuable for beginners. Easy to understand and easy to learn. Thanks for your good work. Greeting from a new PhD student.
Thanks and good luck with your PhD! :)
I really appreciate your videos! Keep up the good work.
your tutorial is worth much more than my university ML course, which is 5000 dollars a semester. must donate, keep going.
Wow, thanks!
This is so cool, it's almost like magic.
YES! :)
So far the best Video i ever saw for regression ... thanks Josh !!
Mega BAM!!!! Thank you
I can't wait to learn the next lesson
Hooray!!!! :) The next one, on Lasso Regression, should come out in the next week or so.
Yeah!, It's great. Thank you
4:58 "I usually try to avoid using Greek characters as much as possible" You are too kind and it is very true, lots of students start shaking once they saw Greek letter in an equation!🥶
:)
Thank you for this video, it's so helpful! I can't believe it's only 500 views. Please consider a patreon account so that people can thank you for your work!
Thank you! I'll look into the patreon account. In the mean time you can support my channel through my bandcamp site - even if you don't like the songs, you can buy an album and that will support me. joshuastarmer.bandcamp.com/
Wow! Such a simple yet detailed exposition!
Thank you!
absolutely amazing, thank you sir!
Glad you liked it!
StatQuest - you are awesome! You’re my go-to source to learn stats when my textbooks fail me.
Hooray!!! :)
Love from India. Wish me good luck interview in less than days.
Thank you and good luck with your interviews. Let me know how they go. :)
@@statquest narrator: they never did let StatQuest know...
@@Whoasked777 Totally! I hope they went well.
You know what? Your video is so.... PERFECT.
Thank you! :)
I was listening with extreme focus and you suddenly threw "Airspeed of Swallow" at me. I died XDDDDDDDDDDDD
Awesome! :)
what do you mean, African or European Swallow
Josh, you are the best, and you know this by now. Please help us with a video on why ridge regression works for datasets with lots of parameters and few data points
I'll keep it in mind.
These videos are awesome!
Somehow, listening to the video, I feel it comes from/for someone with a background in stats, rather than being a typical computer science machine learning video.
Interesting. My background is both computer science and statistics - but I did biostatistics for years before I did machine learning, so that might explain it.
now i'm fluent in using bam, even the triple bam, thank you legend!
bam! :)
Quadruple bam!!!! For your explanation
Hooray! I'm glad you like it! :)
Never stop teaching sir... U r the best
Small question: Does ridge regression only decrease sensitivity? What if, instead of this example, our test set was above the red line? Then normally we'd need to increase sensitivity?
This will be taken care of... if you are taking a random sample ... don't worry
Did you understand why?
@@vishaltyagi2983 can you explain more? i've been trying for an hour to prove it myself and concluded that the random sample has less variance, but that doesn't matter because it doesn't differ. Then i found your reply.
its insane, i keep coming back to this channel to brush up on material. I am finally graduating this summer but i know for sure i will keep coming back here just to hear "small Bam!" and "Bamm" lol
Congratulations! BAM! :)
Great explanation as always. There is something that's not convincing me about this type of regression. The ridge regression assumes that the training data are always overestimating the slope. Isn't it possible that the training data are underestimating the slope instead?
If the training data underestimate the slope, then shrinking it will not improve the fit during cross validation. In this case the best value for lambda will be zero. So ridge regression can’t make things worse. Does this make sense?
@@statquest yes it's clear. Thank you for your explanation.
I also had same question. Thankfully, I found your comment!
this is the best content i have ever seen on machine learning triple baam.
Thank you very much! :)
you are my sunshine, my only sunshine, you make me happy when f**king math puzzles me!
You are absolutely amazing and the videos are so insanely useful! If these videos were available 5 years ago, I would have skipped all my stat classes! : )
Thank you so much! :)
So from what I understand, ridge regression keeps the slope from getting big, right? This adds a little bias but reduces variance a lot, so overall it's better.
But what if my true model has a slope that is actually bigger (steeper) than what I got using my training data? In that case wouldn't you be making the model worse by using regularization?
In other words, why are we "desensitizing" when we don't know what the underlying model is? What if the sensitivity in the actual model is higher?
I have this exact same doubt! I guess we use trial and error and see whether the model improves; if it doesn't, the only way is to either use a more complex function or get more training data.
@@sidsr oh okay.. but still regularization works pretty much everytime right
I think once you test all possible values of lambda, the one that gives you the smallest test error will be the best one. So if the true model is steeper (and assuming the test error gives you an approximation of the true error), the lambda will reduce to zero.
by trial and error, your model will get the best performance when lambda=0, which means "no regularizer used".
You are literally a LIFE SAVER!! Thank you sosososo much
Thanks!
Who's watching this the day before their machine learning finals?
me lol
Meeee
I have my midterm tomorrow
Your videos are so underrated. Please have a patreon account so that community can help you bring these high quality videos.
Thank you! I'll look into the patreon account. In the mean time you can support my channel through my bandcamp site - even if you don't like the songs, you can buy an album and that will support me. joshuastarmer.bandcamp.com/
Josh, you're a true hero with your explanations. Thanks a bunch!
I have one question though. In the video (in the graph at 19:20 for example) you show that a ridge regression would fit real world data better, as it shrinks the beta (the graph shows that in the real world this beta is also smaller, due to most green points (=real world data) being positioned below the red line (=training data)).
However, would ridge regression still be better if, for example, most of the green dots were above the red line? Because with ridge regression we would shrink the beta, while the real world beta actually has an even higher slope than the slope of the red line (thus in this case ridge would lead to an increase in both variance and bias for real world data?)
This is a great question - the key is that when lambda = 0, you get the exact same result as least squares - so Ridge Regression cannot do worse than Least Squares, it can only do better. In the case you mention, sure, if all of the green dots are above the red dots, neither Least Squares nor Ridge Regression will do well - but Ridge Regression will do no worse than Least Squares.
Thank you for posting this question. One thousand comments on this video, all well deserved praise as this video and the whole channel are awesome. Yet only you asked this obvious question. Makes me wonder how many people actually bothered to understand the whole point of Ridge Regression.
@@CyberSinke Exactly what struck me too. I spent an hour trying to understand it by assuming the sample variance underestimates the population's, but that doesn't matter; it is just the sample, which is picked randomly.
@@statquest why will it not do worse? it will make the slope flatter, which is further from the real relation, which is more vertical or steeper.
@@Niglnws It will do no worse because we will compare it to the simple least squares fit. If it performs worse, we won't use it.
best video about Ridge ever !!!!! very clear and precise!
Wow, thanks!
How would you do cross validation for the example @ 10:16 to determine lambda? For example, would you then take 10 random samples of 2 (out of 8) data points and try different lambdas (for example lambda 1-20) for each _individual_ sample? And then determine which value of lambda across all those 10 samples gives the lowest variance?
That's the idea. In practice, there are usually many more samples, so you're not just picking 2 samples at a time, but that's the idea.
@@statquest Thanks!
How to calculate that variance then?
Firstly, i'd like to thank you for explaining these concepts in such a crystal clear manner; this is one of the best videos i have ever watched. Secondly, i request that you please make some videos on backpropagation and other tedious concepts of ML.
once again thank you.
How to prove "the slop close to 0 when lambda increasing in the 9:42"?
I have the same question
when lambda tends to infinity, the SSE will be negligible compared to lambda * slope^2, hence the slope has to go to 0
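(One way to make that argument precise, as a quick sketch using centered data so the intercept drops out: the ridge objective for a single slope β has a closed-form minimizer, and it visibly shrinks to 0 as λ grows.)

```latex
\frac{d}{d\beta}\Big[\sum_i (y_i - \beta x_i)^2 + \lambda \beta^2\Big]
  = -2\sum_i x_i (y_i - \beta x_i) + 2\lambda\beta = 0
\quad\Longrightarrow\quad
\hat{\beta}_{\text{ridge}} = \frac{\sum_i x_i y_i}{\sum_i x_i^2 + \lambda}
\;\xrightarrow{\;\lambda \to \infty\;}\; 0
```

So the slope never has to hit exactly 0 for a finite λ, but it gets arbitrarily close as λ grows, which matches the picture at 9:42.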
hello sir i just wanted to tell you that you are the teacher ! thank you for your diamond cut clarification
Thank you very much! :)
This reminds me of L2 regularization of weights in neural networks.
Yes! This is the exact same thing, only applied to Regression. I think it appeared first in the regression context, but I'm not sure.
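(A minimal sketch of that connection, with made-up toy numbers: in a neural network setting the very same squared penalty on a weight just adds a 2 * lam * w term to its gradient, which is why people also call it "weight decay". The single linear "neuron" below is only for illustration.)

```python
import numpy as np

# toy data for a single linear "neuron": y_hat = w * x + b
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.5, 2.4, 4.2, 5.1])
w, b = 0.0, 0.0
lam, lr = 1.0, 0.01

for _ in range(5000):
    y_hat = w * x + b
    # loss = sum of squared residuals + lam * w**2  (L2 / ridge penalty on w, not on b)
    grad_w = -2 * (x * (y - y_hat)).sum() + 2 * lam * w   # the penalty adds 2*lam*w: "weight decay"
    grad_b = -2 * (y - y_hat).sum()
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)   # w ends up shrunk relative to the plain least squares slope (about 1.26)
```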
this is one super explanation of the Regularization concept of Ridge Regression. Great work.
Thank you! :)
Love from 🇵🇰 Pakistan.
Thank you! :)
you are the GOLD to the DS
Thank you! :)
How do we get the new line at 3:40? We calculated 1.69 and 0.74, what did we do with them to get the new line?
In practice, ridge regression starts with the least squares estimates for the slope and intercept. Then it changes the slope a little bit to see if the sum of the squared residuals plus lambda times the squared slope gets smaller. If so, keep the new value. Then make the slope a little smaller and see if the sum of squared residuals plus lambda times the squared slope gets smaller. If so, keep the new value. Repeat those steps over and over again until the sum of the squared residuals plus lambda times the squared slope no longer gets smaller. Does that make sense?
@@statquest Hi Josh, the slope that you are referring to is just one of our parameters that we want to minimize right? For a higher order fitting, can it be any other parameter apart from slope?
@@utkarshkulshrestha2026 Least Squares will work to minimize the sum of the squared residuals using all of the parameters and the ridge regression will be applied to all parameters except for the intercept. Thus, for all parameters other than the intercept, we try to minimize the sum of the squared residuals plus the ridge regression penalty. Usually reducing the parameter values will increase the sum of the squared residuals a little bit and decrease the ridge regression penalty a lot. Does that make sense?
@@statquest Yes, this was pretty much clear. Thank you..!!
@@statquest I mean the calculation ^^ That is what I am not quite sure about
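(Since a few people in this thread asked about the actual calculation, here is a minimal sketch of the "nudge the slope and keep the change if the penalized sum of squared residuals gets smaller" procedure described above. It's my own illustration with made-up numbers, not what any particular library does - real implementations use the closed-form solution or gradient descent - but the logic is the same.)

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.5, 2.4, 4.2, 5.1])
lam = 10.0        # penalty strength; in practice chosen by cross validation

def penalized_ssr(slope):
    # re-fit the intercept for each candidate slope; the intercept is never penalized
    intercept = y.mean() - slope * x.mean()
    residuals = y - (intercept + slope * x)
    return (residuals ** 2).sum() + lam * slope ** 2

# start from the least squares slope, then nudge it and keep any improvement
ols_slope = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
slope, step = ols_slope, 0.1
while step > 1e-6:
    for trial in (slope - step, slope + step):
        if penalized_ssr(trial) < penalized_ssr(slope):
            slope = trial
            break
    else:
        step /= 2          # no improvement in either direction: try smaller nudges

print("least squares slope:", round(ols_slope, 3), " ridge slope:", round(slope, 3))
```

With these made-up numbers the least squares slope is about 1.26, and the penalty pulls the ridge slope down to roughly 0.42.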
This is not StatQuest.. this is Machine learning slayer! Damn! Another awesome video. Bravo bravo!
Thank you so much! I really appreciate it. :)