5 Key Points - Ridge Regression | Part 4 | Regularized Linear Models
- Published 21 Jul 2024
- In the final part of our Ridge Regression series, we highlight 5 key points to solidify your understanding. Explore the essential takeaways that encapsulate the power and benefits of Ridge Regression, a valuable tool in the realm of regularized linear models.
Code used: github.com/campusx-official/1...
============================
Do you want to learn from me?
Check my affordable mentorship program at : learnwith.campusx.in/s/store
============================
📱 Grow with us:
CampusX on LinkedIn: / campusx-official
CampusX on Instagram for daily tips: / campusx.official
My LinkedIn: / nitish-singh-03412789
Discord: / discord
E-mail us at support@campusx.in
⌚Time Stamps⌚
00:00 - Intro
00:46 - 5 Key Understandings about Ridge Regression
02:11 - How Do the Coefficients Get Affected?
06:20 - Higher Values Are Impacted More
10:26 - Impact on the Bias-Variance Trade-off
18:18 - Effect on the Loss Function
25:05 - Why Is Ridge Regression Called So?
29:23 - A Practical Tip to Apply Ridge Regression
I'm out of words. Thank you very much, sir! I feel awful watching such quality stuff for free. I'm waiting for my debit card renewal; I have benefited from this channel, so I should contribute, and I'm going to in a few months. Then I'll feel good.
This knowledge is worth thousands of dollars. Thank you so much, Nitish sir. I hope I get to repay you sometime.
Buy his DSMP2.0 course and repay him ... simple bro
My understanding keeps improving because of the visualizations... what a wonderful way of teaching, simply amazing, awesome...
One more thing: the reason the coefficient values shrink toward zero is the position of lambda in the closed-form equation for the coefficients. If you look at that expression, the term lambda sits in the denominator, and as we know, a bigger denominator relative to the numerator makes the value smaller. So when lambda is large, it dominates the denominator and pulls the coefficients down (a small numeric sketch follows below).
I agree with your answer also
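To see this shrinkage numerically, here is a minimal sketch in Python (the one-feature dataset values are made up purely for illustration) of the closed-form ridge slope w = sum((x - x_mean)(y - y_mean)) / (sum((x - x_mean)^2) + lambda):

import numpy as np

# Made-up one-feature dataset, purely for illustration
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])

xc = x - x.mean()  # centered feature
yc = y - y.mean()  # centered target

# Closed-form ridge slope: lambda sits in the denominator,
# so a larger lambda shrinks the coefficient toward zero.
for lam in [0.0, 1.0, 10.0, 100.0]:
    w = (xc @ yc) / (xc @ xc + lam)
    print(f"lambda = {lam:6.1f} -> w = {w:.4f}")

Running it shows w falling from about 0.97 at lambda = 0 toward zero as lambda grows, which is exactly the shrinkage described above.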
Best Video ever!! Thank you sir.
Sir! You are a gem. I am loving data science only because of you.
what an explanation wonderful!
Another great video, thx.
Best... content... really thanks a lot
Please upload videos regularly.
In depth learning method... Thanks
Thank You Sir.
Great Teaching Method Sir
never seen this kind of explanationnn
Sir, just a gentle reminder through this comment: the detailed video on hard-constraint and soft-constraint ridge regression that you promised in this video is still pending.
Then go study it yourself..
Seriously, your explanations are just WOWWWWWW.
so beautiful, so elegant, just looking like a wow
Sir, it is good for understanding, but please also write out proper answers; it would help with making notes.
Brother, at least do something yourself.
Greatest video ever
If possible, kindly share the lecture on "hard constraint ridge regression" (as suggested in the lecture).
Sir kindly make a playlist on computer vision
Hey sir, can you suggest the best book for learning the logic behind machine learning algorithms?
Pattern Recognition and Machine Learning by Bishop
Best ever video as a takeaway for L2!
very best video
Doubt: So can I say that the loss function increases as the lambda value increases?
Same doubt
I think as we increase the lambda/alpha value, the loss function converges towards zero. Please check the "Effect of Regularization on Loss Function" section of this video. So with an increasing lambda/alpha value, the loss/cost function decreases.
The U-shaped curve shows that as lambda increases, the loss initially decreases (reducing overfitting) until it reaches a minimum point. After the minimum, increasing lambda further leads to an increase in the loss function (increasing underfitting). A runnable sketch follows the curve below.
     |\                    /
     | \                  /
 L   |  \                /
 o   |   \              /
 s   |    \            /
 s   |     \          /
     |      \        /
     |       \      /
     |        \    /
     |         \  /
     |          \/
     +---------------------> Lambda
                ^
           minimum loss
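For anyone who wants to reproduce this curve, here is a minimal sketch (it uses a synthetic noisy dataset; the make_regression parameters and alpha grid are made up for illustration, and how pronounced the U-shape is depends on the data):

from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Small, noisy dataset so that regularization actually matters
X, y = make_regression(n_samples=60, n_features=30, noise=25.0, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=42)

# Validation loss typically falls, bottoms out, then rises as alpha grows
for alpha in [0.001, 0.1, 1.0, 10.0, 100.0, 1000.0]:
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    val_mse = mean_squared_error(y_val, model.predict(X_val))
    print(f"alpha = {alpha:8.3f} -> validation MSE = {val_mse:10.2f}")

Note that the U-shape applies to the validation loss; the unpenalized training loss of the fitted coefficients only gets worse as alpha increases.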
After the Day 53 polynomial video, the Day 54 video is missing; or do the Day 55 parts 1-4 include the Day 54 content? Please comment.
thx.
ua-cam.com/video/74DU02Fyrhk/v-deo.html
Why is there no learning-rate hyperparameter in scikit-learn's Ridge/Lasso/ElasticNet? They have a hyperparameter called max_iter, which suggests gradient descent is used, yet no learning rate appears among the hyperparameters. If anyone knows, please help me out with it.
sklearn provides two ways to implement ridge/lasso/elastic net: first, from sklearn.linear_model import Ridge/Lasso/ElasticNet, and second, through SGDRegressor with the "penalty" hyperparameter ("l1" for lasso and "l2" for ridge). The first method uses a closed-form equation, so there is no iteration. The second method uses gradient descent, hence the iteration hyperparameters.
I think you are mixing the two.
@barryallen3051 I know this point, but my question is what the max_iter hyperparameter is doing in plain Ridge if it uses the closed-form solution, since max_iter means the epochs in SGD.
@rohitdahiya6697 Ridge also supports iterative solvers such as 'sag' (stochastic average gradient), and max_iter applies to those.
You need to specify the solver explicitly (for example, solver='cholesky') if you want the closed-form OLS-style solution; a sketch of both routes follows below.
I hope you got the point...
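For anyone landing on this thread, here is a minimal sketch of the two routes discussed above (the dataset and the hyperparameter values are made up for illustration):

from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, SGDRegressor

X, y = make_regression(n_samples=100, n_features=5, noise=10.0, random_state=0)

# Route 1: Ridge with a direct solver; 'cholesky' computes the
# closed-form solution, so no learning rate is involved.
ridge = Ridge(alpha=1.0, solver='cholesky').fit(X, y)

# Route 2: SGDRegressor with an L2 penalty is ridge trained by
# gradient descent, so it exposes learning-rate controls.
sgd = SGDRegressor(penalty='l2', alpha=1.0, learning_rate='invscaling',
                   eta0=0.01, max_iter=1000, random_state=0).fit(X, y)

print("Ridge coefficients:", ridge.coef_.round(2))
print("SGD coefficients  :", sgd.coef_.round(2))

The direct solver has no learning rate because nothing is stepped iteratively; SGDRegressor exposes eta0 and learning_rate precisely because it optimizes by gradient descent.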
Thank You Sir.