5 Key Points - Ridge Regression | Part 4 | Regularized Linear Models

  • Published 21 Jul 2024
  • In the final part of our Ridge Regression series, we highlight 5 key points to solidify your understanding. Explore the essential takeaways that encapsulate the power and benefits of Ridge Regression, a valuable tool in the realm of regularized linear models.
    Code used: github.com/campusx-official/1...
    ============================
    Do you want to learn from me?
    Check my affordable mentorship program at : learnwith.campusx.in/s/store
    ============================
    📱 Grow with us:
    CampusX on LinkedIn: / campusx-official
    CampusX on Instagram for daily tips: / campusx.official
    My LinkedIn: / nitish-singh-03412789
    Discord: / discord
    E-mail us at support@campusx.in
    ⌚Time Stamps⌚
    00:00 - Intro
    00:46 - 5 Key Understandings about Ridge Regression
    02:11 - How are the coefficients affected?
    06:20 - Higher values are impacted more
    10:26 - Impact on the Bias-Variance Trade-off
    18:18 - Effect on the Loss Function
    25:05 - Why is Ridge Regression called so?
    29:23 - A Practical Tip for Applying Ridge Regression

COMMENTS • 40

  • @siyays1868
    @siyays1868 1 year ago +8

    I'm out of words. Thank you very much, sir! I feel bad watching such quality stuff for free. I'm waiting for my debit card renewal; I have benefited from this channel, so I should contribute, and I'm going to in a few months. Then I'll feel good.

  • @virajkaralay8844
    @virajkaralay8844 8 months ago +3

    This knowledge is worth thousands of dollars. Thank you so much, Nitish sir. I hope I get to repay you some time.

    • @rajsharma-bd3sl
      @rajsharma-bd3sl 7 months ago

      Buy his DSMP2.0 course and repay him ... simple bro

  • @abhaykumaramanofficial
    @abhaykumaramanofficial 1 year ago +1

    My understanding keeps improving thanks to the visualizations... what a wonderful way of teaching, simply awesome...

  • @fashionvella730
    @fashionvella730 3 months ago +2

    One more thing: the reason the coefficient values shrink toward zero is the location of lambda in the formula for the coefficients. If you look at the closed-form solution, the lambda term sits in the denominator (the part that gets inverted); as we know, a bigger denominator relative to the numerator makes the value smaller, so when lambda is large it dominates and pushes the coefficients down.
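
    For readers who want the formula behind this intuition, here is the standard ridge objective and its closed-form solution in textbook notation (an editorial sketch, not taken from the video): lambda is added to the term that gets inverted, so a large lambda shrinks every coefficient toward zero.

        % Ridge objective and its closed-form minimizer (standard notation)
        L(w) = \lVert y - Xw \rVert_2^2 + \lambda \lVert w \rVert_2^2
        \hat{w} = (X^\top X + \lambda I)^{-1} X^\top y
        % As \lambda \to \infty, the inverted term grows and \hat{w} \to 0.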

  • @balrajprajesh6473
    @balrajprajesh6473 2 years ago +1

    Best Video ever!! Thank you sir.

  • @tanmaygupta8288
    @tanmaygupta8288 2 months ago

    Sir! You are a gem. I am loving data science only because of you.

  • @shashankbangera7753
    @shashankbangera7753 11 months ago

    What a wonderful explanation!

  • @stevegabrial1106
    @stevegabrial1106 3 years ago +2

    Another great video, thx.

  • @shabak178
    @shabak178 3 years ago +1

    Best... content... really thanks a lot

  • @saurabhbarasiya4721
    @saurabhbarasiya4721 3 years ago +3

    Please upload videos regularly.

  • @kadambalrajkachru8933
    @kadambalrajkachru8933 2 years ago

    In-depth learning method... Thanks

  • @ParthivShah
    @ParthivShah 4 months ago +1

    Thank You Sir.

  • @parthshukla1025
    @parthshukla1025 2 years ago

    Great Teaching Method Sir

  • @bangarrajumuppidu8354
    @bangarrajumuppidu8354 2 years ago +3

    never seen this kind of explanationnn

  • @tanb13
    @tanb13 1 year ago +2

    Sir, just a gentle reminder through this comment: the detailed video on hard-constraint and soft-constraint ridge regression that you promised in this video is still pending.

  • @sober_22
    @sober_22 1 year ago

    Seriously, your explanations are just WOWWWWWW.

    • @rajsharma-bd3sl
      @rajsharma-bd3sl 7 months ago

      so beautiful, so elegant , just looking like a wow

  • @nitinghumare8086
    @nitinghumare8086 1 year ago +6

    Sir, it is good for understanding, but please also write proper answers; that would help with note-making.

    • @Ishant875
      @Ishant875 6 months ago +3

      Do at least some of it yourself, brother.

  • @umendchandra4731
    @umendchandra4731 2 years ago

    Greatest video ever

  • @ali75988
    @ali75988 6 months ago

    If possible, kindly share the lecture on "hard constraint ridge regression" (as mentioned in the lecture).

  • @user-tq1bp1jb8k
    @user-tq1bp1jb8k 1 year ago

    Sir kindly make a playlist on computer vision

  • @travellingtart5845
    @travellingtart5845 1 year ago

    Hey sir, can you suggest the best book for learning the logic behind machine learning algorithms?

    • @rajsharma-bd3sl
      @rajsharma-bd3sl 7 months ago

      Pattern Recognition and Machine Learning by Bishop

  • @ajaykushwaha-je6mw
    @ajaykushwaha-je6mw 2 years ago +1

    Best ever video as a takeaway for L2!

  • @mohitkushwaha8974
    @mohitkushwaha8974 1 year ago +2

    Doubt: so can I say the loss function increases as the lambda value increases?

    • @ronylpatil
      @ronylpatil 1 year ago

      Same doubt

    • @TheAtulsachan1234
      @TheAtulsachan1234 1 year ago

      I think as we increase the lambda/alpha value, the loss function converges towards zero. Please check the "Effect of Regularization on Loss Function" section of this video. So with an increasing lambda/alpha value, the loss/cost function decreases.

    • @casepoint10
      @casepoint10 1 year ago +3

      U-shaped curve shows that as lambda increases, the loss initially decreases (reducing overfitting) until it reaches a minimum point. After the minimum, further increasing lambda leads to an increase in the loss function (increasing underfitting).
      [ASCII sketch: a U-shaped curve of loss versus lambda, with the minimum loss at the bottom of the U; a runnable version of this follows the thread.]
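
      A minimal runnable sketch of that U-shape, assuming a synthetic dataset and an illustrative alpha grid (make_regression, the train/validation split, and the alpha values are assumptions, not from the video): it fits Ridge for several values of alpha (scikit-learn's name for lambda) and prints the validation error, which typically falls to a minimum and then rises again.

          from sklearn.datasets import make_regression
          from sklearn.linear_model import Ridge
          from sklearn.metrics import mean_squared_error
          from sklearn.model_selection import train_test_split

          # Synthetic data, purely illustrative.
          X, y = make_regression(n_samples=200, n_features=30, noise=25.0, random_state=42)
          X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

          # Validation error typically traces the U-shape: it falls as alpha grows,
          # reaches a minimum, then rises again as the model starts to underfit.
          for alpha in [0.001, 0.01, 0.1, 1, 10, 100, 1000]:
              model = Ridge(alpha=alpha).fit(X_train, y_train)
              val_mse = mean_squared_error(y_val, model.predict(X_val))
              print(f"alpha={alpha:>8}: validation MSE = {val_mse:.2f}")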

  • @stevegabrial1106
    @stevegabrial1106 3 years ago +1

    After Day 53 (Polynomial), the Day 54 video is missing, or do the Day 55 parts 1-4 include the Day 54 content? Please comment.
    Thanks.

  • @rohitdahiya6697
    @rohitdahiya6697 1 year ago

    Why is there no learning-rate hyperparameter in scikit-learn's Ridge/Lasso/ElasticNet? They have a hyperparameter called max_iter, which suggests they use gradient descent, yet there is still no learning rate among the hyperparameters. If anyone knows, please help me out with it.

    • @barryallen3051
      @barryallen3051 1 year ago

      sklearn provides two ways to implement ridge/lasso/elastic net. The first is from sklearn.linear_model import Ridge/Lasso/ElasticNet, and the second is through SGDRegressor with the "penalty" hyperparameter ("l1" for lasso and "l2" for ridge). The first method uses a closed-form equation, so there is no iteration. The second method uses gradient descent, hence the iteration hyperparameters.
      I think you are mixing both.

    • @rohitdahiya6697
      @rohitdahiya6697 1 year ago

      @@barryallen3051 I know that point, but my question is: what is the max_iter hyperparameter doing in plain Ridge if it uses a closed-form solution, since max_iter corresponds to the number of epochs in SGD?

    • @YogaNarasimhaEpuri
      @YogaNarasimhaEpuri 1 year ago

      @@rohitdahiya6697 By default Ridge picks the solver automatically; iterative solvers such as 'sag' use gradient descent, which is where max_iter comes in.
      You need to specify the solver explicitly if you want the OLS-style closed-form solution.
      I hope you got the point (a short runnable sketch follows this thread).
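
      A minimal sketch of the two routes described in this thread (the synthetic dataset and parameter values are illustrative assumptions): Ridge with an explicit closed-form solver, and SGDRegressor with an L2 penalty, which is the variant that actually exposes learning-rate controls.

          from sklearn.datasets import make_regression
          from sklearn.linear_model import Ridge, SGDRegressor

          X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)

          # 1) Ridge with solver='cholesky' uses the closed-form solution, so max_iter is
          #    irrelevant; iterative solvers such as 'sag'/'saga' use max_iter but manage
          #    their own step size, which is why Ridge exposes no learning-rate parameter.
          ridge = Ridge(alpha=1.0, solver="cholesky").fit(X, y)

          # 2) SGDRegressor with penalty='l2' runs gradient descent on the ridge objective;
          #    eta0/learning_rate and max_iter behave like the usual SGD hyperparameters.
          sgd = SGDRegressor(penalty="l2", alpha=0.01, learning_rate="invscaling",
                             eta0=0.01, max_iter=1000, random_state=0).fit(X, y)

          print("Ridge coefficients (first 3):", ridge.coef_[:3].round(3))
          print("SGD coefficients (first 3):  ", sgd.coef_[:3].round(3))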
