Ridge Regression Part 3 | Gradient Descent | Regularized Linear Models

  • Published 3 Jun 2021
  • In the third installment of our series, we delve into Ridge Regression with a focus on Gradient Descent. Explore how this optimization technique plays a crucial role in implementing Ridge Regression, a powerful form of regularized linear models.
    Code : github.com/campusx-official/1...
    Matrix Differentiation : www.gatsby.ucl.ac.uk/teaching/...
    Videos to watch:
    • Multiple Linear Regres...
    ============================
    Do you want to learn from me?
    Check my affordable mentorship program at : learnwith.campusx.in/s/store
    ============================
    📱 Grow with us:
    CampusX's LinkedIn: / campusx-official
    CampusX on Instagram for daily tips: / campusx.official
    My LinkedIn: / nitish-singh-03412789
    Discord: / discord
    E-mail us at support@campusx.in
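
    As a companion to the description, here is a minimal NumPy sketch of ridge regression fitted by batch gradient descent (an illustration only, not the code from the linked repo; the function name and hyperparameter defaults are placeholders):

        import numpy as np

        def ridge_gd(X, y, lam=0.1, lr=0.01, epochs=1000):
            # Loss:     L(w)  = (y - Xw)^T (y - Xw) + lam * w^T w
            # Gradient: dL/dw = 2 * (X^T X w - X^T y + lam * w)
            w = np.zeros(X.shape[1])
            for _ in range(epochs):
                grad = 2 * (X.T @ X @ w - X.T @ y + lam * w)
                w -= lr * grad
            return w

    This assumes the intercept is handled separately, e.g. by centering y and the columns of X.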

COMMENTS • 26

  • @krupal_patel
    @krupal_patel 2 months ago +1

    Best teacher with 0 haters. Best channel for ML, DS, DL, AI.

    • @shivankjat5173
      @shivankjat5173 1 month ago

      Bro, do you also find it hard to apply this on your own? I can't manage to apply it by myself.

  • @kamrezgaming6916
    @kamrezgaming6916 3 years ago

    Appreciate your effort, I loved the way you explained everything.

  • @BharatJain-bl5gw
    @BharatJain-bl5gw 3 years ago

    Best channel for Data science aspirants ❤️❤️ GBU👍👍

  • @balrajprajesh6473
    @balrajprajesh6473 2 years ago +1

    Thanks for this, sir! You are great.

  • @ParthivShah
    @ParthivShah 4 months ago +1

    Thank You Sir.

  • @stevegabrial1106
    @stevegabrial1106 3 years ago +1

    Thanks for ridge regression, see you all tomorrow.

  • @devendrasharma5567
    @devendrasharma5567 1 year ago

    Understood it really well. I was struggling with how the minimum over w is derived for loss function + regularization term, and today I finally understood it.

  • @rohitdahiya6697
    @rohitdahiya6697 1 year ago

    Why is there no learning-rate hyperparameter in scikit-learn's Ridge/Lasso/ElasticNet? They do have a hyperparameter called max_iter, which suggests they use gradient descent, yet no learning rate appears among the hyperparameters. If anyone knows, please help me out with it.
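
    A note on this question (based on scikit-learn's documented behaviour): Ridge, Lasso and ElasticNet do not use plain gradient descent, so there is no learning rate to expose. Ridge uses solvers such as the closed-form 'cholesky'/'svd' or the iterative 'sag'/'saga', and Lasso/ElasticNet use coordinate descent; max_iter only caps the solver's iterations, with any step sizes chosen internally. The estimator that does expose a learning rate is SGDRegressor; a minimal sketch (the hyperparameter values are placeholders):

        from sklearn.linear_model import SGDRegressor

        # penalty='l2' gives the ridge penalty; eta0 is the learning rate,
        # held fixed here by learning_rate='constant'.
        model = SGDRegressor(penalty='l2', alpha=0.1,
                             learning_rate='constant', eta0=0.01,
                             max_iter=1000)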

  • @prathameshkhatu3182
    @prathameshkhatu3182 1 month ago +2

    Nitish Sir @CampusX, at 6:42 you multiplied the right side by 1/2 but not the left side. I think that may not be mathematically correct; can you or anyone explain the math, maybe I am missing something? Also, at 12:35, wouldn't the derivative of 2WᵀXᵀY be 4XᵀY, sir?
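
    A note on both points (a sketch in the thread's notation): scaling a loss function by a positive constant such as 1/2 (a common convenience before differentiating) does not change where the minimum occurs, it only rescales the gradient:

        argmin over W of c·L(W) = argmin over W of L(W), for any constant c > 0

    And a term linear in W keeps its coefficient under differentiation, so:

        ∂/∂W ( 2 WᵀXᵀY ) = 2 XᵀY, not 4 XᵀY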

  • @hossain9410
    @hossain9410 12 days ago

    How can I use regularization with supervised machine learning algorithms?
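
    A note on this: in scikit-learn, regularization usually isn't a separate step; it is a hyperparameter of the supervised estimator itself. A minimal sketch (the alpha values are placeholders):

        from sklearn.linear_model import Ridge, Lasso

        ridge = Ridge(alpha=1.0)   # alpha sets the L2 penalty strength
        lasso = Lasso(alpha=0.1)   # alpha sets the L1 penalty strength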

  • @saurabhbarasiya4721
    @saurabhbarasiya4721 3 years ago

    Thanks

  • @katw434
    @katw434 1 year ago

    Thanks Sir

  • @humerashaikh3618
    @humerashaikh3618 3 years ago

    Sir, please explain SVM regression.

  • @saumyashah6622
    @saumyashah6622 3 years ago +2

    Sir, I am very happy that we are learning everything regularly. But sir, I have a doubt: I have identified some niche topics in ML from the sklearn API documentation, and I don't know whether they are important or not. Are we going to cover these:
    1. Unsupervised learning
    2. Manifold Learning
    3. Reinforcement Learning
    4. Discriminant Analysis
    5. Gaussian Process
    6. Multioutput, Multilabel classification
    7. Random projection
    8. Semi-supervised learning

    • @campusx-official
      @campusx-official  3 years ago +4

      Planning to cover 1, 4, 6 and 8. Will create separate playlists for 2 and 3.

    • @saumyashah6622
      @saumyashah6622 3 years ago

      @@campusx-official thanks for your reply.

    • @stevegabrial1106
      @stevegabrial1106 3 years ago

      @@campusx-official Please also create a playlist for GNN, thanks.

    • @ameerazam3269
      @ameerazam3269 2 years ago +1

      @@campusx-official At 8:34 it is (Y.T*X*W) but you wrote it as (Y.T*W*X), please check.

  • @kindaeasy9797
    @kindaeasy9797 2 months ago +1

    It should be YᵀXW.

  • @kindaeasy9797
    @kindaeasy9797 2 months ago

    Sir, you could have balanced the 2 on the left side as well.

  • @aayushbisht9666
    @aayushbisht9666 11 months ago

    Sir, what's the difference between L = (ŷ - y)² and L = (y - ŷ)²?
    In multiple linear regression it was the first one, so we got (yᵀ - (Xβ)ᵀ)(y - Xβ),
    but in ridge regression it's L = (y - ŷ)², which results in (Xβ - y)ᵀ(Xβ - y).
    Does it matter, or does just the sign change?

    • @abhasmalguri2905
      @abhasmalguri2905 8 months ago

      Bro, it is matrix multiplication, so we can't change the order unless we know the matrices commute.

    • @shreyasmhatre9393
      @shreyasmhatre9393 7 months ago +1

      L = ( yᵢ - ŷᵢ )²
      In matrix form:
      L = ( y - Xw )ᵀ ( y - Xw )
      Adding the penalty term:
      L = ( y - Xw )ᵀ ( y - Xw ) + λ || w ||²
      L = ( y - Xw )ᵀ ( y - Xw ) + λ wᵀw
      L = ( yᵀ - wᵀXᵀ )( y - Xw ) + λ wᵀw
      L = yᵀy - wᵀXᵀy - yᵀXw + wᵀXᵀXw + λ wᵀw
      As he said, wᵀXᵀy and yᵀXw are the same (each is a scalar, and one is the transpose of the other), so:
      L = yᵀy - 2 wᵀXᵀy + wᵀXᵀXw + λ wᵀw
      This is the same equation he got.

      E.g.:
      (A-B)(C-D) = AC - AD - BC + BD ----- (1)
      (B-A)(D-C) = BD - BC - AD + AC, which rearranges to AC - AD - BC + BD ----- (2)
      Both equations turn out to be the same.
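
      Completing the derivation above (a sketch using the same notation): differentiate L with respect to w and set the gradient to zero,

      ∇w L = -2 Xᵀy + 2 XᵀXw + 2λw = 0
      ⟹ ( XᵀX + λI ) w = Xᵀy
      ⟹ w = ( XᵀX + λI )⁻¹ Xᵀy

      which is the closed-form ridge solution. Gradient descent instead iterates w ← w - η ∇w L with a learning rate η until convergence.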