Linear Regression

  • Published 17 Dec 2024

COMMENTS • 102

  • @Nikhil-lq1kb
    @Nikhil-lq1kb 3 years ago +31

    Error at 15:32
    Multiple linear regression: Y = B0 + B1*x1 + B2*x2 + ... + BP*xp + E
    Polynomial linear regression: Y = B0 + B1*x^1 + B2*x^2 + ... + BP*x^p + E

    • @kasyapvelampalli2811
      @kasyapvelampalli2811 2 years ago +3

      Right, I was confused here too! Linear regression must always have degree 1, as opposed to what was taught in the lecture; the equation cannot have a polynomial degree of p.

    • @ram-pc4wk
      @ram-pc4wk 1 year ago

      No, linearity refers to the coefficients multiplying the x terms in this case, not the x terms themselves.

    • @narmadaa2106
      @narmadaa2106 11 months ago

      Yes, you are right.
      It's polynomial regression.

    • @narmadaa2106
      @narmadaa2106 11 months ago +1

      If the degree of x is more than 1, it represents non-linearity.
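
  To make the thread's correction concrete, here is a minimal NumPy sketch (illustrative data and names, not from the lecture). Multiple linear regression uses p distinct features x1..xp, while polynomial regression expands powers of a single x; both are linear in the coefficients, so the same least-squares solver fits both:

      import numpy as np

      rng = np.random.default_rng(0)

      # Multiple linear regression: y = b0 + b1*x1 + b2*x2 (two distinct features)
      X_multi = rng.normal(size=(100, 2))                 # columns are x1, x2
      y_multi = 3 + 2*X_multi[:, 0] - X_multi[:, 1] + rng.normal(scale=0.1, size=100)
      A_multi = np.column_stack([np.ones(100), X_multi])  # design matrix [1, x1, x2]
      b_multi, *_ = np.linalg.lstsq(A_multi, y_multi, rcond=None)

      # Polynomial regression: y = b0 + b1*x + b2*x^2 (powers of ONE feature)
      x = rng.normal(size=100)
      y_poly = 1 + 0.5*x - 2*x**2 + rng.normal(scale=0.1, size=100)
      A_poly = np.column_stack([np.ones(100), x, x**2])   # design matrix [1, x, x^2]
      b_poly, *_ = np.linalg.lstsq(A_poly, y_poly, rcond=None)

      print(b_multi)  # approximately [3, 2, -1]
      print(b_poly)   # approximately [1, 0.5, -2]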

  • @Creative_arts_center
    @Creative_arts_center 3 years ago +12

    The best professor in machine learning. I like her teaching. I have followed her since 2010; I collected her lectures on CDs from 2010 onwards. I like her very much.

  • @HA-bj5ck
    @HA-bj5ck 1 year ago +3

    Very well explained. This is GOLD ❤

  • @itsdurgeshray
    @itsdurgeshray 3 months ago

    CORRECTION at 16:24:
    the equation should not have powers as exponents; that form is polynomial regression.

  • @lampfall7915
    @lampfall7915 2 years ago +3

    She is a wonderful teacher; respect to you.

  • @shudeshna66
    @shudeshna66 7 years ago +4

    Having 1/2 as a multiplicative factor does not change the solution as what minimizes z also minimizes 1/2 z. 1/2 is usually added so that the derivative formula has a constant coefficient of 1.
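
  In symbols (a standard derivation, not quoted from the lecture; the J(theta) and h_θ notation that other comments reference is assumed here), the 1/2 exactly cancels the 2 produced by differentiation:

      J(\theta) = \frac{1}{2}\sum_{i=1}^{n}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)^2

      \frac{\partial J}{\partial \theta_j}
        = \frac{1}{2}\sum_{i=1}^{n} 2\,\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)\,x_j^{(i)}
        = \sum_{i=1}^{n}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)\,x_j^{(i)}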

  • @onataghoghoatikpe5989
    @onataghoghoatikpe5989 4 years ago +3

    I am enjoying your courses. Thanks!

  • @mayanksj
    @mayanksj 7 years ago +16

    Machine Learning by Prof. Sudeshna Sarkar
    Basics
    1. Foundations of Machine Learning (ua-cam.com/video/BRMS3T11Cdw/v-deo.html)
    2. Different Types of Learning (ua-cam.com/video/EWmCkVfPnJ8/v-deo.html)
    3. Hypothesis Space and Inductive Bias (ua-cam.com/video/dYMCwxgl3vk/v-deo.html)
    4. Evaluation and Cross-Validation (ua-cam.com/video/nYCAH8b5AQ0/v-deo.html)
    5. Linear Regression (ua-cam.com/video/8PJ24SrQqy8/v-deo.html)
    6. Introduction to Decision Trees (ua-cam.com/video/FuJVLsZYkuE/v-deo.html)
    7. Learning Decision Trees (ua-cam.com/video/7SSAA1CE8Ng/v-deo.html)
    8. Overfitting (ua-cam.com/video/y6SpA2Wuyt8/v-deo.html)
    9. Python Exercise on Decision Tree and Linear Regression (ua-cam.com/video/lIBPIhB02_8/v-deo.html)
    Recommendations and Similarity
    10. k-Nearest Neighbours (ua-cam.com/video/PNglugooJUQ/v-deo.html)
    11. Feature Selection (ua-cam.com/video/KTzXVnRlnw4/v-deo.html)
    12. Feature Extraction (ua-cam.com/video/FwbXHY8KCUw/v-deo.html)
    13. Collaborative Filtering (ua-cam.com/video/RVJV8VGa1ZY/v-deo.html)
    14. Python Exercise on kNN and PCA (ua-cam.com/video/40B8D9OWUf0/v-deo.html)
    Bayes
    16. Bayesian Learning (ua-cam.com/video/E3l26bTdtxI/v-deo.html)
    17. Naive Bayes (ua-cam.com/video/5WCkrDI7VCs/v-deo.html)
    18. Bayesian Network (ua-cam.com/video/480a_2jRdK0/v-deo.html)
    19. Python Exercise on Naive Bayes (ua-cam.com/video/XkU09vE56Sg/v-deo.html)
    Logistic Regression and SVM
    20. Logistic Regression (ua-cam.com/video/CE03E80wbRE/v-deo.html)
    21. Introduction to Support Vector Machine (ua-cam.com/video/gidJbK1gXmA/v-deo.html)
    22. The Dual Formulation (ua-cam.com/video/YOsrYl1JRrc/v-deo.html)
    23. SVM Maximum Margin with Noise (ua-cam.com/video/WLhvjpoCPiY/v-deo.html)
    24. Nonlinear SVM and Kernel Function (ua-cam.com/video/GcCG0PPV6cg/v-deo.html)
    25. SVM Solution to the Dual Problem (ua-cam.com/video/Z0CtYBPR5sA/v-deo.html)
    26. Python Exercise on SVM (ua-cam.com/video/w781X47Esj8/v-deo.html)
    Neural Networks
    27. Introduction to Neural Networks (ua-cam.com/video/zGQjh_JQZ7A/v-deo.html)
    28. Multilayer Neural Network (ua-cam.com/video/hxpGzAb-pyc/v-deo.html)
    29. Neural Network and Backpropagation Algorithm (ua-cam.com/video/T6WLIbOnkvQ/v-deo.html)
    30. Deep Neural Network (ua-cam.com/video/pLPr4nJad4A/v-deo.html)
    31. Python Exercise on Neural Networks (ua-cam.com/video/kTbY20xlrbA/v-deo.html)
    Computational Learning Theory
    32. Introduction to Computational Learning Theory (ua-cam.com/video/8hJ9V9-f2J8/v-deo.html)
    33. Sample Complexity: Finite Hypothesis Space (ua-cam.com/video/nm4dYYP-SJs/v-deo.html)
    34. VC Dimension (ua-cam.com/video/PVhhLKodQ7c/v-deo.html)
    35. Introduction to Ensembles (ua-cam.com/video/nelJ3svz0_o/v-deo.html)
    36. Bagging and Boosting (ua-cam.com/video/MRD67WgWonA/v-deo.html)
    Clustering
    37. Introduction to Clustering (ua-cam.com/video/CwjLMV52tzI/v-deo.html)
    38. Kmeans Clustering (ua-cam.com/video/qg_M37WGKG8/v-deo.html)
    39. Agglomerative Clustering (ua-cam.com/video/NCsHRMkDRE4/v-deo.html)
    40. Python Exercise on K-means Clustering (ua-cam.com/video/qs7vES46Rq8/v-deo.html)
    Tutorial I (ua-cam.com/video/uFydF-g-AJs/v-deo.html)
    Tutorial II (ua-cam.com/video/M6HdKRu6Mrc/v-deo.html)
    Tutorial III (ua-cam.com/video/Ui3h7xoE-AQ/v-deo.html)
    Tutorial IV (ua-cam.com/video/3m7UJKxU-T8/v-deo.html)
    Tutorial VI (ua-cam.com/video/b3Vm4zpGcJ4/v-deo.html)
    Solution to Assignment 1 (ua-cam.com/video/qqlAeim0rKY/v-deo.html)

  • @tolifeandlearning3919
    @tolifeandlearning3919 2 years ago +2

    Great lecture

  • @roseb2105
    @roseb2105 6 years ago +7

    Maybe I am missing something here, but are these lessons meant to be a review or just an overview of what will be taught? Because it's hard to understand this when learning it for the first time without many examples.

  • @RohitKumar-jh1km
    @RohitKumar-jh1km 7 years ago +66

    You people explain in detail the things that don't require explanation. And the things that do require explanation, you skip as if they don't need explanation either.

    • @dipanjanbiswas4924
      @dipanjanbiswas4924 6 years ago +3

      They copied from Andrew Ng's lectures.

    • @akhandbha
      @akhandbha 6 years ago

      How do you know?

    • @shivaniamehta9851
      @shivaniamehta9851 4 years ago

      You can see this for clarification.
      medium.com/@nicolabernini_63880/ml-what-is-the-difference-between-gradient-descent-and-stochastic-gradient-descent-be79ab450ef0

    • @subashchandrapakhrin3537
      @subashchandrapakhrin3537 4 years ago +1

      @@dipanjanbiswas4924 Is Andrew Ng the father of ML, or the people who write the papers?

    • @dipanjanbiswas4924
      @dipanjanbiswas4924 4 years ago

      @@subashchandrapakhrin3537 You can say that.
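
  On the gradient descent vs. stochastic gradient descent distinction raised in the Medium link above, a minimal sketch (illustrative, assuming the squared-error loss of this lecture): batch GD averages the gradient over all n examples per update, while SGD updates on one example at a time.

      import numpy as np

      def batch_gd(X, y, lr=0.01, epochs=100):
          # X: (n, p) design matrix with intercept column; y: (n,)
          theta = np.zeros(X.shape[1])
          for _ in range(epochs):
              grad = X.T @ (X @ theta - y) / len(y)  # gradient over ALL examples
              theta -= lr * grad
          return theta

      def sgd(X, y, lr=0.01, epochs=100, seed=0):
          rng = np.random.default_rng(seed)
          theta = np.zeros(X.shape[1])
          for _ in range(epochs):
              for i in rng.permutation(len(y)):      # one example per update
                  grad_i = (X[i] @ theta - y[i]) * X[i]
                  theta -= lr * grad_i
          return theta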

  • @abhyunnati8589
    @abhyunnati8589 1 year ago

    Superb

  • @theperson66
    @theperson66 10 months ago

    The best professor!! I love your classes, thank you for your hard work.

  • @jamesmathew8291
    @jamesmathew8291 1 year ago

    Excellently covered the topic. Which textbook do you recommend as a reference, ma'am?

  • @chandureddim4327
    @chandureddim4327 4 months ago

    Can anybody help me understand why we need to assume that the errors are independent of each other, have zero mean and some fixed standard deviation, and are normally distributed? Please.
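
  One standard answer, sketched here rather than quoted from the lecture: if the errors are i.i.d. Gaussian with zero mean, then maximizing the likelihood of the data is exactly equivalent to minimizing the sum of squared errors, so those assumptions are what justify least squares as the objective:

      y^{(i)} = \theta^\top x^{(i)} + \epsilon^{(i)}, \qquad \epsilon^{(i)} \sim \mathcal{N}(0, \sigma^2) \ \text{i.i.d.}

      \log L(\theta)
        = \sum_{i=1}^{n} \log \frac{1}{\sqrt{2\pi}\,\sigma}
          \exp\!\Bigl(-\tfrac{(y^{(i)} - \theta^\top x^{(i)})^2}{2\sigma^2}\Bigr)
        = \text{const} - \frac{1}{2\sigma^2} \sum_{i=1}^{n} \bigl(y^{(i)} - \theta^\top x^{(i)}\bigr)^2

  Independence is what lets the likelihood factor into a product, and the zero-mean assumption is what makes the fitted line unbiased.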

  • @viral_baba
    @viral_baba 6 years ago +2

    Hello Prof,
    The equations written on the blackboard are for polynomial regression, but the slides contain equations for multivariate regression. If this is a mistake, please mention it in an annotation. If anyone knows the answer to my query, please respond freely.
    Thanks

  • @santoshkumargoutam4791
    @santoshkumargoutam4791 3 years ago

    Ma'am: excellent concept clarification.

  • @siddharthGupta632
    @siddharthGupta632 6 years ago +13

    Why have you written the polynomial regression equation in place of multiple linear regression? This seems like a bad lecture; not expected from IIT.

    • @sunny10528
      @sunny10528 4 years ago

      Yes, I too got stuck at this point in the lecture and started doubting my own knowledge.

  • @jivanmainali1742
    @jivanmainali1742 4 years ago +2

    Why is the objective function 1/2 of the sum of squared errors? If we have n data points, it should be an average, so I would guess 1/n of the sum of squared errors.

    • @AkashCherukuri
      @AkashCherukuri 3 years ago

      It's to make the mathematics easier, since you differentiate the function later. (The 1/2 cancels the 2 you get from differentiation, making the equations a lot cleaner. Any positive constant factor, 1/2 or 1/n, leaves the minimizer unchanged.)

  • @avinashdwivedi2015
    @avinashdwivedi2015 3 years ago +6

    I was good at linear regression, and after watching this lecture I forgot everything about regression. lol, ironic

  • @ankursaxena4942
    @ankursaxena4942 4 years ago

    Nice video on how to use #Linear_Regression in #Machine_Learning.

    • @harisankar6104
      @harisankar6104 4 years ago

      Bro, please help me: at 16:09 she writes the equation of Y with squared terms for p independent variables, as in polynomial regression, but in the last section she says that for multiple variables the equation is a linear function, as in multivariate regression. Are these p independent variables not multiple variables?

  • @JMD_coding
    @JMD_coding 3 years ago +1

    Sir/ma'am, after completing this course can I get a certificate?
    Please reply.

  • @madsudan9227
    @madsudan9227 6 years ago +1

    Gives a brief overview. Thanks for your efforts!

  • @debarpitosinha1162
    @debarpitosinha1162 1 year ago

    Error in the multiple linear regression formula; it should be y = b1 + b2x2 + b3x3 + ... + bpxp.

  • @Tchknow879
    @Tchknow879 1 year ago

    Ma'am, you teach awesomely, but one thing I suggest: could you improve your blackboard and camera so we can see clearly?

  • @sunderrajan6172
    @sunderrajan6172 7 years ago +8

    Kind of a confusing lecture, switching from the single-variable regression example to multi-variable. All the explanation is rushed; I was hoping the examples would be well explained. As for the 1/2 in the equation, is it half theta? I heard this is not important. Compared to Stanford or MIT online lectures, a lot of improvement is needed.

    • @shirshak6738
      @shirshak6738 5 years ago +1

      For examples, see the tutorials.

  • @anushamathur2019
    @anushamathur2019 3 years ago +5

    Polynomial regression is a form of regression analysis in which the relationship between the independent variable x and the dependent variable y is modelled as an nth-degree polynomial in x:
    y = b0 + b1x + b2x^2 + ...
    And you are calling that multiple independent variables, which corresponds to multiple linear regression, not polynomial.

    • @ashwinprasad5180
      @ashwinprasad5180 3 years ago

      This is indeed a mistake, I presume. It should be y = b0 + b1x1 + b2x2 + ... + bpxp instead of raising to the power.

    • @shreyxsh5054
      @shreyxsh5054 3 years ago +2

      @@ashwinprasad5180 Yes! It took me 1 day; I thought IIT KGP teachers must be right... then I found the Andrew Ng one and now it's all sorted. Thanks!

  • @SHIVAMGUPTA-wb5mw
    @SHIVAMGUPTA-wb5mw 4 years ago

    We started with the question of how to find the parameters but never discussed that...

    • @ashwinprasad5180
      @ashwinprasad5180 3 years ago

      That is what the algorithm called gradient descent does, which she wrote at the end. It finds the parameters that reduce the loss function.
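
  A minimal end-to-end sketch of that idea (illustrative data; the learning rate and iteration count are arbitrary choices, not from the lecture): start from theta = 0 and repeatedly step against the gradient of the squared-error loss until theta approaches the true parameters.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 200
      x = rng.uniform(-1, 1, size=n)
      y = 4.0 + 3.0 * x + rng.normal(scale=0.1, size=n)  # true parameters: (4, 3)

      X = np.column_stack([np.ones(n), x])   # design matrix with intercept column
      theta = np.zeros(2)
      alpha = 0.1                            # learning rate

      for _ in range(500):
          grad = X.T @ (X @ theta - y) / n   # gradient of (1/2n) * sum of squared errors
          theta -= alpha * grad

      print(theta)   # approximately [4.0, 3.0]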

  • @JMD_coding
    @JMD_coding 3 years ago

    Will we get a certificate after completing all the videos?

  • @wreckedinsect5069
    @wreckedinsect5069 4 years ago

    My professor lectured for a full three hours and I understood nothing except that linear means straight. Here, in half an hour, I am genuinely ready for the exam. Thanks!

  • @regretsonly44
    @regretsonly44 1 year ago

    Queen 👑
    Amazing explanation

  • @sujitfulse8846
    @sujitfulse8846 7 years ago

    Please explain the concepts completely; do not leave them unfinished.

  • @rishabhpansari9963
    @rishabhpansari9963 5 years ago +4

    I think LMS is least mean squares.

    • @mrm371
      @mrm371 4 months ago

      Scope
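
  For reference, the standard LMS (Widrow-Hoff) update rule, sketched in its usual per-example form (h_θ notation assumed as in the earlier derivation, not quoted from the video):

      \theta_j \leftarrow \theta_j + \alpha\,\bigl(y^{(i)} - h_\theta(x^{(i)})\bigr)\,x_j^{(i)}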

  • @hiraksenroy691
    @hiraksenroy691 6 years ago

    Easy to interpret for someone with a statistics background.

  • @getfitwithakhil
    @getfitwithakhil 6 years ago +2

    Ma'am, you rushed towards the end of the lecture. The theory is more important, as we have computers to do most of the calculations.

  • @shashu1999
    @shashu1999 6 years ago +1

    Copied the J(theta) formula from Andrew Ng's module and didn't update the variables.

  • @pankajkumarbarman765
    @pankajkumarbarman765 1 year ago

    Great lecture, ma'am. Thank you so much, and happy Teachers' Day. Please accept my respects.

  • @sachinsd4663
    @sachinsd4663 6 years ago +4

    28:12, wtf was that? It sounded alien-like and hilarious 😂

    • @mitrabb4812
      @mitrabb4812 6 years ago

      Bro, I was searching for this comment lmao!!!

    • @sachinsd4663
      @sachinsd4663 6 years ago

      @@mitrabb4812 Dude, I am glad someone noticed that. It is insane.

    • @mitrabb4812
      @mitrabb4812 6 years ago

      Yeah man, big LOL.

  • @rajasekharareddy6246
    @rajasekharareddy6246 6 years ago +5

    To understand this video, I think people must know linear algebra. Only then can they understand this concept.

    • @AkashCherukuri
      @AkashCherukuri 3 years ago

      The name is *Linear* Regression my man

  • @SwaroopSinghDeval
    @SwaroopSinghDeval 7 years ago +7

    The equation for multi-variable linear regression is wrong.

    • @solarstryker
      @solarstryker 7 years ago +1

      Swaroop Singh Deval, yeah, I think she misinterpreted the subscripts as powers.

    • @premshankar5967
      @premshankar5967 6 years ago

      Yes, exactly.

    • @vaibhavagrawal1856
      @vaibhavagrawal1856 5 years ago +2

      You are confusing multivariate linear regression with polynomial linear regression. The equation given here is not wrong; it is just a special case of the multivariate one, where every feature is a function of the first variable. This is called polynomial linear regression.
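
  In symbols (a sketch of the point above, not a quote from the lecture): with the feature map phi(x) = (x, x^2, ..., x^p), the polynomial model is a polynomial in x but linear in the parameters, so ordinary least squares still applies:

      y = \beta_0 + \beta_1 x + \beta_2 x^2 + \dots + \beta_p x^p
        = \beta_0 + \beta^\top \phi(x), \qquad \phi(x) = (x,\ x^2,\ \dots,\ x^p)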

  • @roseb2105
    @roseb2105 6 years ago +3

    I'm very confused and lost with these lectures.

  • @harshitsingh480
    @harshitsingh480 5 years ago +1

    LMS stands for least mean squares, not least minimum slope.

  • @JMD_coding
    @JMD_coding 3 years ago

    Ma'am, can I get a certificate?

  • @rohitranjan5218
    @rohitranjan5218 3 years ago

    How is she explaining a non-linear equation as a linear one? The equation should be linear, but she has ended up with a non-linear one; the subscript numbers have been written as powers. Minutes 16-19 of the video.

  • @sauravprasad1996
    @sauravprasad1996 7 years ago +4

    Directly skipped to the LMS algorithm without clearly explaining how to learn the parameters. Poor explanations!

  • @manyamittal6767
    @manyamittal6767 6 years ago

    Maybe split this lecture into two. It got really rushed at the end.

  • @sandeepkushwaha9790
    @sandeepkushwaha9790 6 years ago

    Now I'm more confused; the explanation is not good. Can anyone share good videos on linear regression with gradient descent?

    • @a.yashwanth
      @a.yashwanth 6 years ago

      Coursera's Machine Learning by Stanford is good.

  • @saurabhchoudhary4572
    @saurabhchoudhary4572 6 years ago +4

    Ma'am, please review your lectures before publishing; poor explanation and an incorrect equation for multiple linear regression.

    • @vaibhavagrawal1856
      @vaibhavagrawal1856 5 years ago

      You are confusing multivariate linear regression with polynomial linear regression. The equation given here is not wrong; it is just a special case of the multivariate one, where every feature is a function of the first variable. This is called polynomial linear regression.

  • @wolfisraging
    @wolfisraging 6 years ago +6

    Worst explanation of gradient descent in the world

  • @ritikraushan7392
    @ritikraushan7392 2 years ago +1

    I didn't understand anything.

  • @mahipalmahato7648
    @mahipalmahato7648 1 year ago

    7:25

  • @tararawat2955
    @tararawat2955 7 years ago +1

    Things are not being clearly explained; it's really unclear and confusing. At least that example should be worked through completely to understand the concept.

  • @Man0fSteell
    @Man0fSteell 7 years ago +11

    There are YouTube channels that provide better lectures and explanations, in simplified form, than these IIT professors.
    Too bad; our Indian quality of education/teaching (or whatever you want to call it) needs to improve a lot!! :(

  • @ishankulkarni3542
    @ishankulkarni3542 5 years ago

    I'm not understanding it... madam is just re-explaining exactly what is already in the PPT.

  • @saptarshisanyal4869
    @saptarshisanyal4869 3 years ago

    Sorry to say this, ma'am, but this is a wrong explanation of the gradient descent algorithm and the cost function. This tutorial was good until the 25-minute mark; after that it was total confusion.

  • @TheUnblameable22
    @TheUnblameable22 4 years ago

    Surprised to see that, even writing from a crib sheet, the basic equation itself is wrong. The multiple linear regression equation is wrongly written. The assumptions are just copied and not explained.

  • @SubhamCreative.613kviews
    @SubhamCreative.613kviews 6 years ago

    NPTEL teaches us very badly...

  • @navedahmad5851
    @navedahmad5851 7 years ago

    A proper explanation should be provided; the teacher is just rushing without explaining the concepts. This is not good.

  • @koppuprasanthkumar9211
    @koppuprasanthkumar9211 4 years ago

    Is this what's called NPTEL? A waste of time: whatever concepts need extra time, you just skip them like anything. I won't watch NPTEL from now on. Why you are doing these NPTEL certifications, I don't know. Overall, the title is not at all justified; we don't learn how to fit the straight line using linear regression. Don't watch and waste your time; find other resources.

  • @s_sasmal
    @s_sasmal 6 years ago +4

    Can't imagine that kids prepare from their 8th standard to get into IIT,
    and after getting into IIT they get this kind of lecture.

  • @suddhasheel
    @suddhasheel 7 years ago

    Sorry to say this, but these are poor explanations by IIT standards. LMS, batch gradient descent, and stochastic gradient descent would require more explanation.

  • @SandeepSharmaRhythmNGroove
    @SandeepSharmaRhythmNGroove 7 years ago

    Not a good explanation at all.