13. Regression

  • Published 17 Nov 2024

COMMENTS • 49

  • @rafaellopezmartinez6200 · 4 years ago · +19

    He just gives you much more than he has to. I understand the people who comment that he is sometimes hard to follow, but come on, you can watch the video as many times as you need, and you can also use some books (not to mention that we should use those old-fashioned friends more). For me, he is the best statistician on YouTube giving a high-level lecture that combines a practical yet theoretical approach.

  • @christianjimenez1877 · 1 year ago · +11

    Why on Earth don't they clean the blackboard?

  • @aungkyaw9353 · 4 years ago · +8

    Such a great lecture! The best intermediate-level gold. Thanks.
    Please don't be thrown off by the parts where he forgot something. He always tries to convey things that might not be obvious from passive reading alone.

  • @김경환-p2z3c · 3 years ago · +3

    Wow, how can this lecture be so underrated? Watch his lectures from the beginning, not just this part.

  • @hxxzxtf · 9 months ago · +2

    🎯 Key Takeaways for quick navigation:
    00:29 🎯 *Regression is crucial in statistics for prediction, making it a fundamental topic.*
    01:25 📊 *Regression aims to predict one variable based on another, commonly denoted as y and x, respectively.*
    02:48 📈 *Linear regression assumes a relationship of the form y = a + bx, though points rarely align perfectly on a line due to random noise.*
    07:57 💡 *Linear regression is widely applicable, from predicting economic demand to drug response in medicine, even when the relationship between variables isn't fully understood.*
    09:17 🔄 *Variable transformations, like taking logs or squares, can help model nonlinear relationships in regression analysis.*
    13:50 🔍 *To find the best fit line in regression analysis, a function measuring the aggregate distance of points to the curve is minimized, typically using the least squares method.*
    20:14 📏 *The function measuring fit in regression analysis aims to minimize the vertical distance between points and the regression line, offering a quantitative measure of how well the model fits the data.*
    22:34 📊 *The method discussed doesn't assume Gaussian errors.*
    26:12 📉 *Derivatives are used to find the minimum of the expectation equation.*
    32:50 💡 *At least one variable needs to be centered for the covariance calculation.*
    36:03 🧮 *Covariance calculations yield coefficients for regression equations.*
    39:23 📈 *Only variance of x and covariance with y matter for least squares computations.*
    46:36 🔍 *Conditions for noise in regression: centered and uncorrelated with x.*
    47:30 📉 *Assumptions of zero covariance and mean zero for the error term in linear regression allow for estimation of parameters without stringent distributional assumptions.*
    48:28 🧮 *Weak assumptions like mean zero and covariance zero between error and predictor enable parameter estimation using methods like least squares regression.*
    49:27 📊 *In the absence of prior knowledge, statistical inference about parameters relies on data, estimating coefficients to best fit a model to observed data points.*
    50:45 🔍 *Statistical problem involves finding parameters of a model that best explain observed data, typically assuming a linear relationship.*
    52:35 📝 *Estimation of coefficients in linear regression involves replacing expectations with sample averages, a common statistical technique.*
    54:20 💡 *Replacing expectations with sample averages and then minimizing is equivalent to minimizing with expectations directly, offering flexibility in estimation methods.*
    57:27 🔄 *Different distance metrics in regression, like mean absolute deviation, yield different solutions but are less commonly used than least squares regression.*
    58:25 🛒 *Linear regression can incorporate multiple predictors to better explain the variability in the response variable.*
    01:00:19 📊 *Linear regression with multiple predictors involves estimating coefficients for each predictor to model the relationship with the response variable.*
    01:05:04 🤔 *In multiple linear regression, an intercept is included implicitly by augmenting the predictor matrix with a column of ones.*
    01:07:19 📉 *Least squares estimation in multiple linear regression minimizes the sum of squared errors between observed and predicted values to estimate the coefficients.*
    01:08:38 🔍 *The expectation of the noise terms should ideally be zero in regression analysis.*
    01:10:01 📉 *Solving regression equations can become complex due to multiple equations and unknowns, leading to a system of linear equations.*
    01:11:28 🧮 *Matrix notation simplifies solving linear systems in regression analysis, allowing for the formal representation of solutions.*
    Made with HARPA AI
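
The closed-form solution described in the 32:50-39:23 takeaways above (slope b = cov(X, Y) / var(X), intercept a = E[Y] - b·E[X]) can be written out in a few lines. This is my own illustrative sketch with made-up data, not code from the lecture:

```python
# Minimal sketch of the closed-form least squares fit from the takeaways:
# b = cov(x, y) / var(x), a = mean(y) - b * mean(x).
from statistics import mean

def least_squares_fit(xs, ys):
    """Return (a, b) minimizing sum((y - (a + b*x))**2)."""
    x_bar, y_bar = mean(xs), mean(ys)
    # Only var(x) and cov(x, y) matter for the computation, as noted at 39:23.
    cov_xy = mean((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    var_x = mean((x - x_bar) ** 2 for x in xs)
    b = cov_xy / var_x
    a = y_bar - b * x_bar
    return a, b

# Noise-free check: points on y = 2 + 3x should be recovered exactly.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 + 3.0 * x for x in xs]
a, b = least_squares_fit(xs, ys)
print(a, b)  # → 2.0 3.0
```

With noisy data the recovered (a, b) would only approximate the true coefficients, but the formula is the same.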

  • @AshutoshPandey-mw4de · 5 years ago · +12

    Where is your cameraman... what are you doing?

  • @JP-re3bc · 7 years ago · +28

    What's up with the dirty blackboard?

    • @ElizaberthUndEugen · 5 years ago · +9

      The dirty blackboard is the least of the problems this lecture has :/

  • @HalfBloodOtter · 5 years ago · +11

    what is this surrealist camera work what are those dirty blackboards what world is thiss whattttt

    • @aputrv · 3 years ago

      good synopsis

  • @javiercortes814 · 4 years ago · +2

    Great lecture, but what is stated at 58:15 is quite incorrect. Estimating demand from price alone can be quite tricky due to endogeneity issues. That is, from price alone you cannot know whether you are actually estimating a demand or supply function since the points can go in either direction.

  • @ARIZABEST · 8 months ago

    Why doesn't the derivative with respect to *a* and *b* affect the variable Y? I mean, it seems that X is treated as a constant, but why does Y remain immutable?

    • @xuchuan6401 · 13 days ago

      It's a function of a and b. All other things are constant.

  • @SSNewberry · 2 years ago · +1

    The camera needs work. :(

  • @chuchi3263 · 6 months ago · +1

    cameraman is high

  • @shikhartripathi2917 · 2 years ago · +1

    why such weird camera angle T T

  • @georgeivanchyk9376 · 4 years ago · +11

    Normies in the comments can't understand MIT lectures, nothing incredible about this xD

  • @ElizaberthUndEugen · 5 years ago · +5

    skip from 26:00 to 35:00

    • @truthteller267 · 4 years ago

      9 minutes of life wasted

    • @BorisBrodski · 4 years ago · +1

      Not entirely wasted, though. I learned a lesson too: I didn't spot the problem with differentiating a substituted variable. Now I'm aware of it.

    • @freeeagle6074 · 1 year ago

      @@BorisBrodski Exactly. Humans tend to learn better and memorize knowledge more easily from mistakes.

  • @ngocxuan7422 · 2 years ago

    Why is the expectation [Y - (a + bX)]^2? I think it should be [(a + bX) - Y]^2 (predicted value minus real value). Maybe it's the same, so can I use either formula? Can anyone explain it for me? Thanks.

    • @alrshdn · 1 year ago

      Yes, it is the same.
      (7 - 5)^2 = 4
      (5 - 7)^2 = 4

  • @CaptainCalculus · 3 years ago

    The camera movements are really jarring, and my eyesight isn't great so the white on grey writing is hard to read.

  • @ElizaberthUndEugen · 5 years ago · +8

    This guy makes so many mistakes and is on top of that really hard to follow...

    • @grade12mathematicsmadeeasy32 · 1 year ago · +1

      Advanced statistics is not an easy subject to teach at any level. Let's appreciate his knowledge of the subject first. Mistakes are normal.

  • @rainbow-kj3ks · 6 years ago · +2

    Thanks

  • @eepaul1981 · 4 years ago · +3

    what a mess

  • @janel6803 · 3 years ago

    can anybody explain 26:30?

    • @iirolenkkari9564 · 3 years ago · +1

      He centers X and defines this as X_tilde (= X - E(X)). When he uses X_tilde in the formulation a + bE(X) + bX_tilde, just plug in the definition of X_tilde and the bE(X) terms cancel out. So this is just a reformulation of the same thing (by adding and subtracting bE(X)).
      The motivation for using the centered X is that it simplifies the calculations somewhat. You could just as well not center and end up in the same place. Different routes, same destination.

    • @amritborah2773 · 2 years ago

      @@iirolenkkari9564 thnx
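
The "different routes, same destination" point above can be verified numerically. This is my own sketch with made-up data (not from the lecture): fitting against centered X changes the intercept but not the fitted line.

```python
# With X_tilde = X - mean(X), the fits y = a + b*X and y = a_c + b*X_tilde
# have the same slope, and the intercepts satisfy a = a_c - b * mean(X).
from statistics import mean

def fit(xs, ys):
    """Least squares (intercept, slope) for y = a + b*x."""
    x_bar, y_bar = mean(xs), mean(ys)
    b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
    return y_bar - b * x_bar, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
a, b = fit(xs, ys)

# Refit on centered X: deviations from the mean are unchanged, so the
# slope is identical; the intercept becomes mean(ys).
x_bar = mean(xs)
xs_centered = [x - x_bar for x in xs]
a_c, b_c = fit(xs_centered, ys)

print(abs(b - b_c) < 1e-12, abs(a - (a_c - b * x_bar)) < 1e-9)  # → True True
```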

  • @ElizaberthUndEugen · 5 years ago · +2

    30:56 ua-cam.com/video/RQKJBpaCCeo/v-deo.html
    43:30:
    cov(X, Y + X) = cov(X, Y) + cov(X, X)
    cov(X, Y + X) = E[X*(X+Y)] - E[X] E[X+Y]
    = E[X^2 + XY] - E[X] E[X+Y]
    = E[X^2] + E[XY] - E[X] * (E[X] + E[Y])
    = E[X^2] - E[X]^2 + E[XY] - E[X] E[Y]
    = cov(X, X) + cov(X, Y)

    • @chuzai2156 · 4 years ago

      So at 30:56, E(X tilde*Y) = Cov(X,Y) is wrong, is it? Cov(X,Y) = E[X tilde*(Y-E(Y))]
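
Both identities in this thread can be checked numerically. This is my own sketch with made-up data (divide-by-n covariances): bilinearity of the covariance, and the 30:56 fact that E[X_tilde * Y] = cov(X, Y), which holds because E[X_tilde] = 0.

```python
# Numeric check of cov(X, Y + X) = cov(X, Y) + cov(X, X), and of
# E[X_tilde * Y] = cov(X, Y) where X_tilde = X - E(X).
from statistics import mean

def cov(xs, ys):
    x_bar, y_bar = mean(xs), mean(ys)
    return mean((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))

xs = [1.0, 2.0, 4.0, 7.0]
ys = [3.0, 1.0, 5.0, 2.0]
xy = [x + y for x, y in zip(xs, ys)]

# Bilinearity: cov(X, Y + X) = cov(X, Y) + cov(X, X).
lhs = cov(xs, xy)
rhs = cov(xs, ys) + cov(xs, xs)
print(abs(lhs - rhs) < 1e-9)  # → True

# E[X_tilde * Y] = cov(X, Y): subtracting E(Y) from Y changes nothing,
# because the extra term E(Y) * E[X_tilde] is zero.
x_bar = mean(xs)
e_xtilde_y = mean((x - x_bar) * y for x, y in zip(xs, ys))
print(abs(e_xtilde_y - cov(xs, ys)) < 1e-9)  # → True
```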

  • @jiayanchen7790 · 3 years ago · +2

    Surprised that an MIT prof made so many mistakes in basic algebra.

  • @sitotagetachew4392 · 3 years ago

    poor video capturing

  • @Dave-lc3cd · 4 years ago · +4

    This goes to show that you can be a brilliant person but a crap teacher.
    MIT or not I'm going to stick with Khan Academy.

  • @learn.withaman · 2 years ago

    ❤️

  • @mahzadkhoshlessan9816 · 4 years ago · +5

    Oh my god. These lectures are horrible.

    • @BorisBrodski · 4 years ago · +5

      You may jump to such a (premature?) conclusion if you have only watched this one lecture. But going through the entire course, I'm very pleased. He has humor and explains things very well, emphasizing points that are important later on and for the assignments.
      Cons are:
      - low volume (sound)
      - less problematic: dirty board

    • @CathyZhang · 3 years ago · +2

      @@BorisBrodski I agree with you. He is teaching stats at quite a high level, which I find valuable.

    • @NoRa-ws8fo · 2 years ago · +1

      @@CathyZhang yep can't expect him to spoon feed students

    • @freeeagle6074 · 1 year ago · +1

      If you have learned the prerequisite courses such as 18.06, 18.05, 18.600, and 18.100, you'll find this professor has given the best statistics lectures you can find online.

  • @thomaskim3270 · 3 years ago · +1

    He is one of the worst teachers. He needs to do a better job of prepping for the lecture, or he should just stop teaching.

    • @sebon11 · 3 years ago · +1

      Tell us why you dumb

    • @thomaskim3270 · 1 year ago

      @@sebon11 It is obvious he did not prepare for his lecture. If he had, he would not make so many mistakes.

    • @freeeagle6074 · 1 year ago

      Many professors lecture by copying formulas from their notes. It's hard to make mistakes that way, but those professors focus on their notes and make little eye contact with students, who soon fall asleep. By contrast, this professor keeps up a conversation with the students, relying on his memory, and students like that style much better. Also, even if a professor has prepared for the lecture, it's still possible to make mistakes; to err is human. Across all these lectures he made only a couple of mistakes, explained why they were mistakes, and then quickly corrected them, which actually makes it easier for students to understand the concepts. Finally, the moment one stops complaining about free knowledge, one starts to learn.