Proving consistency of the OLS estimator

  • Published 6 Sep 2024

COMMENTS • 18

  • @YAH00123321
    @YAH00123321 1 year ago +1

    Thank you so, so much Steven. I didn't realise how little I understood about in/consistency until this video - I'm not sure the topic could possibly be explained in a more clean, understandable and engaging way. Thank you.

  • @khayalethumakosi6678
    @khayalethumakosi6678 2 years ago

    Your voice is sooooo smoooooooth! you are sooooooo clear. Thank you

  • @NozyLohan11
    @NozyLohan11 3 years ago +1

    thank you so much! very concise and straight to the point explanation.

  • @yuemingshen2566
    @yuemingshen2566 7 months ago

    One question, for plim(f(A)) = f(plim(A)), do we need f to be a continuous function?

  • @satiSFY41
    @satiSFY41 3 years ago

    thank you so much, Steven, I was so confused about this for the longest time

  • @SrishtiKumariIPM-Batch
    @SrishtiKumariIPM-Batch 1 year ago +1

    Thank you so so so much!

  • @cbrucrew88
    @cbrucrew88 9 months ago

    question - to show that your estimator is not just consistent, but also unbiased, don't you need the stronger zero conditional mean assumption?

    • @stevenproud6167
      @stevenproud6167  9 months ago

      Yes - that's absolutely correct. To show that the OLS estimator is consistent, all that needs to be true is that cov(u, X) = 0. However, for the slope estimator to be unbiased, you need E(u|X) = constant (and for all coefficients to be unbiased, you need this constant to be zero). The zero covariance condition follows automatically from the zero conditional mean, but the reverse is not true.
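
The distinction in the reply above can be illustrated with a small simulation (a sketch, not from the video; the data-generating process and numbers are illustrative assumptions). With X standard normal and error u = X² − 1, the zero covariance condition holds but zero conditional mean fails, yet the slope estimate still converges to the true value:

```python
import numpy as np

rng = np.random.default_rng(0)

def slope_estimate(n, beta=2.0):
    # X is standard normal and the error is u = X^2 - 1, so
    # cov(u, X) = E[X^3] - E[X]E[X^2] = 0 (zero covariance holds),
    # but E[u | X] = X^2 - 1 != 0 (zero conditional mean fails).
    X = rng.standard_normal(n)
    u = X**2 - 1
    y = 1.0 + beta * X + u
    x_dev = X - X.mean()
    # OLS slope: sum (x_i - xbar) y_i / sum (x_i - xbar)^2
    return (x_dev @ y) / (x_dev @ x_dev)

# Consistency still holds: as n grows, the slope estimate
# concentrates around the true beta = 2.
print(slope_estimate(100))
print(slope_estimate(1_000_000))
```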

  • @giulialoura6622
    @giulialoura6622 9 months ago

    why is ^B equal to B1 + sum cov/var? Where does that come from? thanks :)
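
For context, the identity being asked about is the standard decomposition of the OLS slope estimator (a sketch, not taken from the video). Substituting the model y_i = β₀ + β₁x_i + u_i into the OLS slope formula gives:

```latex
\hat{\beta}_1
  = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}
  = \beta_1 + \frac{\sum_i (x_i - \bar{x})\,u_i}{\sum_i (x_i - \bar{x})^2}
  = \beta_1 + \frac{\widehat{\operatorname{cov}}(x, u)}{\widehat{\operatorname{var}}(x)}
```

where the last step divides top and bottom by n, so consistency follows once the sample covariance of x and u converges in probability to cov(u, X) = 0.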

  • @anisahhamidahahmed5766
    @anisahhamidahahmed5766 3 years ago

    thanks so much Steven !! x

  • @JessaEstrada1315
    @JessaEstrada1315 2 years ago

    Thank you sir!

  • @doubleo2720
    @doubleo2720 2 years ago +1

    Why did you put 1/n at 4:57 ?

    • @stevenproud6167
      @stevenproud6167  2 years ago +1

      This is a little trick when using plims. I've divided the top and bottom by n (which is equivalent to multiplying the fraction by 1) in order to convert each sum into a sample mean.
      By the weak law of large numbers, the probability limit (plim) of a sample mean is equal to the expectation.

    • @doubleo2720
      @doubleo2720 2 years ago

      @@stevenproud6167 That makes sense, thanks!
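
The divide-by-n trick in the reply above can be sketched in a few lines (an illustrative simulation, not from the video; the distributions chosen are assumptions). Dividing the sum by n makes it a sample mean, whose plim is the expectation:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_mean_xu(n):
    # (1/n) * sum(X_i * u_i): dividing the sum by n makes it a sample
    # mean, and by the weak law of large numbers its plim is E[X u].
    # Here X and u are independent standard normals, so E[X u] = 0.
    X = rng.standard_normal(n)
    u = rng.standard_normal(n)
    return (X * u).mean()

# The sample mean shrinks towards E[X u] = 0 as n grows.
for n in (100, 10_000, 1_000_000):
    print(n, sample_mean_xu(n))
```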

  • @mylord6960
    @mylord6960 2 years ago

    life saving

  • @Melissa-ue7zc
    @Melissa-ue7zc 3 years ago

    Thank U so much🤗🤗✨✨

  • @user-xm2jk2lo9v
    @user-xm2jk2lo9v 3 months ago +1

    ... Meanwhile me : Wtf is a plim 😂

    • @stevenproud6167
      @stevenproud6167  3 months ago +1

      A simple answer is that the plim (or Probability LIMit) of an estimator is the value the estimator converges to as the sample size gets very large (tends towards infinity).
      A plim is slightly different from an ordinary limit: rather than requiring the estimator to converge deterministically, we look at convergence in probability.
      Intuitively, if the probability limit of an estimator is equal to (say) 2, that means the probability that the estimate you observe lies within any small distance of 2 converges to 1 as the sample size tends towards infinity.
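
The intuition in the reply above can be checked numerically (an illustrative simulation, not from the video; the exponential distribution and tolerance eps are assumptions). The probability of a sample mean landing more than eps away from its plim shrinks towards zero as n grows:

```python
import numpy as np

rng = np.random.default_rng(2)

def prob_far_from_plim(n, eps=0.1, reps=2000):
    # Estimate P(|sample mean - 2| > eps) for samples of size n drawn
    # from an exponential distribution with mean 2, so the plim of the
    # sample mean is 2.
    means = rng.exponential(scale=2.0, size=(reps, n)).mean(axis=1)
    return float(np.mean(np.abs(means - 2.0) > eps))

# Convergence in probability: the chance of being more than eps away
# from the plim falls towards zero as the sample size grows.
for n in (10, 100, 10_000):
    print(n, prob_far_from_plim(n))
```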