Thank you so, so much Steven. I didn't realise how little I understood in/consistence until this video - I'm not sure the topic could possibly be explained in a more clean, understandable and engaging way. Thank you.
Your voice is sooooo smoooooooth! you are sooooooo clear. Thank you
thank you so much! very concise and straight to the point explanation.
One question, for plim(f(A)) = f(plim(A)), do we need f to be a continuous function?
thank you so much, Steven, I was so confused about this for the longest time
Thank you so so so much!
question - to show that your estimator is not just consistent, but also unbiased, don't you need the stronger zero conditional mean assumption?
Yes - that's absolutely correct. To show that the OLS estimator is consistent, all that needs to be true is that cov(u, X) = 0. However, for the slope estimator to be unbiased, you need E(u|X) = constant (for all coefficients to be unbiased, you need this to equal zero). The zero covariance condition follows automatically from the zero conditional mean assumption, but the reverse is not true.
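As a quick numerical sketch of the consistency point (the data-generating process below is made up for illustration, not from the video), the OLS slope estimate homes in on the true slope as n grows whenever cov(u, X) = 0:

```python
import numpy as np

rng = np.random.default_rng(2)
beta0, beta1 = 1.0, 2.0  # hypothetical true coefficients

# When cov(u, X) = 0, the OLS slope estimate converges in probability
# to the true slope as the sample size grows (consistency).
for n in [100, 10_000, 1_000_000]:
    x = rng.normal(size=n)
    u = rng.normal(size=n)  # drawn independently of x, so cov(u, x) = 0
    y = beta0 + beta1 * x + u
    b1_hat = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    print(n, b1_hat)  # estimates get closer to beta1 = 2 as n grows
```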
Why is B̂1 equal to B1 + sum cov/var? Where does that come from? Thanks :)
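For what it's worth, that identity drops out of substituting y_i = B0 + B1*x_i + u_i into the OLS slope formula: the B0 and B1 terms simplify, leaving B1 plus the sample covariance of X and u over the sample variance of X. A quick numerical check (with made-up data) that the two expressions agree:

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta0, beta1 = 500, 1.0, 2.0  # hypothetical true coefficients
x = rng.normal(size=n)
u = rng.normal(size=n)
y = beta0 + beta1 * x + u

# Usual OLS slope formula: sum of cross-deviations over sum of squared deviations.
xd = x - x.mean()
b1_hat = (xd * (y - y.mean())).sum() / (xd ** 2).sum()

# Substituting y_i = beta0 + beta1*x_i + u_i gives
# b1_hat = beta1 + sum((x_i - xbar) * u_i) / sum((x_i - xbar)^2)
identity = beta1 + (xd * u).sum() / (xd ** 2).sum()
assert np.isclose(b1_hat, identity)  # the two expressions match exactly
```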
thanks so much Steven !! x
Thank you sir!
Why did you put 1/n at 4:57?
This is a little trick when using plims. I've divided the top and bottom by n (which is equivalent to multiplying the fraction by 1) in order to convert each into a sample mean.
By the weak law of large numbers, the probability limit (plim) of a sample mean is equal to the expectation.
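Here's a small sketch of the weak law of large numbers at work (simulated data with a hypothetical true mean of 2): the sample mean concentrates around the expectation as n grows, which is why plim(sample mean) = expectation.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 2.0  # hypothetical true expectation

# By the weak law of large numbers, the sample mean converges
# in probability to mu as the sample size grows.
for n in [100, 10_000, 1_000_000]:
    x = rng.normal(loc=mu, scale=3.0, size=n)
    print(n, x.mean())  # sample means drift toward 2.0
```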
@@stevenproud6167 That makes sense, thanks!
life saving
Thank U so much🤗🤗✨✨
... Meanwhile me : Wtf is a plim 😂
A simple answer is that a plim (or Probability LIMit) of an estimator is what the estimator converges to as the sample size gets very large (tends towards infinity).
A plim is slightly different from a normal limit, because rather than the estimator itself converging, we are looking at convergence in probability.
Intuitively, if the probability limit of an estimator is equal to (say) 2, that means the probability that the estimate you observe is arbitrarily close to 2 converges to 1 as the sample size tends towards infinity.
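To make that intuition concrete, here's a small simulation (hypothetical true value 2, tolerance 0.1): the fraction of samples whose estimate lands within the tolerance of 2 approaches 1 as n grows.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, eps, reps = 2.0, 0.1, 1000  # hypothetical true value and tolerance

# For each sample size, estimate P(|sample mean - mu| < eps) across
# many replications; this probability heads to 1, which is what
# plim(sample mean) = mu means.
for n in [10, 100, 10_000]:
    means = rng.normal(loc=mu, scale=1.0, size=(reps, n)).mean(axis=1)
    print(n, np.mean(np.abs(means - mu) < eps))
```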