What is an unbiased estimator? Proof sample mean is unbiased and why we divide by n-1 for sample var

  • Published Jan 15, 2025

COMMENTS • 166

  • @parasraina9470
    @parasraina9470 1 year ago +17

    I spent the last 2 days trying to wrap my head around estimators and what it means to be unbiased. You explained in minutes what I could not understand for days. I don't know how to thank you. You are the best. Thanks for the beautiful video

  • @nicholusmwangangi7960
    @nicholusmwangangi7960 2 years ago +57

    This stuff was giving me nightmares 😫 but you've simplified it in the best way possible. Thank you 🇰🇪

  • @AlirezaSharifian
    @AlirezaSharifian 3 years ago +27

    It is a very good video that simply describes some jargon which usually is ignored in the literature.
    Thank you.

  • @tatertot4810
    @tatertot4810 1 year ago +4

    Wow. Incredible. The best proof of sample variance on UA-cam. Thank you!

  • @faustinoeldelbarrio8967
    @faustinoeldelbarrio8967 2 months ago +3

    My mind is blown. Up until now all I had seen were simulations dividing by n vs n-1, discussions on degrees of freedom, etc. But this mathematically precise derivation is what I was looking for. THANK YOU.

  • @SunilSandal-gi9yn
    @SunilSandal-gi9yn 1 year ago +6

    Amazing! I was struggling to understand the meaning of biased and unbiased, but this video explains it in a very simple way, with a great explanation too. Thank you so much for the details.

    • @Stats4Everyone
      @Stats4Everyone  1 year ago +1

      Yay! Happy to hear you found this video to be helpful :-)

  • @derinncagan
    @derinncagan 1 year ago +3

    All of your videos are amazing!! As an MSc student I am checking out your videos to catch up and brush up my knowledge. I am very happy to watch all of your videos; they are clear and answer my needs. Thanks!!

  • @SSCthanos
    @SSCthanos 1 year ago +5

    How amazingly you have explained this complicated thing is just beyond articulation! Thank you so much

    • @hrob6381
      @hrob6381 11 months ago +1

      Let's not get carried away

    • @SSCthanos
      @SSCthanos 11 months ago

      @@hrob6381 No, this was where I got stuck but this video cleared my doubts. So I am not getting carried away 😊

    • @hrob6381
      @hrob6381 11 months ago

      @@SSCthanos beyond articulation? Really?

    • @SSCthanos
      @SSCthanos 11 months ago

      Yes of course, for days I was searching for an explanation of the concept, but just one day before my exam I encountered this video. Thus beyond articulation.

    • @hrob6381
      @hrob6381 11 months ago +1

      @@SSCthanos fair enough. Although you seem to be articulating it fairly well.

  • @anzirferdous5246
    @anzirferdous5246 1 year ago +2

    You are the Best. You definitely deserve a ton more views and subscribers.

  • @fanfan1184
    @fanfan1184 3 years ago +9

    Your channel is criminally underrated! Most videos on this topic simply "prove" this empirically or talk about degrees of freedom without connecting it to anything. This is the first in dozens of videos I found that actually provides a mathematical proof! Your explanation was excellent! I have to say at this point it's not super intuitive for me why it's -1 (and not any other number to make the variance larger), but I can appreciate how the math supports it.
    I just saw that you have tons of other videos on statistics and, if they are anything like this one, I know I will probably end up watching them and learning so much (=
    Thank you for putting in so much time and energy! And for sharing your amazing knowledge!
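The divide-by-n vs divide-by-n-1 question raised in this comment can be sanity-checked with a short simulation. This is my own sketch, not from the video; the population N(0, 2²) (so σ² = 4) and the sample size n = 5 are arbitrary choices:

```python
import numpy as np

# Hypothetical setup: population N(0, 2^2), so sigma^2 = 4; sample size n = 5.
rng = np.random.default_rng(42)
n = 5
samples = rng.normal(0.0, 2.0, size=(200_000, n))  # 200k independent samples

# Sum of squared deviations from each sample's own mean
ss = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

print((ss / n).mean())        # divide by n:   averages near (n-1)/n * sigma^2 = 3.2 (biased)
print((ss / (n - 1)).mean())  # divide by n-1: averages near sigma^2 = 4.0 (unbiased)
```

Only the n-1 divisor centers on the true σ², matching the E[Σ(xi − x̄)²] = (n−1)σ² result proved in the video.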

  • @noneofyourbusiness9620
    @noneofyourbusiness9620 3 years ago +3

    You are my personal hero for the month and probably the following months too cos I'm gonna start studying everything from your videos now

    • @Stats4Everyone
      @Stats4Everyone  3 years ago +1

      Happy to hear you found my videos to be helpful :-)

  • @Qwevo8
    @Qwevo8 2 years ago +3

    I'm so happy I understood the concept. I found the finer details of the concept I was looking for. Thank you

  • @catcen9631
    @catcen9631 2 years ago +1

    WOW! Now we're talking! This is the best, literally the best! Academic, clear, perfect! Thank you so so much! Maybe I put too many exclamation marks, but I mean it! THANK YOU THANK YOU THANK YOU

  • @yassine20909
    @yassine20909 2 years ago +4

    I'm in a statistics/probability class this semester, which makes you my new best friend 😁.
    Thank you for the great explanation 👍👏

  • @ycombinator765
    @ycombinator765 2 months ago +1

    I didn't know if I would ever understand this... but here we are. Thanks!
    Also, this is called Bessel's correction. There may be a more rigorous proof out there but this is more than enough for me. Thank you!

  • @biaralier7790
    @biaralier7790 1 year ago +3

    Thanks for breaking it down, and I mean the simple things, like the meaning of an estimator. You're the best, ma'am.

    • @Stats4Everyone
      @Stats4Everyone  1 year ago

      Awesome! I'm happy to hear that you found this video to be helpful :-)

  • @kaylorzhang8959
    @kaylorzhang8959 1 year ago +1

    Thank you. Excellent teaching.

  • @hwyum97
    @hwyum97 11 months ago +2

    Thanks for the clarification! One question here: why does var(X bar) equal sigma^2 / n?

    • @Stats4Everyone
      @Stats4Everyone  10 months ago +2

      I have a video discussing this question here: ua-cam.com/video/XymFs3eLDpQ/v-deo.htmlsi=uWfZpTGzePAd22ju
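For readers without access to the linked video, the claim Var(X̄) = σ²/n is easy to check numerically. A minimal sketch, with an arbitrarily chosen population N(5, 3²) (σ² = 9) and sample size n = 10:

```python
import numpy as np

# Hypothetical population N(5, 3^2): sigma^2 = 9; samples of size n = 10.
rng = np.random.default_rng(1)
sigma2, n = 9.0, 10
xbars = rng.normal(5.0, 3.0, size=(100_000, n)).mean(axis=1)  # 100k sample means

print(xbars.var())   # variance of the sample means, near sigma^2 / n = 0.9
print(sigma2 / n)    # 0.9
```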

  • @sriramnb
    @sriramnb 8 months ago +1

    Beautiful. Amazing. I was waiting to see this kind of an explanation. Thanks

  • @tahamahmood4220
    @tahamahmood4220 2 years ago +1

    just subscribed to your channel and recommend it to everyone reading this...

  • @moreenbundi8867
    @moreenbundi8867 3 years ago +4

    This was very helpful and easy to understand. Thank you so much

  • @fabiobiffcg4980
    @fabiobiffcg4980 10 months ago +1

    Finally, someone made it! Thanks!

  • @utsavmandal4843
    @utsavmandal4843 4 months ago +2

    Understood, ma'am! Great video

  • @cmrpancha5093
    @cmrpancha5093 1 year ago +1

    Nice explanation ❤

  • @CandidSpade1
    @CandidSpade1 1 year ago +1

    Perfect video! Thanks

  • @momotaakter7589
    @momotaakter7589 2 months ago +1

    I was wasting hours and hours on this topic, then I found your video❤

    • @Stats4Everyone
      @Stats4Everyone  2 months ago

      So happy to hear that you found this video to be helpful!! :-)

  • @charleslevine9482
    @charleslevine9482 1 month ago

    This was such a great video! Thank you!

  • @JoeM370
    @JoeM370 1 year ago +1

    This is meaningful material. A book I read on the same topic was a eureka moment for me. "Game Theory and the Pursuit of Algorithmic Fairness" by Jack Frostwell

  • @flaviusmiron6088
    @flaviusmiron6088 1 year ago +1

    Amazing explanation! Thank you so much!

  • @bertrandduguesclin826
    @bertrandduguesclin826 3 years ago +2

    You demonstrate that Xbar is an unbiased estimator of mu without assuming that Xbar follows a normal distribution centered around mu with variance equal to sigma_square/n. However, to show that S_square is a biased estimator of the variance sigma_square, you do make this assumption since you substitute var(Xbar) with sigma_square/n (at 13:02). Would it be possible to do the demonstration without this assumption/substitution?

    • @Stats4Everyone
      @Stats4Everyone  3 years ago +2

      Careful. Notice that I do not assume that the data is normally distributed in this video. I do not need the normality assumption for either proof in this video. Rather I use the definition of variance to find the variance of X-bar near minute 13.

    • @bertrandduguesclin826
      @bertrandduguesclin826 3 years ago +3

      @@Stats4Everyone TYVM. From your answer and en.wikipedia.org/wiki/Standard_deviation#Standard_deviation_of_the_mean, I finally got it.

    • @Kerenr88
      @Kerenr88 11 months ago

      @@bertrandduguesclin826 Thank you so much for that link! I was confused in the same place...

  • @francisopio-gs4zz
    @francisopio-gs4zz 1 year ago +1

    Good

  • @wangxuerui
    @wangxuerui 3 years ago +1

    Such a good video; it cleared all my confusion about this topic. I wish my professor could be half as good as you.

  • @divvvvyaaaa
    @divvvvyaaaa 1 year ago +1

    So well explained, thanks a ton

  • @albajasadur2694
    @albajasadur2694 2 months ago +1

    At 12:22, the green words, Var(xi)=E(xi^2) - [ E(xi) ]^2
    I don't get this part.

    • @Stats4Everyone
      @Stats4Everyone  2 months ago

      Great question! Here is a video that I hope you find helpful: ua-cam.com/video/551Q9QOKRns/v-deo.html
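As a concrete check of the identity asked about above (my own example, not from the video): for a fair six-sided die, both sides of Var(X) = E(X²) − [E(X)]² give 35/12:

```python
import numpy as np

faces = np.arange(1, 7)      # outcomes of a fair six-sided die, all equally likely
ex = faces.mean()            # E(X)   = 3.5
ex2 = (faces ** 2).mean()    # E(X^2) = 91/6

print(ex2 - ex ** 2)         # E(X^2) - [E(X)]^2 = 35/12 ≈ 2.9167
print(faces.var())           # Var(X) computed directly from deviations: same value
```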

  • @nataliamora8344
    @nataliamora8344 2 years ago +1

    Great, clear explanation! One small thing: in the computation done in green and then blue (around 12:44 and 13:44) I think you failed to carry down the square of mu, meaning your final derivation was sigma^2 + mu where it should have been sigma^2 + mu^2

    • @TheTweedyBiologist
      @TheTweedyBiologist 2 years ago +3

      I think she addressed it at 13:56

    • @Stats4Everyone
      @Stats4Everyone  1 year ago +1

      Yeah, I noticed it about 30 seconds later and corrected it in the video. Sorry for any confusion for that mistake!

  • @FunctioningAdult
    @FunctioningAdult 2 years ago +1

    Thank you!!

  • @EmmanuelOTASOWIE
    @EmmanuelOTASOWIE 3 months ago +1

    Thank you for this, well done

  • @WILLIAMARTANWIJAYA
    @WILLIAMARTANWIJAYA 2 months ago +1

    Very very helpful, thank you

  • @rakeshkumarmallik1545
    @rakeshkumarmallik1545 2 years ago +1

    Nice one, thanks for making such a nice video on statistics

  • @aartvb9443
    @aartvb9443 2 years ago +1

    Very clear explanation. Thank you!!

  • @GMW3939
    @GMW3939 2 years ago +1

    Clear explanation, good work.

  • @fhoooooooood
    @fhoooooooood 2 years ago +1

    Thank you, you are so helpful!

  • @ashishprasadverma9428
    @ashishprasadverma9428 2 years ago +1

    Hi Michelle, thank you for your wonderful and complete explanation

  • @mlfacts7973
    @mlfacts7973 1 year ago +1

    Great tutorial, thank you

  • @morancium
    @morancium 1 year ago +1

    This was so COOOOOL !!!

  • @AdvaitGaurB22CS004
    @AdvaitGaurB22CS004 1 year ago +1

    amazing ma'am loved it

  • @rakeshkumar-nm6lm
    @rakeshkumar-nm6lm 2 years ago +1

    Thank you

  • @churchilodhiambo9796
    @churchilodhiambo9796 1 year ago +1

    Very Wonderful 😢🎉❤
    God bless you soo much.

  • @Garrick645
    @Garrick645 7 months ago +1

    How did you express Var(x bar) in terms of the expected value of (x bar squared) and (the expected value of x bar) squared?
    Where can I read more theory about it?

  • @frult
    @frult 3 years ago +4

    Really clear. Thanks!

  • @hongkyulee9724
    @hongkyulee9724 2 years ago +1

    Wow, thank you for the wonderful video.

  • @kurienabraham8739
    @kurienabraham8739 2 years ago +1

    At 13:00, you equate var(x bar) with sigma squared divided by n. I cannot get my head around this step. How is the variance of sample means the same as the population variance divided by sample size?

    • @Stats4Everyone
      @Stats4Everyone  2 years ago

      Great question! Thank you Kurien Abraham for this post. Here is a video I made to try to address this question: ua-cam.com/video/XymFs3eLDpQ/v-deo.html
      Please let me know if you have any follow-up questions :-)

  • @KO-lm6wh
    @KO-lm6wh 1 year ago +1

    Amazing explanation❤

  • @danielsolorioparedes5866
    @danielsolorioparedes5866 3 years ago +2

    BEST VIDEO EVER! THANK U SO MUCH!

  • @keithgoldberg2298
    @keithgoldberg2298 7 months ago

    Great explanation! Thank you.

  • @manishchauhan5625
    @manishchauhan5625 1 year ago +1

    You are amazing....thanks for this video

  • @EzraJeremiah-cl7ub
    @EzraJeremiah-cl7ub 1 month ago

    I'm impressed by you

  • @sherlock4811
    @sherlock4811 3 years ago +1

    Thanks a lot for the video! Very clear and precise!

  • @Yaara_1
    @Yaara_1 4 months ago +1

    Thank you so much!!!!!

  • @-ul7lh
    @-ul7lh 1 year ago +1

    amazing

  • @joypaul1976
    @joypaul1976 2 years ago

    7:06 you said the expression is divided by n-1 to get the unbiased estimator. Will that work for any other number?

  • @Surya_Kiran_K
    @Surya_Kiran_K 7 months ago

    Wow, thank you so much for your explanation.
    I'm really so glad that you use different colors for deriving something out of the main problem ❤
    It helps us understand better.
    💓Again, thank you so much😄

  • @MoinulHossain-rw2ry
    @MoinulHossain-rw2ry 9 months ago

    Thanks a lot. Love from Bangladesh. You have a great voice and accent too.

  • @AAnonymouSS1
    @AAnonymouSS1 1 year ago +2

    Finally got it ❤️

  • @gulzameenbaloch9339
    @gulzameenbaloch9339 1 year ago +1

    Thank you so much😊

  • @sakib_32
    @sakib_32 1 year ago

    Please make more videos on statistical inference

  • @MariahYedidiah
    @MariahYedidiah 3 months ago +1

    why is var(x bar) = sigma^2 / n

    • @Stats4Everyone
      @Stats4Everyone  3 months ago

      Great question! Thank you for this post. Here is a video where I discuss this: ua-cam.com/video/XymFs3eLDpQ/v-deo.html

  • @vivi412a8nl
    @vivi412a8nl 3 years ago +3

    At around 5:11, after pulling the 1/n and the Sigma out, you said that E(xi) = mu (the true mean of the population). But xi, as you said in the beginning, was an observation that we chose randomly, i.e. it's a specific value (like a number), so shouldn't the expected value of a number be itself (E(xi) = xi)? How could it be the mean of the population? Could someone help me understand that part?

    • @Stats4Everyone
      @Stats4Everyone  3 years ago

      Good question. Thanks for this post. The mean of the random variable xi is always mu, regardless of i. This is an assumption for the proof. If I were to observe several random values of x (obtain a sample), those values would be coming from the same population where the mean of x values is mu.

    • @MattSmith-il4tc
      @MattSmith-il4tc 3 years ago

      Michelle is correct. It's true that E(xi)=xi for all numbers xi, but your mistake (and it's a common one) is that xi is not a number. It is a random variable that will result in some number after a chance process. The mean of the random variable xi is the population mean mu.

    • @timetravelerqc
      @timetravelerqc 2 years ago

      @@MattSmith-il4tc Do you mean that if we treat the xi in E(xi) as a random variable, that single xi varies and the expected value of this single sample is the population mean mu?

  • @guangzexia
    @guangzexia 3 years ago +1

    Hi Michelle, thanks for your work! But I still have some questions. At 13:44, you substituted E(xi^2) with sigma^2 and mu^2. I don't think you can do that. Because the xi in var(xi) = E(xi^2) - (E(xbar))^2 is the value from the whole population, but the xi in the equation (∑E(xi^2) - nE(xbar^2))/n is the value taken from the sample. So, the sigma in the equation E(xi^2) = mu^2 + sigma^2 means the sigma of our sample, rather than the whole population.

    • @vrishabshetty1325
      @vrishabshetty1325 2 years ago

      Mostly it's given that E(xi) = mu
      That means for any xi, regardless of where it is from, its E(xi) is mu

    • @ritulahkar8549
      @ritulahkar8549 1 year ago

      I think many people explain this by interchanging X for both. It would be better if they used different variables for the xi of the population and the xi of the sample.

  • @mainclass6511
    @mainclass6511 3 years ago +1

    Thank you so much...
    I am speaking from Bangladesh

  • @sumonsarker6613
    @sumonsarker6613 1 year ago +1

    very helpful and clear

    • @Stats4Everyone
      @Stats4Everyone  1 year ago

      Awesome! Happy to hear that this video was helpful!

  • @NN-br2xh
    @NN-br2xh 2 years ago

    @5:21 why is the mean of all the Xi equal to the same mu?

    • @Stats4Everyone
      @Stats4Everyone  2 years ago +1

      Good question. Thanks for the comment. All the Xi come from the same population, therefore they all have the same population mean, mu.

  • @dilloninmotion
    @dilloninmotion 5 months ago

    Super helpful, thank you.

  • @ammarsaati
    @ammarsaati 3 years ago

    Great.. very helpful explanation

  • @MariahYedidiah
    @MariahYedidiah 3 months ago +1

    got it, thank you!!!!!!

  • @purvi9958
    @purvi9958 2 years ago

    Thank you so much... this cleared all my doubts.

  • @AlulaZagway
    @AlulaZagway 1 year ago +1

    Any proof for the SDOM? I don't get why we have root(N) as the denominator in the normal distribution SDOM

    • @Stats4Everyone
      @Stats4Everyone  1 year ago

      This video may be helpful: ua-cam.com/video/XymFs3eLDpQ/v-deo.html

  • @jamesbrown7885
    @jamesbrown7885 2 years ago

    Hey, I have a question: when you showed us that the expected value of sigma squared is a biased estimator, that is called a mathematical proof in econometrics, right?

  • @rivierasperduto7926
    @rivierasperduto7926 1 year ago

    at the 12:44 mark should it not be sigma squared + mu squared = E(x sub i squared)

    • @Stats4Everyone
      @Stats4Everyone  1 year ago +1

      I noticed this mistake about 30 seconds later and corrected it in the video. Sorry for any confusion!!

    • @rivierasperduto7926
      @rivierasperduto7926 1 year ago +1

      I should have finished the video but I just did now. Thanks for clearing that up for me

  • @abhishek-u7c
    @abhishek-u7c 2 months ago

    4:42 why are we taking E( ) but not equating it to "mu"? Please someone reply

    • @Stats4Everyone
      @Stats4Everyone  2 months ago +1

      The reason we are finding the expected value of the sample mean (i.e. E(Xbar)) is because an unbiased estimator has the property that its expected value equals the population parameter. In this case, by 5:47, we show that E(Xbar) = mu

  • @hitoshijun2600
    @hitoshijun2600 3 years ago

    this is so easy to understand now. ty

  • @thomasdehee9626
    @thomasdehee9626 2 years ago

    Very clear, thank you so much!

  • @tebogohappybasil7469
    @tebogohappybasil7469 2 years ago

    This is very powerful 👏 🙌 👌💪

  • @lollipoppeii4707
    @lollipoppeii4707 3 years ago

    what the heck, this is diamond.
    Thanks from Taiwan.

  • @francesco4382
    @francesco4382 2 years ago

    good work

  • @sadiqurrahman2
    @sadiqurrahman2 3 years ago

    More than excellent.

  • @merlin1339
    @merlin1339 2 years ago

    Ma'am, I have a doubt at 12:18: why are we taking sigma² for var(xi) instead of S²?

    • @shinshenghuang1941
      @shinshenghuang1941 2 years ago

      I think it's because sigma squared itself is the symbol for variance, and in the video she was just explaining the definition of variance in order to continue the calculations from the previous steps.

    • @shinshenghuang1941
      @shinshenghuang1941 2 years ago

      That sigma square is just a symbol for the concept of “variance”.

  • @bernicemaina4282
    @bernicemaina4282 1 year ago +1

    ❤❤❤❤

  • @Miyelsh
    @Miyelsh 3 years ago

    Great explanation!

  • @ingridvogt7252
    @ingridvogt7252 3 years ago +1

    thank you so much!

  • @VictorSantos-yb8ir
    @VictorSantos-yb8ir 11 months ago

    Thank you very much

  • @LmaoDed-haha
    @LmaoDed-haha 10 months ago +1

    I don't understand why E(xi) = u in the first place. I mean, capital Xi denotes the units of the population, let's say it has N units, and small xi denotes the units of the sample, which has n units. So E(Xi) should be equal to u (the population mean), but how can we say E(xi) = u, since xi is just a small subset of the population units Xi, by definition of a sample?
    Help me.

    • @Stats4Everyone
      @Stats4Everyone  8 months ago

      Thanks for this comment! A sample is a subset of the population. Sorry for any confusion regarding notation... in this video, I do not use capital Xi and lowercase xi, because I am referring to the same objects. For example, let us think about a small population. Suppose my population is the following set:
      {3, 5, 6, 2, 1, 7, 8, 10}
      the population average, mu, is 5.25. Also, the expected value for any member of this set is 5.25.
      mu = E(Xi) = 5.25
      Now, suppose I were to take a random sample of 3 objects from this population:
      {5, 1, 8}
      Here, the sample mean, Xbar, is 4.67. This sample mean is an estimate of the population mean. Though, the population mean is not changed by us taking this sample. It still holds true that mu = E(Xi) = 5.25.
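The toy population in this reply is easy to simulate; drawing many random members shows their long-run average settling at mu = 5.25 (the number of draws and the seed are my own choices):

```python
import numpy as np

pop = np.array([3, 5, 6, 2, 1, 7, 8, 10])   # the small population from the reply above
print(pop.mean())                           # mu = 5.25

rng = np.random.default_rng(0)
draws = rng.choice(pop, size=1_000_000)     # many independent random draws of X_i
print(draws.mean())                         # long-run average near 5.25, i.e. E(X_i) = mu
```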

  • @joaoneilopes
    @joaoneilopes 2 months ago +1

    absolutely beautiful

  • @perischerono987
    @perischerono987 1 year ago

    In general, when we use n-1, is it biasedness or

    • @perischerono987
      @perischerono987 1 year ago

      Sorry... I meant n and not n-1

    • @Stats4Everyone
      @Stats4Everyone  1 year ago

      @@perischerono987 Do you have an example in mind? We use n in the denominator for x-bar so that it is an unbiased estimator for the population mean, and use n-1 in the denominator of s^2 so that it is an unbiased estimator for the population variance. Every estimator needs its own proof of unbiasedness... In other words, in general, we need to show that
      E(estimator) = population parameter

  • @hannahdettling3112
    @hannahdettling3112 8 months ago +1

    Thanks for the video, it helped me a lot. But in my course it's the other way around: when you have 1/n it's an unbiased estimator and when you have 1/(n-1) it's biased, so now I'm lost again😂

    • @Stats4Everyone
      @Stats4Everyone  8 months ago +1

      In your course, is the estimator the sample variance? For example, if you are estimating a mean, the unbiased estimator would have n in the denominator... Though for the sample variance, the proof that I provide in this video is correct. Here is another source that might be helpful: en.wikipedia.org/wiki/Bias_of_an_estimator#:~:text=Sample%20variance,-Main%20article%3A%20Sample&text=Dividing%20instead%20by%20n%20%E2%88%92%201,results%20in%20a%20biased%20estimator.

  • @LyndaLiu
    @LyndaLiu 29 days ago

    If X~N(10, 4), X bar = (X1+X2+X3)/3; what’s var (X bar - X3)? If I compute var (X bar) + Var (X3)=4/3+4=16/3. But if I simplify X bar - X3 to (X1+X2-2X3)/3, then the variance becomes (4+4+4*4)/9=8/3. How come they are different? Thanks for your help!

    • @Stats4Everyone
      @Stats4Everyone  28 days ago

      Xbar and x3 are not independent. Therefore, the variance of xbar - x3 is not equal to the variance of xbar plus the variance of x3. Rather:
      Var(xbar - x3) = var(xbar) + var(x3) - 2cov(xbar, x3)
      Now… to find the cov(xbar, x3)…
      Cov(xbar, x3) = cov(1/n sum(xi) , x3) = 1/n sum (cov (xi, x3))
      Since xi is independent of x3 for all cases except when i = 3, the cov(xi, x3) = 0 except for when i =3. When i is 3, then cov(x3, x3) = var(x3) = 4. Therefore,
      Cov(xbar, x3) = 1/n * 4 = 4/3
      Now, we have:
      Var(xbar - x3) = var(xbar) + var(x3) - 2cov(xbar, x3) = 4/3 + 4 - 2*4/3 = 4/3 + 12/3 - 8/3 = 8/3
      Hopefully this helps. Thanks for the interesting problem!
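That covariance argument is also easy to double-check by simulation (reading N(10, 4) as variance 4, i.e. standard deviation 2, as the derivation above does; the number of trials and the seed are arbitrary):

```python
import numpy as np

# X1, X2, X3 iid N(10, 4), i.e. variance 4 (sd 2); Xbar = (X1 + X2 + X3) / 3
rng = np.random.default_rng(7)
x = rng.normal(10.0, 2.0, size=(1_000_000, 3))
xbar = x.mean(axis=1)
diff = xbar - x[:, 2]        # Xbar - X3

print(diff.var())            # near 8/3 ≈ 2.667, not 16/3: Xbar and X3 are dependent
```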

  • @swaggy745
    @swaggy745 1 year ago

    If we are given a pdf of 4 values of x with their probabilities in terms of theta, and we find an estimator for the mean, theta-hat, and then we find the mean square error in terms of theta (should it be in terms of theta?), how can we find if it is mean square consistent? I am unsure because n=4 for my question, so I can't see how it makes sense to consider the limit as n goes to infinity. Please could someone shed some light. Thank you

  • @hiralvaghela6109
    @hiralvaghela6109 10 months ago +1

    perfect!

  • @asiimwemuhabuzimuhoozi3422
    @asiimwemuhabuzimuhoozi3422 3 years ago

    Thank you❤

  • @ChakravarthyDSK
    @ChakravarthyDSK 3 years ago

    Can you talk about various other estimators? The best thing is that you are fluent in the subject... clap... clap...