What are degrees of freedom?!? Seriously.

  • Published 29 Jun 2024
  • See all my videos at www.zstatistics.com/videos/
    Ever wondered why lecturers often baulk at the idea of explaining degrees of freedom?? Well... it's a tough topic. But here it is. Presented succinctly in all of its delicious glory.
    You'll find that in understanding degrees of freedom, you actually are leaps ahead in understanding statistics itself.
    1:13 Introduction
    3:23 Degrees of Freedom Intuition (WATCH THIS BIT!)
    7:33 Standard deviation and descriptive statistics
    15:05 Regression
    20:28 Chi-squared goodness of fit test
    24:12 Chi-squared test for independence

COMMENTS • 229

  • @bettybennett1955
    @bettybennett1955 2 роки тому +11

    Blessings. I’m a 79 yr old grad student. This stuff is rocking my self image

  • @jesusvelazquezdelatorre8060
    @jesusvelazquezdelatorre8060 3 роки тому +43

    I'm so impressed with how in-depth, and without choking, you are able to teach statistics. It's a pretty complex subject, and thanks to you we are getting to understand it, and who knows, maybe some of us will even like it. THANKS!

  • @AlexThess78
    @AlexThess78 5 років тому +41

    Hi! Your teaching style is exceptional! You really know how to approach the weaknesses of the students! Keep up the good work!

  • @Unused50
    @Unused50 4 роки тому +2

    Your pace of delivery is perfect. Can't thank you enough!

  • @SiddharthPrabhu1983
    @SiddharthPrabhu1983 5 років тому +38

    Beautifully explained! This is a topic that gets glossed over a lot in statistics courses and I really appreciate the amount of time you devoted to it.

  • @Ruostesieni
    @Ruostesieni Рік тому +3

    Just commented on your other video, but ended up watching this too by accident while I was searching for information on degrees of freedom. As a medical student doing research on a biostatistics-heavy subject, you truly are a lifesaver! Stuff like this really helps me to keep going instead of needing to go through multiple statistics courses at uni on top of all my other studies and research project work!

  • @williamhepworth7097
    @williamhepworth7097 3 роки тому +5

    Thanks for putting these together man. I'm not a student; I'm just brushing up on my stats. Your explanations are spot on. Had I had a teacher like you, I wouldn't be brushing up on my stats now (and I had some great mathematics teachers).

  • @elvisfan6476
    @elvisfan6476 3 роки тому +1

    Probably one of the BEST instructors I have run into in my career or lifetime - most people cannot teach statistics - this gentleman is awesome!

  • @just_mudit
    @just_mudit 3 роки тому +7

    Big fan, Justin, you really don't know how helpful and mind-blowing these videos are for, like, the whole world. I wish you health and happiness in these extraordinary times. Where are you from, as in country and city? I would be fortunate to meet you someday, you're a great guy!

  • @rsivamvrajoo6656
    @rsivamvrajoo6656 4 роки тому +1

    Detailed coverage, kudos. Finally, what I have been looking for. Appreciated.

  • @JoaoVitorBRgomes
    @JoaoVitorBRgomes 4 роки тому +6

    It is even cool to watch stats with you as a teacher! Bravo.

  • @varunsagartheegala
    @varunsagartheegala 3 роки тому

    Absolutely amazing explanation of degrees of freedom. You gave a very good, simple and easy example with the urchins, through which what df is and how it's calculated could be immediately understood. Great job.

  • @besfatdejen5744
    @besfatdejen5744 2 роки тому +1

    That is a very explanatory, calm-sounding, step-by-step lecture. I really like it because it is very easy even for complete beginners in statistics. Good job! Many thanks.

  • @harrysmyth4540
    @harrysmyth4540 2 роки тому

    Thank you for actually explaining the meaning behind DF clearly before jumping in to any abstract analysis of numbers. It is frustrating trying to find clear content so thank you for that.

  • @geraldolson7364
    @geraldolson7364 2 роки тому

    This is a very helpful explanation on a topic that is all too easily glossed over, but I think it is essential to getting a firm grasp of what we are doing with statistics. Thank you for taking the time to post it.

  • @redmotherfive
    @redmotherfive 5 років тому +1

    Probably the best explanation about df on YouTube, well done!

    • @ado22222
      @ado22222 3 роки тому +1

      Yeah? But what's the explanation, really?

  • @krystalA5379
    @krystalA5379 Рік тому

    Thank you so very much for this thorough and well delivered explanation of a complex concept that many educators try to breeze over. The type of explanation you provided is rare and I can't believe how smoothly and clearly you delivered your content. I am super impressed and very inspired as I tutor statistics to my fellow students. Thank you again. Liked, Subscribed and hit the bell 😊🙏

  • @story_teller_1987
    @story_teller_1987 3 роки тому +4

    Happy 2021...Thank you Justin for the immense effort you put into this video...Love from Kerala...🙂

  • @StudhouseK
    @StudhouseK 3 роки тому

    From 17:00, for the next minute: that's where it clicked for me, and I (somewhat) understood what degrees of freedom means. Thank you! Great breakdown.

  • @ruchisengar8940
    @ruchisengar8940 2 роки тому

    Superb explanation. This was stuck in my head. You just cleared the concepts. Grateful to you.

  • @muhammadalimoriyani9872
    @muhammadalimoriyani9872 Рік тому

    You are really an excellent teacher! I love the way you explain these concepts!

  • @cecilmcintosh864
    @cecilmcintosh864 Рік тому

    Liked the video as soon as I heard his introduction. Summed my feelings about the topic up perfectly.

  • @stephenday4834
    @stephenday4834 3 роки тому

    1:26 - Relief, I thought that music was going all the way through the video. Awesome video. Best explanation I've found so far.

  • @lakshyarao8780
    @lakshyarao8780 Рік тому +1

    so good. also his choice of words to explain concepts is really good

  • @vak1103
    @vak1103 5 років тому +1

    you are a life saver for stats students...

  • @brazhell
    @brazhell 2 місяці тому

    I'm grateful for your lectures, and can say that this specific topic always felt somehow incomplete for me, until now! I'm studying calculus, and statistics is a challenge for me. Thank you, and stay healthy!

  • @trevorkafka7237
    @trevorkafka7237 3 роки тому +1

    This was very informative! I will be sharing this with my students.

  • @ichigokurosaki8211
    @ichigokurosaki8211 3 роки тому

    I have never seen anyone describe degrees of freedom so clearly, thanks!

  • @jamesrobertson9149
    @jamesrobertson9149 5 років тому +1

    I'm very glad I subscribed to this channel

  • @Muuip
    @Muuip 3 роки тому

    Great concise presentation!
    Much appreciated!👍

  • @elshaddaiharris5595
    @elshaddaiharris5595 2 місяці тому

    Loving your videos so far... Really helpful 🎉

  • @MDMAx
    @MDMAx 2 роки тому

    Very helpful. Thank you for helping me master this subject.

  • @olgadrobina1563
    @olgadrobina1563 Рік тому

    this explanation is simply fantastic, thank you so much!

  • @moza9835
    @moza9835 Рік тому

    Thank you so much sir. Please keep up the good work. I'm learning a lot.

  • @saurabh75prakash
    @saurabh75prakash 5 років тому

    Thanks and a very happy new year.

  • @akshaydayanandan7006
    @akshaydayanandan7006 3 роки тому

    This is like the 8th video I'm watching on this channel today!! Where have you been all this while!?

  • @mok0802
    @mok0802 2 роки тому

    This is the best video I have found online to explain DF: it is the number of independent pieces of information that exist in a sample for estimating the population. If we want to estimate, we must know the minimum number of independent pieces of the sample needed to make that estimate about the population; generally, the more df, the more accurate the estimate from the sample.
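A minimal NumPy sketch (an editorial addition, not from the video or the comment) of that "independent pieces of information" idea: once the sample mean is fixed, the deviations from it must sum to zero, so only n - 1 of them are free to vary.

```python
import numpy as np

# Once the sample mean is fixed, the deviations from it must sum to zero,
# so the last deviation is fully determined by the other n - 1.
rng = np.random.default_rng(0)            # arbitrary seed for this sketch
x = rng.normal(loc=10, scale=2, size=5)   # n = 5 observations
dev = x - x.mean()                        # deviations from the sample mean

print(dev[:-1])                           # the first n - 1 deviations: free to be anything
print(dev[-1], -dev[:-1].sum())           # the last one equals minus their sum
print(round(dev.sum(), 12))               # all deviations sum to (numerically) zero
```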

  • @rahulsachani
    @rahulsachani 2 роки тому

    Kudos to you Bud! Great Explanation!

  • @michaelcheng6128
    @michaelcheng6128 5 років тому

    It's so good!!!! It's the way of how statistics should be taught!

  • @matthamdorf4808
    @matthamdorf4808 2 роки тому

    The three-dimensional explanation of degrees of freedom in regression was really a light bulb moment. Awesome stuff.

  • @michaelsun7968
    @michaelsun7968 4 роки тому +1

    Super clear explanation!

  • @sadekshowkat1949
    @sadekshowkat1949 3 роки тому +1

    Awesome explanation! thanks!

  • @RohanMishra-mz1gl
    @RohanMishra-mz1gl 3 роки тому +25

    I wish you were my professor

    • @pk-uk5lc
      @pk-uk5lc 3 роки тому +1

      That's something most of us Indians want - better education

    • @pk-uk5lc
      @pk-uk5lc 3 роки тому

      It's better if we adapt the Vedic methods😂😂

    • @somcana
      @somcana 2 роки тому

      He is your professor by choice.

    • @boburbekkhakimov2386
      @boburbekkhakimov2386 Рік тому

      Support the idea

    • @drpritamkrdas_official
      @drpritamkrdas_official 2 місяці тому

      @@pk-uk5lc Hey there, buddy! It ain't about being Indian or American, it's about the individual's ability to simplify things and make them more understandable. There are tons of examples of amazing Indian educators out there, not just on YouTube.

  • @HUnatuurkunde
    @HUnatuurkunde 2 роки тому +1

    great explanation. Keep up your good work!

  • @daiversetech8000
    @daiversetech8000 3 роки тому

    Fascinating Work you are doing ... Keep it up Plz

  • @zamzam9031
    @zamzam9031 2 роки тому

    Wow... this is really awesome... you did in 30 mins what my lecturer couldn't do over the whole semester... LOL. THANK YOU!!

  • @RegularObamahedron
    @RegularObamahedron 3 роки тому

    Thank you! Intuitive explanations.

  • @user-ib7fn8kr9b
    @user-ib7fn8kr9b 3 роки тому

    the best stats explanation that I ever had!!!

  • @joeforshaw8362
    @joeforshaw8362 4 роки тому

    Outstanding. Thank you!

  • @angelamumbo2399
    @angelamumbo2399 2 роки тому

    Best explanation of Chi Square so far. Best use of my 27 minutes

  • @selamawitayalew5992
    @selamawitayalew5992 2 роки тому

    What great explanation! I thank you.

  • @rehabalsaleh166
    @rehabalsaleh166 3 роки тому

    Thanks a lot! This was really helpful! Thanks again sir!

  • @javiergonzalezarmas8250
    @javiergonzalezarmas8250 Рік тому

    Incredible explanation!

  • @nikkihuang7635
    @nikkihuang7635 2 роки тому

    Great explanation, thank you!

  • @maftoumiali4412
    @maftoumiali4412 4 роки тому

    You're really good, thank you

  • @yr4865
    @yr4865 Рік тому

    Clearly explained, excellent.

  • @Napoleon4778
    @Napoleon4778 4 роки тому +1

    Quite clearly explained.

  • @ricardoafonso7563
    @ricardoafonso7563 3 роки тому

    Is that you? Nice to put a face to a name.
    Been watching a few of your videos. Thank you.
    Hope to use the skills in my retirement... touching 60.

  • @asknorway
    @asknorway 4 роки тому

    Liked and shared. Great content!

  • @malichicu
    @malichicu 4 місяці тому

    Brilliantly explained

  • @sangaviloganathan5194
    @sangaviloganathan5194 4 роки тому +5

    Amazing content.
    But can you say why, at 7:03, you mean that there are only 5 df for the mean and 4 df for the std? I am a newbie to stats.

  • @charlottebartlett1212
    @charlottebartlett1212 4 роки тому +7

    Loved the line about dividing by zero, "mathematically speaking, an explosion." Made me laugh out loud.

    • @gregoryharlston0602
      @gregoryharlston0602 3 роки тому +3

      I noticed that too but it didn’t cause me to lol...just a chuckle.😏

  • @nomadicadventure4624
    @nomadicadventure4624 3 роки тому

    Sir, I would like to meet you some day... you really clarified one of my biggest doubts.

  • @abu-bakrmohamed1707
    @abu-bakrmohamed1707 Рік тому

    Seriously, u are the greatest, I love u man 🤍🤍

  • @kjweng4464
    @kjweng4464 4 роки тому +2

    Could you please explain why we use degrees of freedom to adjust the difference between sample statistics and population parameters? What does that have to do with "independent pieces of information"?

  • @just_a_viewer5
    @just_a_viewer5 Рік тому

    amazing video! thank you!!

  • @taxsi
    @taxsi 2 роки тому

    If you already know maths and descriptive statistics, except for degrees of freedom and the use of (n-1) instead of n, the critical explanation for you starts at 13:11 and ends at the 15th minute, but it's kind of explained away without a real explanation. Let me look for the sections about regression etc.

  • @manuelelkess8259
    @manuelelkess8259 10 місяців тому

    Great video; actually, for the first time I can say I understand DFs.

  • @sanjaykrish8719
    @sanjaykrish8719 4 роки тому

    WOw.... Amazing amazing. So far I just took it as a rule of thumb. Now it all makes sense.

  • @sheikhusman3
    @sheikhusman3 5 років тому +62

    You changed n to n-1 without any mathematical proof. Any other proofs? If we want to inflate the estimate, why not make it n-2? I am assuming there is some robust mathematical reasoning.

    • @niemand262
      @niemand262 5 років тому +17

      We use df primarily when estimating variances, because we know that dividing by n underestimates the population variance. To my knowledge, there is no formal mathematical proof to show that n-1 is necessarily "correct". We can never know it is "correct", because in the real world we never know the true population values.
      That said, statisticians have demonstrated the concept using toy data sets for which the population values are defined. With these models, n-1 reliably made the variance estimates better. If n-2 had been better at the job, they would have chosen it.
      It's the mathematical equivalent of having thermometers that reliably read 10% colder than reality. We are simply compensating for a known bias in the tool.
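A quick simulation sketch of the point above (an editorial addition, not part of the thread; assumes NumPy is available): with the true population variance known, the divide-by-n estimator comes in low by a factor of (n-1)/n, while the divide-by-(n-1) version is right on average.

```python
import numpy as np

# Draw many samples from a population with known variance (sd = 2, so sigma^2 = 4)
# and compare the average of the two variance estimators across samples.
rng = np.random.default_rng(42)            # arbitrary seed for reproducibility
n, trials = 5, 200_000
samples = rng.normal(loc=0.0, scale=2.0, size=(trials, n))

dev2 = (samples - samples.mean(axis=1, keepdims=True)) ** 2
var_n  = dev2.sum(axis=1) / n              # divide by n: biased low
var_n1 = dev2.sum(axis=1) / (n - 1)        # divide by n - 1: Bessel's correction

print("true variance           :", 4.0)
print("average of /n estimate  :", round(var_n.mean(), 3))    # about 3.2 = (n-1)/n * 4
print("average of /(n-1) estimate:", round(var_n1.mean(), 3)) # about 4.0
```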

    • @jaydeepbabar6042
      @jaydeepbabar6042 4 роки тому +2

      The number of degrees of freedom of a sum of squares = the number of independent variables in that sum of squares.
      Let SS = sum of (yi - ybar)^2, i = 1, 2, ..., n.
      Here SS is a sum of squares of n elements: (y1 - ybar), (y2 - ybar), ..., (yn - ybar).
      These elements are not all independent, because sum(yi - ybar) = 0 (which is a dependence condition on the variables).
      So we use 'n-1' degrees of freedom for SS instead of n.

    • @jaydeepbabar6042
      @jaydeepbabar6042 4 роки тому

      Please correct me if I am wrong.

    • @henrysorsky
      @henrysorsky 4 роки тому +12

      @@niemand262 there are mathematical proofs as to what gives the best estimate on average (i.e. an unbiased estimate). In terms of standard deviations, it's called "Bessel's correction" and there is a proof as to why we use n-1. As for using n-p, i.e. some other number of degrees of freedom, I THINK these are calculated by seeing how many of the data points are free to vary and still give us the same statistic. For example, if we calculate the mean of [x1, x2, x3] and vary any of them, we can just move another one so that the mean stays the same. As we can move any of them, we have n=3 degrees of freedom. If we are estimating the population variance from a sample without knowing the population mean, we are solving 2 equations (one for the mean and one for the variance) with n unknowns. As such, we can "replace" one of the n data points in the equation for the standard deviation with some function of the sample mean whilst still technically expressing the standard deviation in the same way. As we can do this replacement of one of the data points with one of our statistics, we have n-1 degrees of freedom. This could be slightly wrong (I came to this video hoping for a full mathematical explanation) but I'm fairly sure it's the gist of it.
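For reference (an editorial addition, not part of the comment), the short expectation argument behind Bessel's correction that the comment alludes to:

```latex
\begin{align*}
\mathbb{E}\left[\sum_{i=1}^{n}(X_i-\bar{X})^2\right]
  &= \mathbb{E}\left[\sum_{i=1}^{n}(X_i-\mu)^2 - n(\bar{X}-\mu)^2\right]\\
  &= n\sigma^2 - n\cdot\frac{\sigma^2}{n}
   = (n-1)\sigma^2 ,
\end{align*}
```

so dividing the sum of squared deviations by n - 1 rather than n gives an estimator whose expectation is exactly sigma^2.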

    • @coastalbrake8886
      @coastalbrake8886 4 роки тому +1

      @@henrysorsky thanks for pointing out "Bessel's correction". Your intuitive explanation of degrees of freedom makes sense, but why doesn't the same intuition also apply to the standard deviation of the population? After all, given a set of n observations, if you know the deviations of n-1 points around the mean, you know the value of the nth deviation.

  • @schizoframia4874
    @schizoframia4874 6 місяців тому +1

    This is very helpful 😃

  • @thepragmaticway8353
    @thepragmaticway8353 3 роки тому +3

    So, I have a question. What if the population mean, by chance, happens to lie right on the place where our sample mean is calculated? Then wouldn't we be unnecessarily inflating the variance? Might be a silly question, but I am understanding the concept so well that I just wanted to ask. :)

  • @yoon8158
    @yoon8158 Рік тому

    23:53 Ooh, that's a dangerous assumption in 2022 haha. Thanks for the great lecture!

  • @gregoryharlston0602
    @gregoryharlston0602 3 роки тому +1

    What software are you using to make these excellent videos? They are amazingly crisp and clean! With the current pandemic and wanting to create better videos, I would love to know what tools you use. If you could share, that would be great! Thank you for making these videos!

    • @gazzzada
      @gazzzada 3 роки тому

      obvious that it's Prezi software

    • @gregoryharlston0602
      @gregoryharlston0602 3 роки тому

      @@gazzzada Is it? I have used Prezi and wouldn't have guessed that's what was used to create this video. Thanks.

    • @gazzzada
      @gazzzada 3 роки тому

      The current version gives even more options.

  • @MathManMcGreal
    @MathManMcGreal 5 років тому

    Love it!

  • @elbroopymuchy5447
    @elbroopymuchy5447 Рік тому

    what a legend! thank you!

  • @dseven8756
    @dseven8756 2 роки тому

    really great video

  • @Ycaru5
    @Ycaru5 3 роки тому +1

    Hi Justin, nice and useful video you've got here. I'd like to request that you make one about bias, and what it means when you say "unbiased estimate". Thank you.

    • @zedstatistics
      @zedstatistics  3 роки тому +1

      Oooo! I like this idea. Might even do a series on bias. Though I'm behind on other series at the moment so STAY TUNED :)

  • @karannchew2534
    @karannchew2534 3 роки тому

    17:13 Only with the third observation do we have a "degree of freedom", such that the regression line can cut through the points to get the errors and the parameters.
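A small sketch of that point (an editorial addition; assumes NumPy, and the helper name fit_residuals is made up for illustration): with two parameters (slope and intercept), two data points leave 2 - 1 - 1 = 0 residual degrees of freedom, so the fitted line passes through them exactly; only a third point produces non-zero errors.

```python
import numpy as np

def fit_residuals(x, y):
    # Ordinary least-squares line y = intercept + slope * x, then its residuals.
    slope, intercept = np.polyfit(x, y, deg=1)
    return y - (intercept + slope * x)

# Two points, two parameters: 0 residual df -- residuals are (numerically) zero.
print(fit_residuals(np.array([1.0, 2.0]), np.array([3.0, 5.0])))
# Three points: 3 - 1 - 1 = 1 residual df, so genuine errors appear.
print(fit_residuals(np.array([1.0, 2.0, 3.0]), np.array([3.0, 5.0, 6.0])))
```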

  • @811madel
    @811madel 3 роки тому

    Nice & illustrative.
    May I ask what software you use to produce such attractive and informative videos?

  • @nguyenhuudung9847
    @nguyenhuudung9847 7 місяців тому

    just amazing...

  • @redmotherfive
    @redmotherfive 4 роки тому

    I also like to think of n-1 as reminding us that we just have xbar and hence only 1 piece of information.

  • @nadavshoua9671
    @nadavshoua9671 2 роки тому

    First time i understand why it’s n-k-1. Thanks!

  • @CivilModules
    @CivilModules 3 роки тому +2

    9:10 You said the standard deviation is undefined, but mathematically both the numerator and denominator are zero, so why is it still undefined?

    • @zedstatistics
      @zedstatistics  3 роки тому +2

      0/0 = undefined, not zero, believe it or not!
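A tiny numerical illustration of that reply (an editorial addition; assumes NumPy): with a single observation, both the sum of squared deviations and the n - 1 divisor are zero, and 0/0 comes out as "not a number", not as zero.

```python
import numpy as np

x = np.array([7.0])                        # a sample of size n = 1

# Sample variance with the n - 1 divisor: 0/0, reported as nan
# (NumPy also emits a "Degrees of freedom <= 0" RuntimeWarning).
print(np.var(x, ddof=1))

# The underlying floating-point fact: 0.0 / 0.0 is undefined (nan), not zero.
print(np.float64(0.0) / np.float64(0.0))
```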

  • @cosibaakpabot7635
    @cosibaakpabot7635 2 роки тому

    Fantastic!!!

  • @user-rt3cd6iw1f
    @user-rt3cd6iw1f 3 роки тому

    Hi, thanks for the nice explanation!
    I have a question about calculating degrees of freedom in the chi-squared test. In population genetics, the degrees of freedom are calculated as "the number of categories - the number of parameters" when we do a chi-squared test for Hardy-Weinberg equilibrium. For example, if there are 6 genotypes (AA, BB, CC, AB, BC, AC), the number of categories (genotypes) is 6, and the number of parameters (alleles) is 3 (A, B, C). So the degrees of freedom is 3. Do you know why this is?
    Additional description: the sum of the allele frequencies is 1, and the expected genotype counts are calculated from products of allele frequencies.
    If the observed genotype counts are 30, 40, 30 for AA, AB, BB respectively, the allele frequencies are 0.5, 0.5 for A, B respectively. So the expected counts will be 25 (0.5*0.5*100), 50 (2*0.5*0.5*100), 25 (0.5*0.5*100) for AA, AB, BB.
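A hedged sketch of the commenter's own three-genotype example (an editorial addition; assumes SciPy is available). One allele frequency is estimated from the data, so df = 3 categories - 1 - 1 estimated parameter = 1, which is what ddof=1 encodes below.

```python
from scipy.stats import chisquare

# Observed genotype counts AA, AB, BB and Hardy-Weinberg expected counts
# from the estimated allele frequency p(A) = 0.5 (as in the comment above).
observed = [30, 40, 30]
expected = [25, 50, 25]   # p^2*N, 2pq*N, q^2*N with p = q = 0.5, N = 100

# chisquare defaults to k - 1 df; ddof=1 removes one more for the estimated
# allele frequency, leaving 3 - 1 - 1 = 1 degree of freedom.
stat, p_value = chisquare(observed, f_exp=expected, ddof=1)
print(stat, p_value)      # chi-squared = 4.0, p ~= 0.046 on 1 df
```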

  • @Dr_Finbar
    @Dr_Finbar 3 роки тому +2

    Your videos are so useful, thank you so much! One thing I can't get my head around here, though. We divide by n-1 (as opposed to n) to account for the variance needing to be larger, since our sample mean is just an approximation of the population mean, and the variance measured around the sample mean is as small as it can be. But we don't know the population mean, so our sample mean could be the same as the population mean, and thus we would be overestimating the variance by dividing by n-1 and not n. Is this true?

    • @MDMAx
      @MDMAx 2 роки тому

      Overestimation can occur only when n, sample size is greater than population size, which by definition is impossible. You can't gather more units to measure than absolute best case scenario, assuming you have available a 100% of information. For example, you can't run a questionnaire through all 8 billion people. And even if you could, you can't hope they all will have answered 100% honestly without biases. If you'd divide your result by 8 billion and 1, your result by default will have a smaller value than the true value. Dividing by a larger value gives you a smaller value. Anything we measure is less than 8 billion people, hence, all of our results will be larger or smaller than the true value. If we wouldn't have measured 1 person out of a total population, that 1 person would be the final measurement before the sample mean becomes a population mean. That one last piece of information increases or decreases the sample mean into a population mean.
      We divide by n-1, or n-k-1, or (c-1)(r-1) to have our guess be as close to the actual population as possible. We know that any guess we make is not spot on a population guess. Look at what happens with a variance in a perfect population (x-mu)/n with random numbers i.e. mean at zero:
      (3-0) + (2-0) + (1-0) + (0-0) + (-1-0) + (-2-0) + (-3-0) = 3 + 2 + 1 + 0 - 1 - 2 - 3 = 0
      All of the values cancel out. We square them to get rid of the negative sign and have any kind of usable metric to assess reality. We probably could have used |absolute value| bars with a similar result.
      When you pass 0, i.e. our sample mean passes the population mean.
      Statistics under/overestimates the true mean mathematically by default.
      Once your guess is higher than the mean, the now squared distance starts to grow again.
      9 + 4 + 1 + 0 +1 + 4 + 9
      It does not collapse on itself. Graphically it's a parabola and you've estimated the guess at its bottom point.
      I guess what you asking is, will we overestimate the variance when our guess size matches perfectly the true population size and no better guess can be made?
      Firstly you should never assume you can guess the true population value. We have to live in a real life where nothing is perfect. Statistics is not perfect. It simply strives for perfection. Improbable to guess any single continuous value precisely. What is the chance that a next person has a height of 173.48763498456437654405 cm? Impossible to pick a sample this precise.
      For a sample of n=1 you'd have a undefined sample variance, because in a case of n-1 variance is divided by zero. How far away is a sample from itself? No distance. It is the mean value of itself. Statistics doesn't deal with absolutes and cannot PROVE anything IS because we never know the population at any given moment. Something is always left unaccounted for. If n=0, answer would be negative. Negative value means no sample has been measured.
      Imagine the equation without the -1 part, i.e. a sample of me, n=1, writing an amount of this comment, x=1. The average of these would be 1. The variance (1-1)/1 = 0. No spread from itself. It's nonexistent either way: either you say it's 0, or undefined, because you divided the spread by 0.
      Your alternative hypothesis defined by the sample size IS the null hypothesis.
      You've proven the alternative hypothesis is the null hypothesis.
      Null hypothesis is not rejected within the significance level of 1, we can have an underestimation or overestimation of the true population mean
      Having written all this, i still don't think i have answered the question, why an overestimation happens as sample mean becomes the population mean.
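A shorter way to see the answer (an editorial addition, not part of the reply): for any fixed sample, the sum of squared deviations is smallest when measured around the sample mean itself,

```latex
\sum_{i=1}^{n}(x_i-\mu)^2 \;=\; \sum_{i=1}^{n}(x_i-\bar{x})^2 \;+\; n(\bar{x}-\mu)^2 \;\ge\; \sum_{i=1}^{n}(x_i-\bar{x})^2 .
```

So the spread measured around x-bar can never overshoot the spread around mu. In any single sample the n - 1 divisor may land above or below the true variance, but its expectation over repeated samples is exactly sigma^2, which is all the correction claims.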

  • @joshidhruvp.64
    @joshidhruvp.64 4 роки тому +1

    nice sir. thanks

  • @romanvasiura6705
    @romanvasiura6705 Рік тому

    Thank you!

  • @bebelstead
    @bebelstead Рік тому

    Great video, as always. I could not find anything specifically about the F-distribution - is it in the pipeline? Thank you

  • @galenseilis5971
    @galenseilis5971 2 роки тому

    I would be interested in your perspective on how degrees of freedom should be considered for nuisance parameters.

  • @tnuts92
    @tnuts92 2 роки тому

    Thanks to you!

  • @MRJT2006
    @MRJT2006 3 роки тому

    @14:25 OMG, all this time, I took DoF as an abstract peculiarity of the equation.
    So the Sample Mean is just an estimate of the Population Mean. Reducing n by 1, you decrease the denominator and therefore inflate the variance, to get a better estimate of the Population Variance that accounts for sampling error. Even though I still don't know why you would use 2, 3, 4 DoF, I feel much more relieved now that this major stumbling block is removed. Thank you for breaking it down!

  • @rxsousa
    @rxsousa 2 роки тому

    GREAT! Thanks a lot.

  • @sanathgunawardena832
    @sanathgunawardena832 Рік тому

    Brilliant!

  • @gautamhathiwala7267
    @gautamhathiwala7267 2 роки тому

    @zedstatistics
    Dear Sir, when you explained the n-1 in the denominator of the S.D., it was more of an empirical observation. I would like to know if there is a mathematical derivation of this formula.
    Thank you

  • @hayajohn5206
    @hayajohn5206 3 роки тому

    good explanation

  • @user-bz8nm6eb6g
    @user-bz8nm6eb6g 3 роки тому

    Thanks!

  • @yulinliu850
    @yulinliu850 5 років тому +3

    Happy 2019!

  • @richarddow8967
    @richarddow8967 Рік тому

    The reason n-1 is used in calculating s^2 is because n-1 is unbiased estimator of sigma^2 or the variance. You did some wishy-washy hand waiving. If you don't want to work through the math, just say it can be shown that if n is used to calculate s^2, the bias is ((n-1)/n)*sigma^2 so by using n-1 you instead end up (n-1)/n-1) all times sigma^2. which is unbiased