Chi-Square - Post-Hoc Testing - SPSS (Part 3)

  • Published Dec 31, 2024

COMMENTS •

  • @dsavkay
    @dsavkay 10 months ago

    Best video about this I've ever seen, thank you, congratulations 💯

  • @karatsukerman3717
    @karatsukerman3717 4 years ago

    I have watched tons of stats videos on youtube and this has possibly been the most helpful and easy to understand yet, thank you!

  • @helendecruz4517
    @helendecruz4517 10 years ago +2

    Thank you this was very helpful, and definitely more sophisticated and elegant than transforming my data into a set of 2x2 tables

  • @polaroidblonde
    @polaroidblonde 6 years ago +1

    Super helpful! Thank you! At first it seemed intimidating, but you broke it down in steps and it was great!

  • @Acroni2011
    @Acroni2011 4 years ago +5

    Thank you so much for this valuable information. But I didn't get why you applied df=1 in this example?

  • @alizapanjwani
    @alizapanjwani 9 years ago +7

    Hi there, great video. Spent so long researching chi square post-hoc analyses before finding something that seemed doable (also thanks for the reference at the end!). In the actual results section of the paper, how would you report/what would you report about these post-hocs? How are they framed? Is there any way you could provide a suggestion in terms of wording? Thanks (in advance)!

  • @kristinrichie3718
    @kristinrichie3718 8 years ago +20

    Thanks for this! So helpful...one question though: are the degrees of freedom always equal to 1 when doing this adjustment? Or, how did you come up with 1 df? Thanks.

    • @how2stats
      @how2stats 8 years ago +16

      Yes, the df is 1 for each residual analysis. I didn't come up with this analysis. It is based on MacDonald and Gardner, as described in the video.

    • @kristinrichie3718
      @kristinrichie3718 8 years ago

      Thanks!

    • @atikhomthienthong9310
      @atikhomthienthong9310 5 years ago +1

      @@kristinrichie3718 So it's always 1. Am I right?

  • @Israelis4ever
    @Israelis4ever 8 years ago +6

    The post-hoc test you use computes p values for each and every cell, and so takes into account both the 'major' variable and the 'ses' variable. I am wondering what kind of post hoc test I can use after the chi-square that can say there is a difference between the major groups as a whole, for example between Science and Arts there is a significant difference in the 'ses'. Similar to the Tukey-type pairwise comparisons that are done after ANOVA?

  • @sameeramokkarala32
    @sameeramokkarala32 6 years ago

    This honestly saved my sanity today, thank you so much.

    • @milky6903
      @milky6903 3 years ago

      hahahahahahahhh good luck!!

  • @adamblockwood
    @adamblockwood 4 years ago

    So awesome. You walked through the process in an easy to understand fashion. Also, thank you so much for providing a reference for the methodology! If I could wish for one more thing, it would be for an example of a write up of the results with a table (or to be pointed towards an article that can be used as a template).

  • @ertugrulsahn
    @ertugrulsahn 10 years ago +10

    Please add a video on how to interpret and report the results; I did not understand that part. Also, thanks for this helpful video.

  • @yolandeschuur9931
    @yolandeschuur9931 5 years ago +10

    Thanks for this clear instruction! I have one question: how do I report the outcomes of this post-hoc analysis in APA-style?

  • @suzieliu784
    @suzieliu784 8 years ago +3

    Thanks for the video! It helps! Here is a question: how to interpret the results? If the p value of one cell is less than 0.05, can I say this cell is significantly different from others?

  • @isnaenifajar5602
    @isnaenifajar5602 6 years ago +2

    Hi, thanks for the video! I have read MacDonald & Gardner (2000), and if I am not mistaken, the adjusted residual is used for normally distributed data. However, what if my data is not normally distributed? Should I use this method, or is there another post hoc method for data that are not normally distributed? Thanks!

  • @sietskevanbentum1104
    @sietskevanbentum1104 8 years ago +3

    Hi there, thanks so much for sharing your knowledge! :) I have a question regarding significant calculated p-values which correspond to an adjusted z-value lower than -1.96. Should I take these into account as well? Is there any relation between this z-value and whether the observed frequency (or % of cases falling into a category) is above or below the expected frequency? I.e. a z-value > 1.96 with a significant p-value relating to a frequency higher than expected, and a z-value < -1.96 with a significant p-value relating to a frequency lower than expected. Hope that makes sense...! Thanks for answering!

  • @mairabperotto4032
    @mairabperotto4032 8 years ago +6

    Hello and thank you for the very helpful videos. I have a question about how to specify the degrees of freedom when calculating p-values from chi-square values (2:50 min). In this example, the value entered was 1 but I would like to please know how to calculate this for other cases. Thank you.

    • @how2stats
      @how2stats 8 years ago +10

      +Maira BPerotto As per MacDonald and Gardner (2000), all of the adjusted standardized residuals are tested against 1 df; a worked sketch of this step appears just after this thread.
      MacDonald, P. L., & Gardner, R. C. (2000). Type I error rate comparisons of post hoc procedures for I × J Chi-Square tables. Educational and Psychological Measurement, 60(5), 735-754.

    • @mairabperotto4032
      @mairabperotto4032 8 years ago +1

      +how2stats Thank you very much for your response.

    • @milky6903
      @milky6903 3 years ago

      ​@@how2stats Thank you very much! I am looking for this answer for hours...

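    The step described in the reply above can also be reproduced outside SPSS: square each adjusted standardized residual, refer it to a chi-square distribution with 1 df, and compare the resulting p-value against .05 divided by the number of cells. A minimal Python sketch, assuming scipy is available and using residual values quoted elsewhere in these comments purely as example inputs (the 12 corresponds to the 3 x 4 table in the video's example):

    from scipy.stats import chi2

    # Adjusted standardized residuals quoted in other comments, used only as examples;
    # each one is tested on 1 df.
    adj_residuals = [-4.32, -2.26, -2.09]

    n_cells = 12                      # 3 majors x 4 SES levels in the example
    alpha_adjusted = 0.05 / n_cells   # Bonferroni cut-off, about .0042

    for z in adj_residuals:
        p = chi2.sf(z ** 2, df=1)     # square the residual, test against chi-square(1)
        print(f"z = {z:5.2f}  p = {p:.5f}  significant: {p < alpha_adjusted}")

    On these inputs the -2.26 and -2.09 cells come out around .024 and .037, above the cut-off, which matches the p-values discussed in other comments below.
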
  • @sedward1261
    @sedward1261 4 years ago

    This presentation is brilliant. Thanks so much for sharing.

  •  4 years ago

    Congrats! You really helped me with my contingency table!

  • @leighmoyboy
    @leighmoyboy 9 years ago +4

    Hi, thank very much for the really helpful video. After doing the post-hoc tests myself I have values which appear to have significant Z scores (i.e are greater than 1.96) but that are not significant when comparing to the Bonferroni adjusted p value. How do I interpret this? Does this mean my Chi-squared test is wrong in suggesting there is a significant deviation from the expected values? Not really sure what is statistically significant and what isn't!

    • @minaorang5094
      @minaorang5094 4 years ago +1

      The same thing happened to my data! No idea how to interpret it!

  • @jessehowell102
    @jessehowell102 4 years ago +1

    Great video, but I am still unsure as to which paired comparison the significance value of the adjusted residual is connected to. For example, in the 'Upper SES' and 'Science' cell, it is statistically significant (|-4.32| > 1.96), but what is it different from? Is it different from the cells within the 'Science' row, or is it different from the cells in the 'Upper SES' column?

    • @libakabou9788
      @libakabou9788 1 year ago

      This is the real question that is still unanswered after all these analyses. A post-hoc is a pairwise comparison between two categories. With this approach, how do we know the difference between the categories?

  • @sarahfarran
    @sarahfarran 9 years ago +1

    You're a life-saver, thanks a lot! One question though, can you teach us how to interpret and report the results? Thank you!

  • @dmitarvlahovic4633
    @dmitarvlahovic4633 9 years ago +1

    Thank you very much for this explanation. I was wondering if you could tell me how to report the results of this post-hoc test in a paper. I am confused: am I meant to report just the Pearson Chi-Square and mention in the text which cells contribute significantly to the whole model, or is there another way?

  • @MdAbulKalamAzadBD
    @MdAbulKalamAzadBD 10 years ago

    Wow... real teaching skills... Thank you, Sir

  • @recepbalk2185
    @recepbalk2185 7 years ago +1

    Great video.
    May I ask you 2 questions?
    1. What is the difference between standardized, adjusted, and unstandardized residuals?
    2. Can we just use the "z tests compare columns proportions adjusted p values" option in the Cells tab?
    Thanks

  • @etinmorel
    @etinmorel 10 years ago +3

    Very well explained - thank you! Can you tell me why you use df=1 (minute 3:00)? I would very much appreciate it.

    • @how2stats
      @how2stats 10 years ago

      It's a statistical test with only 1 degree of freedom.

    • @jasonmarb1
      @jasonmarb1 9 years ago

      ***** yeah... why do you only use 1 df when the Chi Square is df=3 ? :)

    • @how2stats
      @how2stats 9 years ago +7

      Jason Már Bergsteinsson Think of it this way: the overall r * c Pearson chi-square analysis is an omnibus test with (r-1)(c-1) degrees of freedom. However, the follow-up post-hoc tests are based strictly on the difference between two values, hence 1 df.

    • @jasonmarb1
      @jasonmarb1 9 years ago +1

      how2stats Thanks a lot for a great video and a quick response :)

    • @annoyingswht
      @annoyingswht 9 years ago

      +how2stats sorry, could you further explain to me what the "two values" are? I'm still a little confused about the df=1. Thank you!
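
    One way to see why each per-cell test has a single degree of freedom: squaring a standard normal value gives a chi-square variate on 1 df, so testing the squared residual against chi-square(1) is exactly a two-sided z-test of that one cell. A quick numerical check, assuming Python with scipy (the -4.32 is one of the residuals quoted elsewhere in these comments):

    from scipy.stats import chi2, norm

    z = -4.32                      # one adjusted standardized residual
    print(chi2.sf(z ** 2, df=1))   # the 1-df chi-square route
    print(2 * norm.sf(abs(z)))     # the equivalent two-sided z-test
    # The two printed p-values agree apart from floating-point rounding.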

  • @kekaki92
    @kekaki92 8 years ago +6

    I have a problem with interpretation: I was comparing ethnic categories (8 categories) x completion of a survey (completed, not completed). After following your technique I found that Asian and White had a significant corrected p-value. From the percentages I can see that Asians were more likely to complete the survey than to not complete it, and White had the opposite trend.
    How do I report these results? Do I say "Asians were more likely to complete the survey" - but compared to WHICH category? Is the technique you apply in the video telling me Asian vs all other 7 ethnicities together? Or Asian vs each of the others separately? Thanks in advance.

    • @danishhussain7127
      @danishhussain7127 7 years ago +1

      Hey, I am also struggling to interpret the findings of a similar analysis of 9x2 variables. Did you find some way to interpret it?

    • @amaalalrehaili4564
      @amaalalrehaili4564 6 years ago +1

      Same problem; I had really big tables and am struggling to report my findings. Please let me know if you find anything useful.

    • @nikichen
      @nikichen 5 years ago +1

      Hi Katerina, since this is a chi-square test of independence, it tests whether or not the two factors are associated with each other. In this case the null hypothesis is that there is no association between ethnicity and completion of the survey. It does this by comparing the observed value (the proportion of, e.g., White people who completed the survey) with what the EXPECTED value would be if ethnicity were not associated with survey completion. So the answer is that Asians completed the survey MORE than what we would expect to see if the null hypothesis was true. Hope that helps.
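
    To make the "more than expected / less than expected" reading concrete, the expected counts under independence can be pulled out and compared with the observed table: positive differences mean a cell occurs more often than independence predicts, negative differences less often. A small sketch, assuming Python with numpy/scipy and purely illustrative counts (not the survey data described above):

    import numpy as np
    from scipy.stats import chi2_contingency

    # Illustrative counts: rows = ethnic group, columns = completed / not completed
    observed = np.array([[40, 10],
                         [25, 25],
                         [15, 35]])

    chi2_stat, p, dof, expected = chi2_contingency(observed)

    # Positive cells: observed more often than expected under independence;
    # negative cells: observed less often than expected.
    print(np.round(observed - expected, 1))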

  • @mehmetsonsoz1925
    @mehmetsonsoz1925 4 years ago

    Fantastic video, well done!!

  • @TerielAtmano
    @TerielAtmano 8 years ago +1

    The alpha value was divided by 12 because that's how many adj. std. residuals there are. Should we also include the significance value of chi-squared itself, so that it's actually 13? If not, then what if I do 10 separate chi-squared tests, and only 1 is significant? I look at adj. std. residuals for that one, let's say there are 4. By how much do I divide my alpha? I thought 14 (10 tests + 4 residuals). Is it actually just 4?

  • @danielhowell6909
    @danielhowell6909 2 years ago

    Great video! How do you report the post hoc results? Please help!

  • @APSUNSH1NE
    @APSUNSH1NE 5 years ago +1

    Hi! Can you please give an example of how you would write this up?

  • @ninjanj6148
    @ninjanj6148 2 years ago

    Thank you very much for this video!
    In this case, is the cell with the adjusted z value of -2.26 significant, although the calculated p-value is 0.02, which is higher than 0.003 (p=0.05 divided by 12)?

  • @katibsareeh9056
    @katibsareeh9056 1 year ago

    Hi. I performed Fisher's exact test on a 4x2 table in SPSS and got a significant difference (p = 0.010), and I wonder what post hoc test to use following that? Is it the adjusted standardized residuals?
    And if I want to calculate the p value from the adjusted standardized residuals, what degrees of freedom do I use? To my knowledge, the degrees of freedom are calculated as (number of rows - 1) multiplied by (number of columns - 1), which means in this example df = 3; however, in your example you used df = 1 regardless of the number of rows and columns.
    Thanks in advance for any help.

  • @Doll84174
    @Doll84174 9 years ago +1

    Thank you for explaining it :-) Is there a book where I can find this procedure so I can cite it?

  • @vinaykumarhallur7887
    @vinaykumarhallur7887 4 years ago

    When I did the above, initially the Chi-square values were significant in the contingency table, but after following the above procedure the p values are greater than the Bonferroni-corrected p values. Is this possible?

  • @psaraff85
    @psaraff85 9 years ago

    Thank you so much! This was very helpful! Very clear explanation.

  • @MyLe-nl2my
    @MyLe-nl2my 4 years ago

    Thank you for your 3 helpful videos ^^
    After the 3 videos, I'll write my conclusion from my understanding. I hope you'll correct it if I've got it wrong.
    Firstly, run the chi-square test of homogeneity with the null hypothesis that the proportions of each ses level are equal across majors:
    H0: p11=p12=p13 (1)
    p21=p22=p23 (2)
    p31=p32=p33 (3)
    p41=p42=p43 (4)
    H1: at least one of the proportions in (1), (2), (3), (4) isn't equal
    p < 0.05, so reject the null hypothesis.
    Secondly, run the chi-square post-hoc with adjusted standardized residuals.
    With the adjusted alpha based on the Bonferroni correction = 0.05/12 = 0.00416, it shows 3 proportions are different (p11, p41, p43).
    Can we conclude: p11 ≠ p12 = p13, p41 ≠ p42 ≠ p43?
    I'm still confused about how to interpret the results in words. Hope you can help me. Thank you so much
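
    For anyone wanting to reproduce the whole sequence summarised above outside SPSS, here is a rough sketch under the usual definitions: expected counts from the omnibus test, Haberman's adjusted standardized residuals (the quantity SPSS labels 'Adjusted Residual'), one 1-df test per cell, and a Bonferroni cut-off of .05 divided by the number of cells. Python with numpy/scipy is an assumption, and the counts are placeholders rather than the video's data:

    import numpy as np
    from scipy.stats import chi2, chi2_contingency

    # Placeholder 3 x 4 table (rows = major, columns = SES level); substitute real counts.
    observed = np.array([[30., 20., 15., 10.],
                         [25., 25., 20., 15.],
                         [10., 15., 25., 33.]])

    chi2_stat, p_omnibus, dof, expected = chi2_contingency(observed)

    n = observed.sum()
    row_prop = observed.sum(axis=1, keepdims=True) / n   # shape (3, 1)
    col_prop = observed.sum(axis=0, keepdims=True) / n   # shape (1, 4)

    # Haberman's adjusted standardized residuals
    adj_resid = (observed - expected) / np.sqrt(expected * (1 - row_prop) * (1 - col_prop))

    p_cells = chi2.sf(adj_resid ** 2, df=1)    # one 1-df test per cell
    alpha_adjusted = 0.05 / observed.size      # .05 / 12 here

    print("omnibus: chi2 =", round(float(chi2_stat), 2), " df =", dof, " p =", round(float(p_omnibus), 4))
    print("cells below the Bonferroni cut-off:")
    print(p_cells < alpha_adjusted)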

  • @IOANNISZIAMPIRIS
    @IOANNISZIAMPIRIS 10 years ago +4

    Hi!! I cannot begin to thank you enough for all your help and guidance. Two long years of stats tuition at university and I couldn't get even close to what I've learned from your videos. Could you please answer three questions? (1) How did you get the z scores into the SPSS column? Is it by hand? I've got lots and lots of analyses and it would take so long to do it by hand. (2) Is it really necessary to do the post hocs? (3) What if we have lots of crosstabs tables - do we adjust the p-value every time by dividing by the number of tests conducted?
    Thanks in advance

  • @ingeborgsimpelaere955
    @ingeborgsimpelaere955 3 years ago

    Dear, thank you very much for this interesting video. I just have one question. When I tried to multiply the adjusted z-scores via Transform > Compute Variable to obtain the chi-square, the following note appeared: 'One of the operands for an arithmetic operation is other than a numeric variable or numeric expression'. How can I solve that? Thank you very much in advance!

  • @Ag8MrE
    @Ag8MrE 8 years ago

    This was great, but I have one minor question. Once we know our p-values associated with each cell, and we know the Bonferroni corrected value to compare that to, is there some way of calculating a new p-value from those 2, where p

    • @how2stats
      @how2stats 8 years ago

      +Ag8MrE I'm not sure I understand your question entirely, however, one option is to simply multiply the observed p-values reported by SPSS by the number of comparisons made. That's another way to perform the Bonferroni correction. Those multiplied p-values that are less than .05 would be considered statistically significant.
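
    The two forms of the correction mentioned in the reply above are interchangeable: either shrink alpha to .05 divided by the number of comparisons, or multiply each observed p-value by that number (capping at 1) and keep alpha at .05. A brief Python sketch; the last two p-values are the ones quoted in other comments, and the first roughly corresponds to the -4.32 residual:

    p_values = [0.00002, 0.02382, 0.03662]
    m = 12                                     # number of comparisons (cells)

    shrink_alpha = [p < 0.05 / m for p in p_values]             # form 1: adjust alpha
    inflate_p    = [min(p * m, 1.0) < 0.05 for p in p_values]   # form 2: adjust the p-values

    print(shrink_alpha)   # [True, False, False]
    print(inflate_p)      # same verdicts either way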

  • @iphoneiphone4700
    @iphoneiphone4700 2 years ago

    So how do we compare the means? For example in ANOVA, we separate the means with letters using a post hoc test to see which is significantly different from the others if a difference exists. How is this done here? You keep mentioning "this value is significantly different"... but significantly different from what?

  • @emelinepasque9631
    @emelinepasque9631 4 years ago

    I get an error message because I have commas to separate the decimals. How do I change the commas into dots?

  • @scottyoung6194
    @scottyoung6194 4 years ago

    Fantastic video - thanks! But one question - does the Bonferroni adjustment mean the social science x low ses (ar = -2.09; p = 0.03662) and arts x low ses (ar = -2.26; p = 0.02382) values (for which the adjusted residual values indicate significance because they're less than -1.96) aren't actually significant because the p-values are greater than 0.0042? Or is this just saying that I can call these interactions significant based on the adjusted residuals but that I may be committing a type I error in doing so?

    • @craze4fashion
      @craze4fashion 3 years ago

      hey, did you get an answer for this?

    • @scottyoung7266
      @scottyoung7266 3 years ago

      I didn't, actually. Honestly, I forgot I even asked the question. : )

  • @anthonyross9512
    @anthonyross9512 7 years ago

    I understand why you divided by 12 to create the adjusted p value, but what if you have several groups? I have 81 individual cells, so do I then divide by 81, or by 9 individually?

  • @davstm3291
    @davstm3291 7 years ago +2

    Hi, it was very helpful...
    Can you please add another video on reporting the results (or explain another way)?
    Thanks anyway .....

  • @andreibarbulescu7812
    @andreibarbulescu7812 9 years ago +2

    Shouldn't the expected value in each cell be [row total / grand total] * column total, rather than an equal proportion for each row (say 3 rows >> 33% of the column total in each row)?

    • @rlew1
      @rlew1 8 years ago +1

      +Andrei Barbulescu +how2stats
      That's correct, and a slip by the poster. In this context, the expected fraction would be 29.3% (under the null hypothesis of independence), as given in the bottom row of the table.
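
    Concretely, using the marginals quoted in another comment further down (N = 263 students in total, 53 in the Art row, 77 in the upper-SES column): under independence each major is expected to place 77 / 263, or about 29.3%, of its students in the upper-SES column, so the expected count for the Art x upper-SES cell is 53 x 77 / 263, roughly 15.5, and that is the figure the observed count is judged against.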

  • @matthewirons5933
    @matthewirons5933 8 years ago

    That was brilliant. Thank you.

  • @mikkelsejersen2226
    @mikkelsejersen2226 9 years ago

    This was great. Thank you very much.

  • @petitcaillou814
    @petitcaillou814 5 years ago

    Thanks a lot for this! I just have one more question: if I do this with a Fisher exact test, how would you transform your z-score into a Fisher exact value?

    • @how2stats
      @how2stats 5 years ago

      I'd say it's impossible, as Fisher's exact test is not based on the z-distribution.

    • @petitcaillou814
      @petitcaillou814 5 years ago

      how2stats Then the only way is to do post hocs with 2x2 crosstabs? Thanks for the quick answer!

    • @how2stats
      @how2stats 5 years ago

      That's an option many people would accept. It's called 'partitioning', as I write in my textbook (Chapter 4): www.how2statsbook.com/p/chapters.html The problem with partitioning is that the expected frequencies change, sometimes substantially, for each different 2x2 that you do, and also in comparison to the omnibus crosstabs analysis. Why bother with Fisher's exact test? Pearson chi-square usually works well, and you could then get the residual z-values for each cell in the crosstabs.
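
    For anyone who does take the partitioning route described in this reply, one way to sketch it is to carve the full table into 2x2 sub-tables and test each one, correcting for the number of sub-tables; Python with numpy/scipy and the illustrative counts below are assumptions, and the caveat above about expected frequencies shifting from sub-table to sub-table still applies:

    from itertools import combinations

    import numpy as np
    from scipy.stats import fisher_exact

    # Illustrative 3 x 2 table: rows = group, columns = outcome present / absent
    table = np.array([[20, 30],
                      [35, 15],
                      [25, 25]])

    row_pairs = list(combinations(range(table.shape[0]), 2))
    alpha_adjusted = 0.05 / len(row_pairs)    # Bonferroni over the pairwise tests

    for i, j in row_pairs:
        sub_table = table[[i, j], :]          # one 2 x 2 partition
        _, p = fisher_exact(sub_table)
        print(f"rows {i} vs {j}: p = {p:.4f}  significant: {p < alpha_adjusted}")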

  • @zarrinkhatibi74
    @zarrinkhatibi74 4 years ago

    Hello, thank you for the extremely useful video. My question is: why does df equal 1 in the SIGNIFICANCE(ChiSQ, df) function?

    • @how2stats
      @how2stats 4 years ago

      DF = 1 is fine. No problem.

    • @erickwesleyhedima2078
      @erickwesleyhedima2078 4 years ago

      @@how2stats But we divided 0.05 by 12; I thought we would continue using 12.

  • @nsalter29
    @nsalter29 9 years ago

    Thank you very much for this video - this is very helpful. I have a question. First, am I interpreting the "significantly different" correctly - if the adjusted residual is above 1.96 (or has a p less than .05 after creating the p values as you did), then you can say that particular cell is "significantly different than the expected value if there was an even distribution across each cell." In other words - there are 4 income levels, so if there was an even distribution, you'd expect each cell to be at 25%. If the adjusted residual is 4.3 (for example), and the percentage of that cell is 50% (making numbers up here), then you could say "that cell is significantly different/higher than expected." Am I correct in this interpretation?
    Second: my question. Is it possible to conduct post-hoc tests to see if the two variables are significantly different from each other? For instance, in your example, you found a significant chi-squared when looking at 3 majors and 4 SESs. Is it possible to post-hoc test to see which of the 3 majors are significantly different from each other? For example, can you post-hoc test to see if Science is significantly different from Art? Is it possible to test if upper class is different than lower class? I am not sure if this is possible because these questions wouldn't just involve comparing two numbers - it would involve comparing a trend of numbers - so I wouldn't know exactly what to compare. Perhaps this just requires a bunch of 2X2 chi-squared comparisons . . . (with correction for Type I error, of course!).
    My question comes from: I'm writing a manuscript that found a significant chi-squared between two nominal variables (3 levels to each variable) and I suspect a reviewer will ask "but *where* are the differences, exactly." But I'm not sure it's possible to figure this out statistically. Any help would be appreciated - and I apologize in advance if I'm not articulating my question exactly! Thank you.

    • @how2stats
      @how2stats 9 years ago

      +Nick Salter Your interpretation is correct. In relation to your second question, yes, you could follow up with a series of smaller chi-square analyses, if you wanted to test the difference between particular percentages.

    • @nsalter29
      @nsalter29 9 years ago

      +how2stats I thought so! This is helpful - thanks so much!

  • @pazesantos
    @pazesantos 6 years ago

    Great work, thank you

  • @anthonyross9512
    @anthonyross9512 7 years ago

    Hi, how do you avoid type 1 error in this test?

  • @SudheendraRao26
    @SudheendraRao26 10 years ago

    very useful. thank you for sharing.

  • @lisachen4087
    @lisachen4087 5 years ago

    Does this method have anything to do with the Bonferroni Correction? Thank you!

    • @how2stats
      @how2stats 5 years ago

      I prefer to apply the Bonferroni correction myself with this type of analysis, if I deem it necessary. I discuss this in Chapter 4 of my free textbook: www.how2statsbook.com

  • @josephlangston1427
    @josephlangston1427 9 years ago

    Two questions.
    (1.) Why use "adjusted" residual instead of standardized residual here?
    (2.) How did you calculate the Bonferroni corrected p-value?

    • @how2stats
      @how2stats 9 years ago

      +Joseph Langston The adjusted residual takes into account the whole sample size, so it is a more accurate reflection of the effect, in this context. To calculate the adjusted Bonferroni p-values, multiply the observed p-values by the number of analyses you have performed.

    • @josephlangston1427
      @josephlangston1427 9 years ago

      +how2stats In one paper, I found this:
      "MacDonald and Gardner (2000) suggest a Bonferroni adjustment to the z critical of 1.96 (from which the +/- 2 criteria is derived) if the number of cells in the contingency table is large. In the Landis et al. (2013) example, there are 15 cells in their 3 x 5 contingency table. Thus, alpha should be set at .05/15 or .003 which translates into a critical value of +/- 2.96 (or approximately +/- 3). However, if the magnitude of the residuals merely serves as a guide to what cells might be of interest, then arguably no adjustment is necessary or one could choose a more conservative alpha value than .05 such as .01 (+/- 2.58). SPSS provides raw, standardized, and adjusted residuals; see Field (2013, pp. 743-744)."
      So that's what confused me; they're saying you can (1) divide .05 by the number of cells, or (2) simply not even do the Bonferroni adjustment. I'm leaning in favor of doing them, but... in SPSS, when you click on z-scores and then Bonferroni corrections, what does SPSS "do"? Because the program doesn't show the actual p-value it is using when it gives the "superscripts" (i.e. a, b) to show statistically significant differences across rows. I'm working on a 4 x 5 table; is SPSS using 6 as a divisor, or 20?

    • @josephlangston1427
      @josephlangston1427 9 years ago

      +Joseph Langston That seems to be the way you're doing it, by dividing .05 by 3 x 4 (12). I thought that, since you can make 6 pairwise comparisons between your SES groups, and a series of comparisons is run for each row, with each having 1 degree of freedom, that you'd divide by 18. Where am I going wrong?
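
    The critical values in the passage quoted above can be reproduced directly: divide alpha by the number of cells, split it over the two tails, and take the matching normal quantile. A quick check, assuming Python with scipy:

    from scipy.stats import norm

    def critical_z(alpha, n_cells):
        # two-tailed critical value after a Bonferroni split across n_cells comparisons
        return norm.isf(alpha / n_cells / 2)

    print(critical_z(0.05, 12))   # about 2.87 for the 12-cell (3 x 4) example
    print(critical_z(0.05, 15))   # about 2.94 for 15 cells; the quote's 2.96 comes from rounding .0033 down to .003 first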

  • @malwinadziwisz1937
    @malwinadziwisz1937 2 years ago

    How can I report these?

  • @anuragbiswas1
    @anuragbiswas1 2 years ago

    Why do you mention that the degrees of freedom in your case equal 1? Kindly explain please... need help

  • @yulinliu850
    @yulinliu850 10 years ago

    Excellent! Thanks.

  • @logyehkahengy8124
    @logyehkahengy8124 6 years ago

    Really very helpful, thank you very much

  • @SimonaBREZANOVA
    @SimonaBREZANOVA 6 years ago

    Thanks for this video, it helped me a lot. However, after following your analysis I ended up with adjusted p-values of .000 for 17 out of 18... I even increased the decimal number but that didn't help. My sample is very big (1 976 165). I need to prove that people aged 25-34 use mobile devices more than any other age category. I see that there is a significant difference between the variables DEVICE (mobile, tablet, PC) and AGE (I have 6 age groups), but I need to figure out if people 25-34 use the mobile more than others. I have aggregated data available only. What should I do with my 0s? :-(

    • @how2stats
      @how2stats 6 years ago +1

      p-values equal to .000 are commonly reported as p < .001, which implies a statistically significant difference.

  • @mau212171
    @mau212171 4 years ago

    Dear, very nice. TK

  • @mzamh33
    @mzamh33 9 years ago

    Thank you so much for providing such an informative and accessible demonstration of how to conduct these post-hoc tests - such a life saver!
    I have a question about applying the Bonferroni correction in my study. I am conducting a series of 3 x 2 chi-square analyses, where the first variable is group (3 offender types), and the second reflects the presence or absence of a variable (e.g., mental health contact - Yes or No). This produces a 6-cell table; however, as essentially the same information is obtained from the Yes and No columns (as reflected by identical z values), I do not need to conduct all 6 comparisons; it makes more sense for me to consider the 'Yes' column only. In that case, do I still need to divide by 6 to correct for the increased chance of Type I error? Or should I be dividing by 3? I've always struggled to get my head around these adjustments conceptually and I'm just coming back into data analysis after quite a break, so any insights you can provide would be greatly appreciated! Thanks :)

    • @how2stats
      @how2stats 9 years ago +2

      +mzamh33 I'd say divide by 3 in this case.

    • @mzamh33
      @mzamh33 8 years ago +1

      +how2stats Thanks for your response, I have divided by three.
      Just another query; is there a way to compute / generate an effect size for each cell so that the magnitude of difference from expected levels (if significant) can be determined? For example, would it be appropriate to calculate r from the adjusted residual value? My apologies if this is way off the mark....

  • @ajitsahu6912
    @ajitsahu6912 2 months ago

    How is the degree of freedom 1 here?

  • @danielv.8138
    @danielv.8138 3 years ago

    Why did you decide to take one degree of freedom and not 6?

  • @rabbitcarrot4807
    @rabbitcarrot4807 5 years ago

    Why is it 12 analyses at 0:20?

  • @widedboutar4217
    @widedboutar4217 10 years ago

    Hi, thank you for the video, it was very helpful. However, I have 2 questions: I'd like to dynamize my scoring program. For that, I run a chi-square test to select the 10 most significant variables, which I will later put into my logistic model.
    1) How can I create a table that puts into one variable the list of the 10 most significant variables from my chi-square test?
    2) How can I put this list of significant variables into a macro variable that I will use in my logistic model?
    Thank you!

    • @how2stats
      @how2stats 10 years ago

      You don't bother with simple questions, do you? Send me an email in French.

    • @widedboutar4217
      @widedboutar4217 10 years ago

      how2stats Yes, my question is a bit complicated... I was able to solve half of the problem. The other issue was how to send the results of a chi-square test to a table (rather than reading the results in the SPSS log), so that in the end I can keep the best variables and enter them into my model automatically.
      Thanks.

    • @how2stats
      @how2stats 10 years ago

      Wided Boutar Given that you are ultimately interested in building a logistic regression model, why not enter all of your potential independent variables of interest into a stepwise logistic multiple regression analysis? It will be more useful statistically to do this, as your method won't take into account the shared variance between your independent variables.

    • @widedboutar4217
      @widedboutar4217 10 years ago

      Because I have a lot of variables - 400. I need to run a chi-square test first so I can select the 100 variables with the best p-values, you see?

  • @atikhomthienthong9310
    @atikhomthienthong9310 5 years ago

    Can you suggest how to perform pairwise comparisons case by case?

    • @how2stats
      @how2stats 5 years ago

      Do you mean Pearson Chi-Square partitioning? I discuss that here: ua-cam.com/video/h2dyzAF6hAk/v-deo.html

    • @atikhomthienthong9310
      @atikhomthienthong9310 5 years ago

      @@how2stats Here is what I really want. I'm examining two groups of less and more experienced students. They are required to choose one of three possible choices in each question item (categorical in nature, comparing frequencies in each choice). In this respect, I have to perform a 2 (student) x 3 (choice) chi-square test. I really want to know how I can analyse the specific pairs (i.e. Choices A and B, B and C, A and C) that contribute to the overall significant difference. Please give step-by-step advice.

  • @wjliu6597
    @wjliu6597 3 years ago

    I don't get how to interpret these post-hoc results systematically....

  • @ahmedmohamedwafaie1895
    @ahmedmohamedwafaie1895 6 years ago

    Thanks a lot
    God bless you

  • @CouttsDesign
    @CouttsDesign 5 years ago +1

    Rows 11 and 12 have standardized residuals larger than 1.96 in absolute value (-2.09, -2.26), which I thought made them significant, but their p-values (.03662, .02382) are above the Bonferroni-corrected alpha (.00416). So they're not significant? Can you clarify?

    • @craze4fashion
      @craze4fashion 3 years ago

      hi, did you figure this out at all? x

    • @CouttsDesign
      @CouttsDesign 3 years ago

      @@craze4fashion Ugh. Sorry. I don't remember.

    • @craze4fashion
      @craze4fashion 3 years ago

      @@CouttsDesign dw haha! It was a long shot, I figured it out! Thank you though x

  • @jasonmarb1
    @jasonmarb1 9 years ago

    Say that I was gonna write down the results of your crosstable post hoc, and the result is "art students are more likely to be upper class". Would "N" in my results be the total of art students (=53), or should it be the total of upper class people (=77)?
    As in X(1, N=?)=36,770
    Please help! :/

    • @jasonmarb1
      @jasonmarb1 9 years ago

      Jason Már Bergsteinsson or neither? :p

    • @how2stats
      @how2stats 9 years ago +1

      Jason Már Bergsteinsson Good question. Because all of the expected cell frequencies are based on a ratio that is equal to N (i.e., 263), I would use N = 263 for both the omnibus X2 result and the individual post-hoc X2 results.

    • @jasonmarb1
      @jasonmarb1 9 years ago

      how2stats thanks a lot! :)

    • @jasonmarb1
      @jasonmarb1 9 years ago

      Jason Már Bergsteinsson I just thought that since I was comparing only those two groups with 1 df, "the others" would not affect that comparison.... My effect size is so small when I use the total N instead of the n between the two groups... even when my "chi" is very large :/.

    • @jasonmarb1
      @jasonmarb1 9 years ago

      how2stats Maybe I shouldn't be considering the Cohen (1988) criteria as in "r" but rather the criteria for Cramér's V? Any thoughts?
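
    On the effect-size question raised in this thread, the usual chi-square-based measures are phi for a 1-df (2 x 2) comparison and Cramér's V for a larger table; both are simple functions of the chi-square statistic and N. A sketch in Python, using the N = 263 quoted above and reading the 36,770 in the earlier comment as 36.77 with a decimal comma (an assumption):

    import math

    def phi_coefficient(chi2_stat, n):
        # phi, usually reported for a 1-df (2 x 2) comparison
        return math.sqrt(chi2_stat / n)

    def cramers_v(chi2_stat, n, n_rows, n_cols):
        # Cramer's V, applied to the omnibus r x c table
        return math.sqrt(chi2_stat / (n * (min(n_rows, n_cols) - 1)))

    print(phi_coefficient(36.77, 263))   # about 0.37 for the figures quoted in this thread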

  • @josephlangston1427
    @josephlangston1427 10 years ago

    Where do you get 1.96 from???

    • @how2stats
      @how2stats 10 years ago +2

      In the z-distribution, 1.96 corresponds to .025, or 2.5%. Taking both negative and positive values, |1.96| corresponds to p = .05.
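
    The same figure can be checked in a couple of lines, assuming Python with scipy:

    from scipy.stats import norm

    print(2 * norm.sf(1.96))   # about 0.05: the combined area beyond +/- 1.96
    print(norm.isf(0.025))     # about 1.96: the cut-off leaving 2.5% in each tail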

  • @AARRIIYYAA
    @AARRIIYYAA 3 years ago

    It is such a great video, but such a bad idea to create it in 3 parts...

  • @SmultronsyltNatha
    @SmultronsyltNatha 4 years ago

    I did this and found out that apparently, atheists like to hire Muslims more than they like to hire people who haven't mentioned their religion. I expected atheists to hire atheists and Christians to hire Christians, but apparently, no support for those hypotheses...