Calculating and Interpreting Cohen's Kappa in Excel

  • Published 26 Jul 2024
  • This video demonstrates how to estimate inter-rater reliability with Cohen’s Kappa in Microsoft Excel. The calculation of sensitivity and specificity is also reviewed.
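
    For orientation, kappa compares observed agreement (Po) with the agreement expected by chance from each rater's marginal proportions (Pe): kappa = (Po - Pe) / (1 - Pe). A minimal Excel sketch of the binary case, assuming rater 1's 0/1 scores sit in A2:A21 and rater 2's in B2:B21 (this layout is illustrative, not the video's exact worksheet):

        Po (observed agreement):         =SUMPRODUCT(--(A2:A21=B2:B21))/COUNT(A2:A21)
        Rater marginals (share of 1s):   =AVERAGE(A2:A21)   and   =AVERAGE(B2:B21)
        Pe (chance agreement):           =D2*E2+(1-D2)*(1-E2)     with the marginals in D2 and E2
        Kappa:                           =(C2-F2)/(1-F2)          with Po in C2 and Pe in F2
        Sensitivity:                     =COUNTIFS(A2:A21,1,B2:B21,1)/COUNTIF(A2:A21,1)
        Specificity:                     =COUNTIFS(A2:A21,0,B2:B21,0)/COUNTIF(A2:A21,0)

    The sensitivity and specificity formulas treat column A as the reference standard and column B as the rating being evaluated.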

COMMENTS • 42

  • @Muuip
    @Muuip 5 years ago +1

    Great concise presentation, very useful! Much appreciated!

  • @LukyDi
    @LukyDi 8 years ago

    Thank you so much for the well-explained video.
    It really helped me very much.
    You are an excellent teacher.

  • @emilyhughes1315
    @emilyhughes1315 4 years ago +2

    Super helpful and clear, thank you!

  • @alejandrabeghelli38
    @alejandrabeghelli38 8 years ago

    Thanks! You saved me a lot of time.

  • @ringwormts115
    @ringwormts115 8 years ago

    Thanks for this very good video. The excel functions make my life so much easier :-)

  • @soleilaugust
    @soleilaugust 8 years ago +1

    thank you, very helpful!

  • @DavidKirschnerPhD
    @DavidKirschnerPhD 3 years ago

    Super useful, helped me calculate interrater reliability for program assessment of student literature reviews. Thanks!

  • @ceccapaglia
    @ceccapaglia 4 years ago +1

    Great! Really useful! Thank you

  • @sofiaquijada7398
    @sofiaquijada7398 2 years ago

    Thank you so much, this is exactly what I've been looking for.

  • @ashoklodhi1910
    @ashoklodhi1910 9 years ago

    It's really helpful...

  • @derejebirhanu7098
    @derejebirhanu7098 5 years ago +1

    the very important person
    Thank you!!

  • @zacrogers3975
    @zacrogers3975 8 years ago

    Thanks Todd! This is great.
    @Ben van Buren - Cohen's Kappa is used in many academic articles but it did not originate there. It actually comes from Jacob Cohen's work in 1960. I'm using a more recent reference; the citation is:
    Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2013). Applied multiple regression/correlation analysis for the behavioral sciences. Routledge.

  • @SPORTSCIENCEps
    @SPORTSCIENCEps 3 years ago

    Thank you for uploading it!

  • @is4220
    @is4220 8 years ago

    such a wonderful and helpful video! Thanks a lot!

    • @DrGrande
      @DrGrande  7 years ago

      I'm glad you found the video useful. Thanks for watching.

  • @medicine6932
    @medicine6932 7 years ago +1

    THANK YOU SO MUCH!!!! On a time crunch and SPSS seems like it'd take too much time to even learn to use @_@. This video really helped.

    • @DrGrande
      @DrGrande  7 years ago

      You're welcome - thanks for watching.

  • @houchj0372
    @houchj0372 2 years ago

    very clear, thank you

  • @charlesdrehmer87
    @charlesdrehmer87 3 years ago

    Thank you for your video. Could you explain how to handle ratings that are missing, where one rater recorded a score and the other did not?

  • @GamingBoxChannel
    @GamingBoxChannel 7 years ago

    wish my lecturer could explain like you

  • @ruzelasesoria5891
    @ruzelasesoria5891 6 years ago

    Excellent video!!! Thanks!

  • @tyrk2926
    @tyrk2926 4 years ago

    Many Thankssss

  • @MarkVanderley
    @MarkVanderley 8 years ago

    I imagine this would be helpful in research pertaining to rating the acquisition of counseling skills in student counselors

  • @greggelliott4570
    @greggelliott4570 8 years ago

    Probably need a little more discussion of sensitivity and specificity, although I expect it's also addressed in some other videos and in the book.

  • @antonioflores6148
    @antonioflores6148 3 years ago

    Thank you!!!

  • @franciscocallebernal
    @franciscocallebernal 2 years ago

    Hi! How do you calculate confidence intervals and standard error for the kappa values using Excel? Thank you for your very helpful video.

  • @ruudparklimy
    @ruudparklimy 8 years ago +15

    What if I have 3 or more values? E.g., not just 0 and 1, but 2, 3, 4, or even more?
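
    For reference, the same kappa formula handles any number of categories; only the chance-agreement term changes, becoming the sum across all categories of the product of the two raters' marginal proportions. A hedged Excel sketch, assuming categories coded 0-4 and the same illustrative A2:A21 / B2:B21 layout as above:

        Po:     =SUMPRODUCT(--(A2:A21=B2:B21))/COUNT(A2:A21)
        Pe:     =SUMPRODUCT(COUNTIF(A2:A21,{0,1,2,3,4}),COUNTIF(B2:B21,{0,1,2,3,4}))/COUNT(A2:A21)^2
        Kappa:  =(C2-D2)/(1-D2)      with Po in C2 and Pe in D2

    If the categories are ordered and near-misses should earn partial credit, weighted kappa is the usual extension.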

  • @mrd8300
    @mrd8300 2 years ago

    excellent

  • @yourspecial-child3541
    @yourspecial-child3541 7 years ago

    Hi Todd. Can you do Fleiss' Kappa in Excel as well?

  • @sparkly5031
    @sparkly5031 7 years ago +1

    Very informative. However, what do you do when:
    a) Pe is 1 (and the denominator 1 - Pe becomes zero)? Assume the Kappa is 1?
    b) you get a very low Kappa when the raters agree on all but one of the ratings? Surely it should be higher? I have 2 raters, 20 subjects. If they agree on 19 and differ on 1, the Kappa is nearly 0.

  • @westbourne94
    @westbourne94 7 years ago

    Can this test be used to measure reliability of categorical data?

  • @Lyn-eg3id
    @Lyn-eg3id 7 years ago

    Suppose you have five categories from low to high. Since it is not dichotomous as it is here, do you still use the same approach?

  • @is4220
    @is4220 8 years ago

    Dear Dr. Grande,
    I have maybe a simple question. The researcher and the RA are people who give their responses to the survey f.a., right? So this number can be very high. And I've got 5 criteria, like satisfactory etc. But I think I've understood how to do this: I should probably split the people giving responses into groups in order to come up with the coefficient.

  • @jorgemmmmteixeira
    @jorgemmmmteixeira 3 years ago

    Hi. What about calculating sample size for Kappa? Do you think it is problematic to set the null hypothesis at K=0.0? I believe this would be the same as what others call setting K1=0.0, when many state that K1 should be the minimum K expected. Thanks.

  • @MrJsanabria
    @MrJsanabria 3 years ago

    Dr. Grande: what would you do if the Kappa agreement turns out to be too low? Should both coders recode the material in order to match and increase the value? Or what do you suggest? Thanks in advance.

    • @RozgarOmar
      @RozgarOmar 2 years ago

      It depends on the matter. In my field it is necessary: when the value turns out too low, a discussion is held to talk about troubleshooting, and then the material is re-coded.

  • @ProfGarcia
    @ProfGarcia 2 years ago

    I have a very strange Kappa result: I have checked for a certain behavior in footage of animals, which I have assessed twice. For 28 animals, the two assessments agreed 27 times that the behavior is present and disagreed only once (the behavior was present in the first assessment, but not in the second). My data is organized as the following matrix:
    0 1
    0 27
    And that gives me a Kappa value of zero, which I find very strange because the assessments disagree in only 1 of the 28 cases. How come these results are considered pure chance?
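
    This is the expected behavior of kappa when one category dominates the marginals. In that table, observed agreement is Po = (0 + 27)/28 = 27/28 ≈ 0.964, but one assessment marks the behavior present in 28 of 28 animals and the other in 27 of 28, so chance agreement is Pe = (28/28)×(27/28) + (0/28)×(1/28) = 27/28 ≈ 0.964 as well, giving kappa = (Po - Pe)/(1 - Pe) = 0. When nearly all ratings fall in a single category, near-perfect raw agreement is exactly what chance alone predicts; this is the same kappa paradox raised in the earlier comment about 19 of 20 agreements.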

  • @alisalm5022
    @alisalm5022 8 years ago

    Should the researcher and the research assistant have the same experience, or not?

  • @zenmedia3782
    @zenmedia3782 4 years ago

    then you take Kendall out for a spin

  • @Brolnox
    @Brolnox 7 years ago

    A bit slow-paced but otherwise an excellent video, thanks.