Thank you for great explanations. Found you searching for Fleiss Kappa and now keep watching other videos for the sheer pleasure.
No one can explain things as clearly as you. Perfect!
Many many thanks for your nice feedback! Regards Hannah
Thank you for such an excellent yet simple explanation 👍
Beautifully explained!
Glad it was helpful!
Hi, excellent lesson. I would like to know how to proceed when one of the raters' responses is only yes or only no for all questions.
Can you use Cohen's Kappa when two different instruments are used to measure the same thing, rather than two individual raters?
Such a perfect explanation! Thank you ❤
Glad it was helpful!
Many thanks for the clear explanation!
What if I have four levels? Thank you.
1 = definitely exclude
2 = not sure, but tendency to exclude
3 = not sure, but tendency to include
4 = definitely include
Hi, then you can also use Cohen's Kappa; since your four levels are ordered, the weighted Cohen's Kappa may be an even better fit.
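In case you want to try this in code rather than on DATAtab: a minimal sketch using scikit-learn's cohen_kappa_score, which supports linear or quadratic weights for ordered categories like the 1-4 scale above. The ratings below are invented purely for illustration.

```python
# Minimal sketch: unweighted vs. weighted Cohen's Kappa for a 4-level ordinal scale.
# The rating data below is made up for illustration only.
from sklearn.metrics import cohen_kappa_score

# 1 = definitely exclude ... 4 = definitely include
rater_a = [1, 2, 3, 4, 4, 2, 1, 3, 2, 4]
rater_b = [1, 3, 3, 4, 3, 2, 2, 3, 1, 4]

kappa_unweighted = cohen_kappa_score(rater_a, rater_b)                      # all disagreements count equally
kappa_linear     = cohen_kappa_score(rater_a, rater_b, weights="linear")    # penalty grows with distance between levels
kappa_quadratic  = cohen_kappa_score(rater_a, rater_b, weights="quadratic") # large disagreements penalized even more

print(f"unweighted: {kappa_unweighted:.3f}")
print(f"linear:     {kappa_linear:.3f}")
print(f"quadratic:  {kappa_quadratic:.3f}")
```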
Hello. Can you do an example where you use Cohen's Kappa but there are 3 raters? Thank you.
Hi, then you use Fleiss Kappa, here is our video: ua-cam.com/video/ga-bamq7Qcs/v-deo.html Regards, Hannah
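If you would rather check Fleiss' Kappa for three raters in code, here is a minimal sketch using statsmodels; the ratings are made up, and aggregate_raters just converts the subjects-by-raters matrix into the counts table that fleiss_kappa expects.

```python
# Minimal sketch: Fleiss' Kappa for 3 raters rating 6 subjects into 2 categories.
# The ratings are invented purely for illustration.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = subjects, columns = raters, values = assigned category (0 = no, 1 = yes)
ratings = np.array([
    [1, 1, 1],
    [0, 0, 1],
    [1, 1, 0],
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
])

counts, _ = aggregate_raters(ratings)   # subjects x categories count table
print(fleiss_kappa(counts, method="fleiss"))
```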
@@datatab Can you do an example where you use Cohen's Kappa for 6 raters?
Thanks for the great video. Could you perhaps do a video where you use articles to calculate the Kappa?
Great suggestion! I will put it on my to-do list!
Can this test be used for more than 2 categories by two raters, e.g. depressed, not depressed, unknown?
thank you~!! so easy and good explanation!!
Glad it was helpful!
Can Cohen's Kappa be used for 5 raters?
Great explanation of the kappa, thank you very much!!!!!! :)
Glad you liked it!
Excellent, many thanks
Glad it was helpful and many thanks for your feedback! Regards Hannah
Thank you so muchhhhhh!!!!! A great explanation!
Many thanks!
Very informative video, however, I have a question. When manually calculated Kappa was found to be 0.44 while using DATAtab it was 0.23, why this difference?
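A common cause of such a discrepancy is a slip in the expected-agreement term, or the table being read in the wrong orientation. One way to cross-check a hand calculation is to recompute Kappa from the confusion matrix and compare it with a library implementation; a sketch with hypothetical counts (replace them with your own table):

```python
# Cross-check a hand-calculated Cohen's Kappa against scikit-learn.
# The 2x2 counts below are hypothetical; substitute your own table.
import numpy as np
from sklearn.metrics import cohen_kappa_score

confusion = np.array([[20, 5],    # rater A: yes / rater B: yes, no
                      [10, 15]])  # rater A: no  / rater B: yes, no

n = confusion.sum()
p_o = np.trace(confusion) / n                                        # observed agreement
p_e = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / n**2   # agreement expected by chance
kappa_manual = (p_o - p_e) / (1 - p_e)

# Rebuild the raw rating lists from the counts and let sklearn compute Kappa.
rater_a = [1]*20 + [1]*5 + [0]*10 + [0]*15
rater_b = [1]*20 + [0]*5 + [1]*10 + [0]*15
kappa_sklearn = cohen_kappa_score(rater_a, rater_b)

print(round(kappa_manual, 3), round(kappa_sklearn, 3))  # should match
```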
How is Cohen's Kappa different from the Cronbach's Alpha?
Thanks for your video. It's really helpful.
Glad it was helpful! Regards Hannah
Randolph's K = 0.84, what does it mean?
How did you get 0.72?
great explanation
Wonderful Explanation!!! Thanks
Excellent explanation. Thanks a lot
Glad you liked it!
@@datatab you are my life-saver for my statistics subject. Thanks so much from the bottom of my heart.
Great video! Thanks.
Glad it was helpful! Thanks for your nice feedback! Regards Hannah
Excellent, thank you very much
Glad it was helpful!
Very good video.
Thank you!
Why did you make an example where the sums of the different variables are identical (two times 25)? It gave me a headache.
Oh sorry!
2:15 inter-rater reliability
5:45 calculating
well explained
Many thanks : )
Superb, ma'am
👍
Thank you!
You're welcome! : ) Regards Hannah
very informative
Thank you so much for your effort. I have 2 questions:
Is 0.081 a good Kappa value?
Can I run Cronbach's alpha test to assess the agreement between two raters?
0.081 is very bad. There are debates about what is considered good, and it also depends on the context of the research. In general, I'd say that in most cases everything above 0.5 is where it starts to get interesting.
The video actually talks about good and bad values from 9:25 onwards.
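On the second question: the two measures answer different things. Cohen's Kappa measures chance-corrected agreement on categorical ratings, while Cronbach's alpha measures internal consistency of (quasi-)metric scores. A minimal sketch with invented ratings, treating the two raters as two "items" and computing alpha from its standard variance formula:

```python
# Minimal sketch: Cohen's Kappa vs. Cronbach's alpha for the same two raters.
# Ratings are invented for illustration only.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
rater_b = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 1])

# Chance-corrected agreement on the categories themselves:
kappa = cohen_kappa_score(rater_a, rater_b)

# Internal consistency, treating each rater as one "item" (k = 2):
scores = np.column_stack([rater_a, rater_b])
k = scores.shape[1]
item_var = scores.var(axis=0, ddof=1).sum()      # sum of the item variances
total_var = scores.sum(axis=1).var(ddof=1)       # variance of the total scores
alpha = k / (k - 1) * (1 - item_var / total_var)

print(f"Cohen's Kappa:    {kappa:.3f}")
print(f"Cronbach's alpha: {alpha:.3f}")
```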
Perfect
Thanks!