Determining Inter-Rater Reliability with the Intraclass Correlation Coefficient in SPSS

  • Published Oct 2, 2024
  • This video demonstrates how to determine inter-rater reliability with the intraclass correlation coefficient (ICC) in SPSS. Interpretation of the ICC as an estimate of inter-rater reliability is reviewed.
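
    A minimal syntax sketch of the procedure the video demonstrates (Analyze > Scale > Reliability Analysis in the menus), assuming the three raters' scores sit in placeholder variables rater1 to rater3; this is GUI-style syntax written for illustration, not copied from the video:

      * Two-way mixed-effects, consistency ICC with a 95% CI.
      * Single Measures = reliability of one rater; Average Measures = reliability of the raters' mean.
      RELIABILITY
        /VARIABLES=rater1 rater2 rater3
        /SCALE('ALL VARIABLES') ALL
        /MODEL=ALPHA
        /ICC=MODEL(MIXED) TYPE(CONSISTENCY) CIN=95 TESTVAL=0.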

COMMENTS • 70

  • @SierraKyliuk · 5 years ago · +27

    My thesis is due in 2 hours; you just saved me so much stress, kind sir

    • @jubilent07 · 4 years ago

      Ahhh I'm in the same boat my friend ahhaha

    • @batmanarkham5120 · 4 years ago

      Wow two hours :-o

    • @SierraKyliuk · 4 years ago · +1

      @batmanarkham5120 my computer crashed and I lost half of my thesis before it was due; it was a rough time

    • @batmanarkham5120 · 4 years ago

      @SierraKyliuk oh, I hope you got your thesis cleared

  • @halealkan7940 · 7 years ago · +1

    Thank you so much. I learned the things that, unfortunately, I couldn't learn from my statistics teacher. You saved my life writing my dissertation.

    • @DrGrande · 7 years ago

      You're welcome, thanks for watching.

  • @ryuguji6504 · 2 years ago

    I'm so grateful for this video!!! Thank you so much, you are one of the reasons I might pass the defense (if I pass T.T). Thank youuuu

  • @normallgirl · 3 years ago

    Thank you SO much, Dr. Grande! This is more than helpful. Bless you!

  • @kemowens6356 · 4 years ago · +1

    This was extremely helpful. And you used almost the same data I had, so it really helped!

  • @thanhnhanphanthi1344 · 2 months ago

    Thank you so much! Very useful information!❤

  • @bmcvinke · 4 years ago · +2

    Thanks for this video! What is the correct APA way to report this analysis? Thanks!

  • @Dalenatton · 2 months ago

    Dr. Grande, I wonder whether it is possible to use the Intraclass Correlation Coefficient (ICC) with a two-way mixed model to calculate inter-rater reliability when the raters have rated only a subset of the total subjects. For example, Instructor 1 rates all subjects (n = 30), while Instructor 2 rates the first half (n = 15) and Instructor 3 rates the remaining half (n = 15). Thank you so much for your help!

  • @nicolecatubigan · 5 months ago

    Hello, I just want to ask: I have 4 raters and they have rated a rubric running from 1 to 4 (1 = beginning, 2 = developing, 3 = competent, and 4 = accomplished).
    Do I need to put labels on their ratings?

  • @Radiology_Specific · 10 months ago

    Thanks for your video. I have a question:
    What does this error indicate? "Kappa statistic cannot be computed. It requires a two-way table in which the variables are of the same type."
    Even though I used a two-way table in which the variables are of the same type (both nominal), I get that from SPSS.
    What should I do about it?

  • @connormacmillan5427 · 1 year ago

    Can you also do this with ordinal data? Let's say I have a rubric set up as follows: bad (1), good (2), very good (3), excellent (4)?

  • @manarahmed2985 · 10 months ago

    Thank you. This was helpful, but I have a question. What should I do if the ICC is less than 0.7? Should I delete part of the data, or what?

  • @ShafinaIVohra · 4 years ago · +1

    So if we have multiple raters here and each rater is rating each participant on a scale of 1-5 for each item, how would that work?

  • @sitimunirah1298 · 2 years ago

    What if the sig. is more than 0.05 but the ICC is above 0.7? Does that still mean excellent agreement?

  • @sofiaskroder5584 · 7 months ago

    Hi Todd! Thank you for a great video. I was wondering if you could use the ICC for determining reliability between both 1) one rater who does the same measurement twice, and 2) two raters who do the same measurement one time each. If so, which numbers in the output show the answers to my two questions? Many thanks!

    • @sofiaskroder5584 · 7 months ago

      Can I treat rater 1 and rater 2 as the same person doing the same measurement twice, and rater 3 as the other person, and produce two separate sets of output?

  • @LoriBanosco · 7 years ago · +1

    Hi,
    Thanks for the video, it helped a lot.
    Is it possible to do this for more than one variable within just one command? I have more than 400 variables, each rated by 3 raters, and I don't want to write this command 400 times (see the macro sketch below).
    Thanks from Germany
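
    One way to avoid repeating the command is the SPSS macro facility. A minimal sketch, assuming the hypothetical naming pattern v1_r1, v1_r2, v1_r3 through v400_r3 (adjust the names and the loop bound to the real data):

      * Loop one RELIABILITY/ICC run over 400 variable triples.
      DEFINE !iccloop ()
      !DO !i = 1 !TO 400
      RELIABILITY
        /VARIABLES=!CONCAT(v,!i,_r1) !CONCAT(v,!i,_r2) !CONCAT(v,!i,_r3)
        /SCALE('ALL VARIABLES') ALL
        /ICC=MODEL(MIXED) TYPE(CONSISTENCY) CIN=95.
      !DOEND
      !ENDDEFINE.
      !iccloop.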

  • @marianna9371 · 4 years ago · +3

    This was really useful, thank you!

  • @amelamel4335 · 6 years ago · +2

    I have been checking a couple of videos, but this is by far the most explanatory one. Thank you so much.

    • @DrGrande · 6 years ago

      You're welcome - thanks for watching

  • @bayushiep · 3 years ago

    What's the difference from Cronbach's alpha?

  • @seekewl5418 · 3 years ago · +1

    That was extremely helpful, thank you! I have a question, though. If we are gathering ratings of concepts from three raters, for example, could we just take the average of their ratings as one overall score that represents each concept? I guess that would be the same situation as your example of knowing which score to take. Thanks!
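
    If averaging is the goal, a one-line sketch (variable names hypothetical):

      * Mean of the three raters' scores for each concept (row).
      COMPUTE concept_score = MEAN(rater1, rater2, rater3).
      EXECUTE.

    The "Average Measures" ICC in the output estimates the reliability of exactly this kind of averaged score.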

  • @littlefur · 4 years ago

    Thanks so much! This video solved exactly the problem I had. May I ask whether SPSS can calculate Fleiss' kappa as well?

  • @umangternate · 1 month ago

    Thank you, Doc... You saved me

  • @Chocotreacle · 2 years ago

    What do you do if all the instructors (raters) give the same score? How do you calculate that? When I tried to do this in SPSS it gave me nothing.

  • @hakanbayezit5908 · 4 years ago

    Sir, two raters are assessing 80 essays, giving a total score between 0 and 20 (content: 6, organization: 5, language use: 6, punctuation: 3). To find inter-rater reliability, can we use the kappa technique or the technique you teach above? Please advise.

  • @alihyaa_me · 3 years ago

    How do they rate the variables? Help needed, please: how did they come up with 1 or 2?

  • @Alinka5s · 8 years ago · +3

    Thank you for the video. It is very helpful!!

    • @DrGrande · 7 years ago

      I'm glad you found the video useful. Thanks for watching.

  • @AtheerAl · 5 years ago · +1

    Amazing work!

  • @alzalan2001 · 6 years ago · +1

    Thank you for the video, it is really very informative. How can I conduct a Bland-Altman analysis from this?

    • @DrGrande · 6 years ago

      You're welcome.

  • @karwanmustafa6633 · 8 years ago

    Hi Dr. Todd,
    I would be most grateful if you could briefly address my concern. I have an English speaking exam, scored out of 100, and 48 respondents. I had two raters score the respondents' English speaking performance. Now I would like to determine how valid the raters' scores are. Could you please explain whether I need to use a correlation coefficient or Cohen's kappa? As far as I know, I need to use a correlation coefficient, but I just wanted to be sure about it. I would be thankful if you could explain that briefly, please.
    Kind regards,
    Karwan

  • @nicholaslim9078 · 7 years ago

    What if the ratings are 0.1 apart, would you still consider them equal? Example: rater 1 = 3.4 vs. rater 2 = 3.5?? Help!!

  • @raz2936 · 1 year ago

    Don't you eat? You talk in such a low voice that I cannot hear clearly.

  • @utaaprepschool428 · 5 years ago

    Can we do the same analysis as yours with 12 instructors?
    Nothing would change except that there would be 12 instructors instead of 3.

  • @drabhijeetghosh · 9 years ago

    Hello, I would like to know how to calculate the ICC for the clustering effect, as in producing robust 95% confidence intervals (CIs) while accounting for the cluster effect. For example, I have data for multiple patients, clustered by doctors and by the hospitals within which the doctors and patients are nested, and I want to produce means and other statistics for medications etc., but calculate the ICC for the groups of hospitals and doctors and adjust for the cluster effects of source hospital and source doctor.
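
    One common route for this in SPSS is the MIXED procedure with random intercepts, then forming the ICC by hand from the variance components; a rough sketch, with all variable names hypothetical:

      * Random intercepts for hospital and for doctor nested within hospital.
      MIXED outcome
        /PRINT=SOLUTION TESTCOV
        /RANDOM=INTERCEPT | SUBJECT(hospital) COVTYPE(VC)
        /RANDOM=INTERCEPT | SUBJECT(hospital*doctor) COVTYPE(VC).
      * ICC for hospitals = var(hospital) / (var(hospital) + var(doctor) + var(residual)),
      * read from the Estimates of Covariance Parameters table.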

  • @hamidD2222 · 7 years ago

    If the agreement is found to be low because one rater gives very different ratings compared to the others (≥3 raters in total), how can this "different" rater be identified?
    And is it possible to add the references too, please? They would help in case we need further details not explained here.
    Thanks

  • @chrisgamble3132 · 7 years ago

    Hello,
    In a study I am participating in, there are 10 patients who are measured by 2 raters across 3 separate measurement occasions. We are measuring a continuous variable with 2 different instruments, and I intend to use ICC(2,1) to calculate inter-rater reliability. As I understand it, in SPSS I would have to create columns (for the raters) of 10 rows (the patients) for each instrument and variable. However, the 3 measurement occasions are spread out in time. Therefore, for each measurement occasion I need to calculate the inter-rater reliability separately. This leaves me with 3 values, and I want to be able to present 1 value in my report/thesis. How do I go about determining this value?
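
    For reference, ICC(2,1) corresponds to the two-way random, absolute-agreement, Single Measures row of the SPSS output; a per-occasion sketch (variable names hypothetical):

      * ICC(2,1): two-way random effects, absolute agreement, single measures.
      RELIABILITY
        /VARIABLES=rater1_t1 rater2_t1
        /SCALE('ALL VARIABLES') ALL
        /ICC=MODEL(RANDOM) TYPE(ABSOLUTE) CIN=95.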

  • @esrakutlu4318 · 5 years ago

    Hi, I have a question about my study.
    We asked 5 different observers to evaluate the degree of a bone dysplasia from 0 to 3. They evaluated the same specimens at two different time points. We want to assess the intra- and inter-rater reliability. The video you provide seems to evaluate 3 different raters, with student scores from 0 to 10. Should I apply the same principles for inter-rater reliability in my test? And what should I do for "intra-rater" reliability? Thank you!
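
    For the intra-rater side, one common sketch is to treat each observer's two occasions as the columns of a separate ICC run (variable names hypothetical):

      * Intra-rater: observer 1's two readings of the same specimens.
      RELIABILITY
        /VARIABLES=obs1_time1 obs1_time2
        /SCALE('ALL VARIABLES') ALL
        /ICC=MODEL(MIXED) TYPE(ABSOLUTE) CIN=95.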

  • @chollanotk · 5 years ago

    What is the exact meaning of Sig. (0.000)? Is it a p-value from a certain statistical test?

  • @SPORTSCIENCEps · 3 years ago

    Thank you for the video!

  • @fpires7 · 9 years ago

    Hello Todd, I have an analysis with more than two raters and 100 items. Is there a way to look at agreement for each of the items to understand where raters are disagreeing?

  • @iqrakhalid1561 · 6 years ago

    Please tell me which method of reliability I can use when I have 6 teachers and 6 psychologists, and each member rates each item of my scale.

  • @bahadroktay4396 · 6 years ago

    First of all, thank you for this helpful video. I have two questions; if you could answer, I'd be very appreciative:
    First, could you please give me a book reference for the .70 cutoff?
    Second, if we have 4 raters and 12 questions (raters can give scores of 0-10 for all answers) to evaluate, what do I have to do?
    a) 12 different reliability analyses, or
    b) Just one analysis including all 12 questions?
    If the answer is a, do I need a correction? And if some of the intraclass correlations are below 0.70 but most of them are above 0.70, how should I interpret these results?
    Thank you for your kind help.

    • @leeken86 · 4 years ago

      I have the same question. Have you solved your problem? Thanks

  • @batmanarkham5120 · 4 years ago

    But isn't the ICC for continuous data?

  • @frohnzy04 · 4 years ago

    Can you do a video on how to interpret the ANOVA table along with the ICC? I haven't found a video that includes that. Thank you

  • @putrinurshahiraabdulrapar3175 · 7 years ago

    Hi, I'm doing research on inter-rater reliability and using the ICC for my data analysis. However, my data are ordinal and non-normally distributed. Are my ICC results still valid?

  • @nicolecarmona4164 · 8 years ago · +2

    Hi Todd, I'm just wondering where you are getting these interpretation ranges from. Your tutorial was extremely useful and I would like to have a citation for my interpretation values! Thank you in advance.

  • @juliethasagun1634 · 4 years ago

    Is this the adjusted or unadjusted ICC?

  • @VijayKumar-yd2qv · 1 year ago

    Excellent, nicely described.

  • @mynnzero · 8 years ago · +1

    excellent, thank you!

  • @gmcorpuz09 · 4 years ago

    Can the ICC be used for 9 raters also?

  • @bradleyfairchild1208 · 8 years ago

    As an inexperienced instructor I am always interested in how "strict" or "easy" I am with my grading, so this video on seeing how similarly the three teachers graded was particularly interesting to me. Thanks, Dr. Grande.

  • @ddelmoni · 5 years ago

    Nice job...thanks!

  • @mioulin · 6 years ago

    Thank you!

  • @suzannahstone · 3 years ago

    THANK YOUUUUUUU

  • @nghiachedinh · 9 years ago

    thank you so much!

  • @saraantonini3661 · 8 years ago

    Very helpful! thank youuu

    • @DrGrande · 7 years ago

      I'm glad you found the video useful. Thanks for watching.

  • @brambonnaerens7133 · 7 years ago

    big help, thank you!

    • @DrGrande · 7 years ago

      You're welcome - thanks for watching.