Clearly and easily explained, thank you so much!
Thanks a lot for the explanation. I really appreciate it.
Excellent video, thank you!
Great job! Thank you!
I really love your presentation, very helpful! Is it possible to assess the reliability of a Likert-scaled test and a dichotomous response scale in one analysis? Thank you so much!
Read Peters (2014), "The alpha and the omega of scale reliability and validity: Why and how to abandon Cronbach's alpha and the route towards more comprehensive assessment of scale quality," p. 57: "Researchers and reviewers alike are satisfied by high values of Cronbach's Alpha (many researchers will cite a value of .8 or higher as acceptable), and in fact, interrelations of items are rarely inspected more closely if Cronbach's Alpha is sufficiently high. This reliance on Cronbach's alpha is unfortunate, yet has proven quite hard to correct (Sijtsma, 2009)."
After Peters, many others have pointed out the shortcomings and general misconceptions of Cronbach's alpha.
Herwich,
I'm aware of the criticisms of Cronbach's alpha but, as I see it, they miss the point. I believe that, in the real world, these measures are used in a heuristic manner. That is, ANY measure of scale reliability is most often used as a general approximation. The major goal is to make sure that you don't do something disastrously wrong by combining variables that have no business being combined. Any measure of internal consistency will point out these obvious errors. The real criterion, as far as I'm concerned, is whether the combinations of variables make theoretical sense; as long as those theories are not entirely contradicted by whatever analysis you choose to do, then go ahead and combine the variables. In other words: interpretability and practicality trump abstract, statistical purity.
There's also the matter of local practice. In my field, social psychology, there is such a strong preference for Cronbach's alpha (as there is for averaging rating scale scores, conducting null hypothesis tests, and so on) that it's usually best to go with the commonly accepted practice, unless your research is ABOUT the commonly accepted practice.
But that's all my personal interpretation. I don't mean to be dogmatic. Do what works best for you and your research.
Bart
Thank you, very helpful!
excellent
Thank you
Can jamovi be used to calculate test-retest reliability?
Yes, although it depends on how you want to do that. If you're only comparing two tests and you have a reasonably large sample (maybe n > 15), then you can just use the regular correlation coefficient, r. If you have more than two tests or a small sample or if you want more control, then you can use the GAMLj module. (Here's some information on that module: mcfanda.github.io/gamlj_docs/mixed_example2.html.) And, of course, you can use the Rj module to run any R code you want in jamovi. But, personally, I would start with r.
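To make the first suggestion above concrete: for two administrations, test-retest reliability is just the Pearson correlation between the two score columns. Here's a minimal sketch in Python (the scores are made up for illustration; any stats package, including jamovi itself, gives the same r):

```python
import numpy as np

# Test-retest reliability for the two-administration case is the Pearson
# correlation between the two sets of scores.  Illustrative data only:
# 20 subjects, the same test given twice.
time1 = np.array([12, 15, 11, 18, 14, 16, 13, 17, 15, 12,
                  19, 14, 13, 16, 18, 11, 15, 17, 14, 16])
time2 = np.array([13, 14, 12, 17, 15, 17, 12, 18, 14, 13,
                  18, 15, 12, 17, 17, 12, 16, 16, 15, 15])

r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")
```

In jamovi you can get the same number from the Correlation Matrix analysis with the two administrations entered as separate columns.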
Thank you :) @datalabcc
@datalabcc How can I calculate Cronbach's alpha and McDonald's omega as a test-retest reliability index? I have a big sample (about 100 subjects) and 10 different tests to compare.
Hi, how come all my questionnaires are grayed out, so that I'm not able to use them in the reliability analysis?
Check the data type: it might be set to text by the automatic detection and need to be changed to either integer or decimal.
KR-20 and jamovi???
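In case it helps: KR-20 is algebraically the same thing as Cronbach's alpha computed on dichotomous (0/1) items, so running jamovi's reliability analysis on 0/1-coded items already gives you KR-20. A small sketch in Python with synthetic data (population variances for simplicity; the equality holds with sample variances too, since the bias corrections cancel in the ratio):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an n_subjects x k_items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0)        # per-item variances
    total_var = items.sum(axis=1).var()  # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def kr20(items):
    """Kuder-Richardson 20 for items coded 0/1."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    p = items.mean(axis=0)               # proportion of 1s per item
    total_var = items.sum(axis=1).var()
    return k / (k - 1) * (1.0 - (p * (1 - p)).sum() / total_var)

# Synthetic responses driven by one latent trait so the items correlate:
# 50 subjects, 8 dichotomous items.
rng = np.random.default_rng(1)
theta = rng.normal(size=(50, 1))                            # trait per subject
X = (rng.random((50, 8)) < 1 / (1 + np.exp(-theta))).astype(int)

print(f"alpha = {cronbach_alpha(X):.4f}, KR-20 = {kr20(X):.4f}")  # identical
```

The two functions return the same value on any 0/1 data, because for a binary item the variance is exactly p(1 - p).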
Shouldn't you run a factor analysis first, and only after that a reliability test?
Maybe. If your scale has a pre-defined structure, either because somebody else developed it that way or you wrote it to be that way, then I think it's fine to go straight to the reliability analysis. Strictly speaking, you'll probably want to do a confirmatory factor analysis instead (which jamovi can also do), but that might be more trouble than it's worth, especially if all you need for your report is the internal consistency.
That's my take on it.
Bart