Great information. What are your thoughts on manually correcting for measurement unreliability for each observed variable using (1 - [α of X]) * (Var of X) in SEM? This correction might be useful in situations where you cannot use latent variables (e.g., insufficient sample size).

Hello Kiet, thank you for watching!

I'm assuming you mean using single indicators in SEM, fixing their loadings to 1, and fixing their error variance parameters according to the formula you posted, right? I think that can work well as long as you have appropriate reliability estimates that apply to your data/sample at hand (the same issue as with Spearman's computational correction-for-attenuation formula: the reliability estimates need to be accurate so as not to over- or undercorrect the correlation).

Cronbach's alpha (as shown in your formula) may or may not apply depending on the nature of the variables. Specifically, Cronbach's alpha is only appropriate for essentially or strictly tau-equivalent measures. With the model-based multiple-indicator CFA approach that I showed in the second half of the video, the assumption of essential and/or strict tau-equivalence is testable through constraints on model parameters (equal loadings for essential tau-equivalence; equal loadings and equal intercepts for strict tau-equivalence). However, even when the variables have unequal loadings (congeneric measures), the multiple-indicator CFA approach still works, whereas your formula with Cronbach's alpha would likely lead to an overcorrection. (Cronbach's alpha underestimates reliability for congeneric measures, which leads to an overcorrection.)

An advantage of the multiple-indicator, model-based CFA approach that I showed in the second half of the video is that the reliability estimates are "built in" (inferred directly from the multiple indicators by estimating their error variance components as free model parameters), so you can be sure the correction for attenuation (the true-score correlation estimate) is accurate as long as the model as a whole fits the data (which is testable via a chi-square test of model fit). Also, since you mentioned sample size issues, using more (rather than fewer) indicators can often compensate for a smaller sample size. That is, all other things being equal, having more indicators tends to be better (as long as all indicators are good indicators and unidimensional within each factor).

In conclusion, I would typically prefer the multiple-indicator, model-based CFA approach that I showed in the video, in which you can freely estimate the error variance parameters, because the multiple-indicator approach is more general and flexible.

I hope this helps!
Christian Geiser
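To make the overcorrection point concrete, here is a small simulation sketch (my own illustration, not from the video, with arbitrary loadings and error variances chosen for the example): it generates three congeneric indicators with unequal loadings, computes Cronbach's alpha for the sum score, and compares the alpha-based error-variance fix (1 - α) * Var(X) against the fix based on the reliability that is known to be true in the simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # large sample so estimates sit close to population values

# Three congeneric indicators: one common factor, unequal loadings (assumed values)
loadings = np.array([0.9, 0.6, 0.3])
error_sd = np.array([0.5, 0.5, 0.5])
factor = rng.normal(size=n)
items = factor[:, None] * loadings + rng.normal(size=(n, 3)) * error_sd

# Cronbach's alpha for the sum score X = item1 + item2 + item3
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total = items.sum(axis=1)
total_var = total.var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)

# True reliability of the sum score, known from the simulation setup:
# Var(true part) = (sum of loadings)^2 * Var(factor)
true_rel = loadings.sum() ** 2 / (loadings.sum() ** 2 + (error_sd ** 2).sum())

# Single-indicator error-variance fix: (1 - reliability) * Var(X)
fix_alpha = (1 - alpha) * total_var     # alpha-based fix
fix_true = (1 - true_rel) * total_var   # fix using the true reliability

print(f"alpha = {alpha:.3f}, true reliability = {true_rel:.3f}")
print(f"alpha-based error variance fix = {fix_alpha:.3f}")
print(f"true-reliability error variance fix = {fix_true:.3f}")
# For congeneric items, alpha < true reliability, so the alpha-based fix
# sets the error variance too high (an overcorrection).
```

In this setup the true-reliability fix recovers (approximately) the actual total error variance of the sum score (0.75 here), while the alpha-based fix overshoots it, which is exactly the overcorrection described above.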
Thank you so much
@QuantFish thanks for the explanation!
Thank you so much for this information. I'm studying for the board exam for psychometricians. I hope I pass lol
Good luck with your exam!
Best, Christian Geiser