Thank you, Professor, for the wonderful video! It would be even better if you prepared a video based on your simulation paper! It would be productive for us!
Here you go: osf.io/mjzyw
I actually recorded these screencasts specifically for that paper. I will link it to the video description after it is published. (Or if it is published)
Thank you Mikko!
You are welcome!
Thanks for the video, especially the parallel processing command. I have two questions. 1: Isn't it better to simulate the standard errors separately instead of calculating them from the simulated beta estimates? They might not always be identical. 2: How can I store whether an SEM model converged when simulating? I am currently (probably) using a suboptimal approach: I simulate the RMSEA and chi-square to see if they are missing, and I also check whether the standard errors are missing. Is there any way to store the convergence status directly in the simulation's files as a separate variable?
1) I am not sure I understand the first question. The SEs are calculated from each simulated dataset, just like the estimates are.
2) You can get the convergence status from e(converged). For example, run (any model will do; this one is just for illustration):
webuse census13
sem (medage <- pop)
display e(converged)
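To store that status as its own variable across replications, you can wrap the estimation in an r-class program and have simulate collect e(converged). This is a minimal sketch, not the exact code from the video; the program name (simsem), the data-generating model, and the sem specification are all illustrative:

```stata
capture program drop simsem
program define simsem, rclass
    // Generate one simulated dataset (illustrative model)
    drop _all
    set obs 200
    generate x = rnormal()
    generate y = 0.5*x + rnormal()
    // capture so an estimation failure does not abort the whole run
    capture sem (y <- x)
    if _rc {
        return scalar b = .
        return scalar converged = 0
    }
    else {
        return scalar b = _b[y:x]
        return scalar converged = e(converged)   // 1 if converged, 0 otherwise
    }
end

simulate b=r(b) converged=r(converged), reps(100): simsem
tabulate converged
```

The converged variable then sits in the simulation results file alongside the estimates, so you can tabulate it or condition your summaries on it.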
@@mronkko Thanks. I meant that instead of getting the standard deviation of the estimates with the summarize command, we could directly store the standard error of beta from each sample. That is, instead of (simulate _b) followed by summarize to get the SD of the bs, we directly simulate the standard errors for each sample (simulate _b _se). Do they differ?
@@Rezayyyyyyyyy It depends on what you want to study with the Monte Carlo simulation. If you want to study the precision of the estimates, you look at the SD of the estimates and ignore the SEs. If you want to study whether the SEs are unbiased, you store the SEs and compare the average SE against the SD of the estimates.
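That comparison can be sketched as follows. This is an illustrative example (the program name simreg and the regression model are made up for the sketch), not code from the video:

```stata
capture program drop simreg
program define simreg, rclass
    // One replication: generate data, fit the model, store both quantities
    drop _all
    set obs 100
    generate x = rnormal()
    generate y = x + rnormal()
    regress y x
    return scalar b  = _b[x]     // the estimate
    return scalar se = _se[x]    // the SE the model reports for it
end

simulate b=r(b) se=r(se), reps(1000): simreg
summarize b se
// SD of b      -> the estimator's actual sampling variability
// mean of se   -> the variability the model claims
// If the SEs are unbiased, mean(se) should be close to SD(b).
```

So the two numbers answer different questions, and in a well-behaved setting they should roughly agree.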