Stats Apps Tutorials: 23. How to run Linear Mixed Effects Models in SPSS, JASP, and R

  • Published 10 Jul 2024
  • Links to video sections and data files are in the description below. In this tutorial video we go through the steps to perform Linear Mixed Effects (LME) analysis using SPSS, JASP and R / RStudio.
    Data files used in this video:
    politeness.sav:
    www.mediafire.com/file/j66yuo...
    politeness.csv:
    www.mediafire.com/file/d4g39u...
    Sections:
    00:00​ Introduction
    00:29 Running LME analysis in SPSS
    03:35 Interpreting LME output in SPSS
    06:07 Running LME analysis in JASP
    10:11 Setting up for LME analysis in R
    13:37 Running LME analysis in R
    16:22 Interpreting LME output from R
    17:35 Plotting the data and looking at pairwise contrasts
    21:34 LME diagnostics in R
    Subscribe here: / @tonycarlsen1627

COMMENTS • 50

  • @charlesgervais3173
    @charlesgervais3173 1 year ago +3

    Only video I've found that clearly explains how to choose random and fixed effects and how to navigate SPSS properly. Thank you so much

  • @MikkoHaavisto1
    @MikkoHaavisto1 2 years ago

    Thank you for being so clear and concise. Easy to follow along.

  • @lureennaser262
    @lureennaser262 2 years ago +2

    Wow! What a clear, easy-to-follow tutorial for beginners like me. Thank you so much. You are a life saver.

  •  2 years ago

    I found the R usage for mixed effects in this video. Thank you Tony.

  • @rongruihuo4275
    @rongruihuo4275 2 years ago

    This is absolutely the best video in the world, your explaining skills are good, and it was an honor to see this vid. Well done.

  • @prof.gobindaroy
    @prof.gobindaroy 1 year ago

    Thank you, very informative!!

  • @user-nj6eo2in9j
    @user-nj6eo2in9j 10 months ago

    Thanks very much, it is very helpful!

  • @Robis9267
    @Robis9267 1 year ago

    Thanks! That was useful.

  • @shishi1976
    @shishi1976 1 month ago

    Thank you very much!

  • @MrEgwEdw
    @MrEgwEdw 2 years ago

    Hello, quick question: in my fixed effects analysis, where the fixed factors were time and industry group, both of these factors had p > .05. When writing the equation, do I include them?

  • @rekabuzassy4496
    @rekabuzassy4496 29 days ago

    Great video! Do you know perhaps how I can plot my significant interaction in SPSS LMM?

  • @stinkbomb13
    @stinkbomb13 2 years ago +1

    How would I do a final visualization of a mixed effects model? And are residuals and/or predicted values typically visualized too?

  • @osamamahmoudm.alomari9491
    @osamamahmoudm.alomari9491 1 year ago

    Thank you for this easy to follow tutorial, but I have a question. You mentioned that "SPSS does not allow us to do post hoc tests for interactions", so how can we do post hoc tests for significant interactions when we have a three-level independent factor???

  • @MaryanneNduka
    @MaryanneNduka 1 year ago +1

    Nice video! Please, do you have a sample document on how to report LMM?

  • @mleoconnor
    @mleoconnor 2 years ago

    Hi! This video was super helpful. I'm wondering if you would be able to do a similar one where the items are correlated?

    • @ninar3806
      @ninar3806 2 years ago +2

      Hey Emily,
      I just came from another video that (if I understand correctly) explains that Linear Mixed Models take care of the violation of independence (that is, observations are not independent and therefore could be correlated). So you should be okay to run the analysis if your observations aren't independent, for example when comparing the same participants in two experimental conditions.
      If your variables are correlated, that seems to be okay.
      ua-cam.com/video/c_tYZxQLoDA/v-deo.html
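
      For readers following along in R, here is a minimal sketch of the kind of model being described, with a random intercept per subject to absorb the non-independence of repeated measures. It assumes the politeness.csv file from the description and the column names frequency, sex, attitude and subject, based on the video's example (adjust to your own data):

        # Random-intercept model: repeated observations from the same subject
        # are not independent, so subject is modelled as a random effect.
        # Column names are assumptions based on the video's example.
        library(lme4)      # provides lmer()
        library(lmerTest)  # adds p-values to the lmer summary

        politeness <- read.csv("politeness.csv")
        m <- lmer(frequency ~ sex * attitude + (1 | subject), data = politeness)
        summary(m)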

  • @theflamekid
    @theflamekid 1 year ago +1

    Hi, I have a question. You now use 3 nominal variables in the model, but what if you use a continuous scale as a variable? How do I put that into SPSS? Because right now, it does not look right...

  • @staceyreyes8575
    @staceyreyes8575 2 years ago

    What version of R is this? I get a warning that says package "lmer" is not available for R version 3.6.3

  • @noiachalom6205
    @noiachalom6205 1 year ago +1

    How do you check for the assumptions?

  • @MomoSimone22
    @MomoSimone22 1 year ago

    Hi there! Great video! Would you have any information on how I can find the standard deviations for my estimated marginal means for the post-intervention results (i.e., time point 2)? SPSS does not give me these descriptives, but I need to report them in my paper.

    • @tonycarlsen1627
      @tonycarlsen1627  1 year ago +1

      Hi Simone, the LME does not calculate SDs for the EMMs; instead, because of the way the variables are calculated, standard errors are provided (and 95% CIs). Indeed, because there can be multiple random effects, the SDs become less meaningful... see a similar question/response here: www.researchgate.net/post/Looking-for-a-way-to-derive-standard-deviations-from-estimated-marginal-means-using-mixed-linear-models-with-SPSS
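
      For the R side of the video, a hedged sketch of how estimated marginal means come out with SEs and 95% CIs (and no SDs) via the emmeans package; m is the lmer model from the sketch earlier in this thread and the variable names are assumptions:

        # EMMs from a fitted lmer model; the output has emmean, SE and CI columns.
        library(emmeans)
        emmeans(m, ~ attitude | sex)   # means of attitude within each sex, with SE and 95% CI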

    • @alvinkao
      @alvinkao 1 year ago

      @@tonycarlsen1627 and @MomoSimone22 Thank you for asking/answering this question. I have a few follow-up questions:
      1. In the case that SDs become less meaningful and are not generated in the emmeans output from a linear mixed effect model, would you recommend simply using the estimated marginal means and SEs when reporting results? Even if this approach is acceptable, oftentimes the SEs are the same or very similar, and do not look good when creating a descriptive table that includes estimated marginal means and SEs of the dependent variable by each of the factors included in the model.
      2. I have seen some papers taking a different approach to creating the descriptive table by reporting the true means and SDs calculated from the raw data. This sometimes causes issues because in some cases (e.g., a random effect included in the model, missing observations in the dataset), the estimated marginal means and the true means (calculated using the raw data) are different. In the worst case, there can be a statistically significant difference (and an apparent numerical difference) between two estimated marginal means based on the emmeans output, while the numerical difference between the two true means (calculated from the raw data) is very small. Thus, reporting the true means/SDs would make the results look very strange. If this happens, should I report both the estimated marginal means/SEs and the true means/SDs, and explain why they are different? Or simply ignore the true means/SDs to avoid confusion?
      Thank you very much for your time in advance!
      Alvin

  • @barbarasimek5050
    @barbarasimek5050 3 months ago

    Hi, I have a question: how can we interpret the outputs in JASP?

  • @nanli1708
    @nanli1708 3 months ago

    Great video and clear explanations! I have a question. At 20:02, the lmer output in R showed that the p-values for SexMale and Attitudepolite were 0.00376 and 0.00311, respectively. However, the anova() output gave p-values of 0.006808 (sex) and 0.003402 (attitude). So, what is the difference between the p-values from lmer and anova?

    • @tonycarlsen1627
      @tonycarlsen1627  3 months ago

      Hey - thanks for the great question! In the summary() of the model, the coefficients and associated p-values represent the individual slope with respect to the "reference category" when the predictors are categorical variables, as they are here. So here, R has specified [Female, Informal] as the reference category, so the slope for sexMale = 116.195 (p = .00376) is just the slope between [Female, Informal] and [Male, Informal] (by the way, the slope is simply the difference between the means in this example). The same thing applies for attitude polite: this slope is simply the difference between the reference category [Female, Informal] and [Female, Polite]. The anova() output, on the other hand, examines the "overall" effect of sex (or attitude) across both levels of the other variable. For example, the effect of attitude is calculated across both sexes, so the effect is different. You can imagine that the effect of attitude is not exactly the same for males as it is for females (if it were, the interaction effect would be p = 1.0). Here the interaction is not "significant", but that does not mean the effect is the same for both sexes either, so the p-value for the individual effect of attitude for females (p = .0031) is different from the effect of attitude across both sexes (p = .0034).
      Hope that helps clarify
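
      A small R sketch of the two outputs being contrasted above, assuming the lmerTest-fitted model m from the earlier sketch:

        # Coefficient table: each slope is a comparison against the reference
        # category (e.g. sexMale vs. the [Female, Informal] intercept).
        summary(m)$coefficients

        # anova() on a lmerTest model: the overall effect of each factor,
        # collapsed across the levels of the other factor.
        anova(m)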

    • @nanli1708
      @nanli1708 2 months ago

      @@tonycarlsen1627 Thank you so much for the detailed information! That cleared up a long-lasting confusion for me... And may I ask another question? Someone recommends using Anova() from the "car" package over the anova() command in the base package. I ran both commands on the output of lmer and got different p-values. Do you have any idea about the difference between these two? Many thanks again.

  • @melaniebenard7630
    @melaniebenard7630 1 year ago

    Hi there! Thank you for your video; I have one question about 4:46, interpreting the output in SPSS: the F statistic is significant for attitude (p = 0.003) but the t statistic in the estimates of fixed effects is not (p = 0.221). How is this possible? I have the same problem with my own data...

    • @tonycarlsen1627
      @tonycarlsen1627  1 year ago

      Hi Melanie - great question, and this is often a bit misunderstood. The test of fixed effects shows that the overall effect of attitude in the model is significant (p = .003). This means that when looking at this factor collapsed across the other factor (sex), the effect is significant. The estimates of fixed effects give you a specific comparison. Here it is asking "is the effect of informal attitude different compared to the reference category (i.e. the intercept)?". Here the reference category is Male-polite, so it is comparing Male-informal to that category, which, as a specific comparison, is not significantly different. Hope that helps!
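
      In R (rather than SPSS syntax), the same idea can be seen by changing the reference level of a factor and refitting: the overall F-tests stay the same, but the coefficient table now reports a different specific comparison. A sketch, assuming the politeness data frame and packages loaded as in the earlier sketch, and assuming the attitude levels are named "informal"/"polite":

        # Change the reference category of attitude and refit; anova() results
        # are unchanged, but each coefficient now compares against "polite".
        # The level name "polite" is an assumption; check levels(politeness$attitude).
        politeness$attitude <- relevel(factor(politeness$attitude), ref = "polite")
        m_ref <- lmer(frequency ~ sex * attitude + (1 | subject), data = politeness)
        summary(m_ref)$coefficients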

  • @niziolek933
    @niziolek933 1 month ago

    Very good tutorial.
    I have a problem with running an LMM on estimated data. I have some missing data in my longitudinal experimental study. I've estimated the missing values with SPSS, but then in the LMM the results for the Type III Tests of Fixed Effects are presented only for each imputation separately and not for the combined data. Do you know any option to get results based on the combined data from all (20) imputations?

    • @tonycarlsen1627
      @tonycarlsen1627  1 month ago

      Thanks for the feedback! So first of all, the LMM is perfectly capable of dealing with missing data (unlike RM ANOVA). Why not try just running the analysis with the data points missing? Some would argue that imputing missing data points can lead to severe bias. If you are dead set on estimating these values, one solution is to calculate a mean of your 20 imputations for each missing data point and then to use that value to "fill in" the data set.
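
      For comparison, in R an lmer model can simply be fitted to the incomplete data set; under R's default na.action ("na.omit"), rows with missing values in the model variables are dropped rather than imputed. A sketch using the assumed variable names from the earlier sketch:

        # Fit on the data set with missing values; incomplete rows are dropped.
        m_mis <- lmer(frequency ~ sex * attitude + (1 | subject),
                      data = politeness, na.action = na.omit)
        summary(m_mis)   # note the reported number of observations actually used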

    • @niziolek933
      @niziolek933 1 month ago

      @@tonycarlsen1627 Thanks for your immediate response. In our team we tend to impute missing data to avoid any bias due to them. About calculating a mean from the imputed datasets - I am not sure, but I heard that 'combined' data in SPSS is something different than just 'a mean score from imputation'. Am I right, or not?
      Additional question - I also have JASP - is it possible to estimate missing data and conduct LMM there? I saw your tips to conduct LMM in JASP in the video, but I wonder how it looks with missing data.

    • @tonycarlsen1627
      @tonycarlsen1627  1 month ago

      @@niziolek933 Yes, you can use JASP as shown in the video, although I'm pretty sure JASP will not do any imputation; again, I'd try it using your data set with the missing data. On another note, I'm unsure how making up (imputing) data points can lead to *less* bias... my opinion is that you should model your data as they are, not some idealized version of them. Hope that helps

  • @monaadnan3829
    @monaadnan3829 2 years ago

    How can we test for normality in a Linear Mixed Model in SPSS? And do we report the p-values of the Type III tests of fixed effects or the estimates of fixed effects? Thanks in advance!

    • @tonycarlsen1627
      @tonycarlsen1627  2 years ago +2

      The assumption of normality in Linear Mixed Models relates to the distribution of the *residuals*. So to do this in SPSS, you need to click on "Save" in the analysis dialog (see this in the video at 3:30, just under the EM Means box). A new dialog box will come up where the last option at the bottom is "Residuals." Select the check box. Once you run your analysis, you will have a new column of data containing the residuals, which you can then run tests of normality on (or plot a histogram of, etc.). You can also select "Predicted values" in the same dialog box, which you can use to plot the predicted values vs. residuals to examine heteroscedasticity, the same way you can in R (see video at 21:57).
      In terms of reporting, the Type III tests of fixed effects are more akin to what would typically be reported for a mixed ANOVA, so people familiar with a normal mixed ANOVA would understand them. In contrast, the estimates of fixed effects are more similar to what would typically be reported for a multiple regression. That is, they provide an estimate of what each factor adds to the slope of the model, and whether it is significant.
      Hope that is all reasonably clear.
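
      The equivalent residual checks in R, for the lmer model m from the earlier sketch, look roughly like this:

        res <- resid(m)      # residuals of the full model
        fit <- fitted(m)     # model-predicted values

        hist(res)                    # rough look at the residual distribution
        qqnorm(res); qqline(res)     # Q-Q plot for normality
        shapiro.test(res)            # formal normality test (very sensitive with large n)

        plot(fit, res)               # predicted vs. residuals: heteroscedasticity check
        abline(h = 0, lty = 2)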

    • @monaadnan3829
      @monaadnan3829 2 years ago

      @@tonycarlsen1627 Thank you so much Tony... that was very clear and very helpful 😊
      One question regarding the normality test: if I have, for example, 2 intervention groups and 2 measuring points (time 1, 2), do I need to test normality of the residuals for both groups at each time point? Or would it be sufficient to test normality of residuals for the intervention groups only?

    • @tonycarlsen1627
      @tonycarlsen1627  2 years ago +1

      @@monaadnan3829 The residuals for the full model are what you test. That is, when doing the model with 2 fixed factors (intervention and time), each with 2 levels, you get a single set of residuals. You could run the models on each factor alone and then test the residuals of each, but in your situation I'm not sure that is very helpful.

  • @bingbung
    @bingbung 3 years ago

    At 4:50, say your interaction was significant: how are you supposed to do post hoc tests on this interaction in SPSS?

    • @tonycarlsen1627
      @tonycarlsen1627  3 years ago +2

      Note here that the interaction is NOT significant - *however*, if it was, you would need to use SPSS scripting to perform the post hoc tests. Try doing a Google search for "spss mixed test subcommand" or see this link: www.ibm.com/docs/ja/spss-statistics/24.0.0?topic=mixed-test-subcommand-command
      That said, I would instead just use JASP or R, as these programs are free and very powerful. Watch the video from 6:09 onward.
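
      In R, a hedged sketch of the interaction follow-up with emmeans (assuming the lmer model m and packages from the earlier sketches), as an alternative to the SPSS TEST subcommand:

        library(emmeans)
        emm <- emmeans(m, ~ attitude * sex)        # cell means of the interaction
        pairs(emm, adjust = "tukey")               # all pairwise comparisons between cells
        pairs(emmeans(m, ~ attitude | sex))        # simple effects of attitude within each sex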

    • @bingbung
      @bingbung 3 years ago

      @@tonycarlsen1627 thank you Tony

    • @suleymanyaman7764
      @suleymanyaman7764 1 year ago

      @@tonycarlsen1627 Is there also a way to follow up on interactions for LMMs in JASP? I think the output does not show the direction of the interaction there either.

  • @Antonio142010
    @Antonio142010 2 years ago

    How do I obtain the "Estimated G Matrix" in R?

    • @tonycarlsen1627
      @tonycarlsen1627  2 years ago

      I think what you may be looking for is the VarCorr function. See rdrr.io/cran/lme4/man/VarCorr.html
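
      A minimal sketch of what that looks like for a fitted lmer model (m from the earlier sketch); VarCorr() returns the estimated variances/SDs of the random effects plus the residual, which is roughly the information SPSS reports in its G matrix:

        library(lme4)
        VarCorr(m)                  # SD / variance of each random effect and the residual
        as.data.frame(VarCorr(m))   # the same estimates in data-frame form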

    • @Antonio142010
      @Antonio142010 2 years ago

      @@tonycarlsen1627 Thanks man. Is this for the mixed effects too, or only the fixed effects? I didn't understand this part.

  • @mollikaroy7424
    @mollikaroy7424 3 years ago

    Can I apply the same command to logistic data?

    • @tonycarlsen1627
      @tonycarlsen1627  3 years ago +1

      No, but you can use the glmer command (generalized linear mixed effects) from the lme4 package in R to do this; you will also need to add "family = binomial" to the command. See stats.idre.ucla.edu/r/dae/mixed-effects-logistic-regression/ for more info, or simply do a search for "glmer logistic regression" to see other examples.
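
      A minimal sketch of such a model; the data frame and variable names here (mydata, correct, condition, subject) are placeholders, not from the video:

        library(lme4)
        # Mixed-effects logistic regression: binary outcome, random intercept per subject.
        m_log <- glmer(correct ~ condition + (1 | subject),
                       data = mydata, family = binomial)
        summary(m_log)
        exp(fixef(m_log))   # fixed-effect odds ratios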

    • @mollikaroy7424
      @mollikaroy7424 3 years ago

      @@tonycarlsen1627 Thanks a lot. Can you also please explain why JASP doesn't show the random effect while R does? How should one explain the variance in the random effect?

    • @tonycarlsen1627
      @tonycarlsen1627  3 years ago

      @@mollikaroy7424 My understanding is that the variance of the random factor ("subject" in our example) tells you how much variability there is between individuals (or whatever the random effect is) across all treatments. Often we don't care much about individual differences except to model them appropriately, so my guess is that is why JASP does not provide them as an output. For more see www.r-bloggers.com/2012/11/making-sense-of-random-effects/

  • @suleymanyaman7764
    @suleymanyaman7764 1 year ago

    So, I guess we can't introduce random slopes in SPSS?

    • @tonycarlsen1627
      @tonycarlsen1627  1 year ago

      Hi Süleyman, I believe that you would have to use the SPSS scripting functions to apply a random slope in SPSS. See the following tutorial which demonstrates how these are specified in the syntax: www.theanalysisfactor.com/spss-genlinmixed/
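
      For comparison with that SPSS syntax, random slopes are written directly in the lmer formula in R; a sketch using the assumed variable names from the earlier sketches:

        # Random intercept and random slope of attitude for each subject.
        m_slope <- lmer(frequency ~ sex * attitude + (1 + attitude | subject),
                        data = politeness)
        summary(m_slope)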