Simple Explanation of Mixed Models (Hierarchical Linear Models, Multilevel Models)

  • Published 19 Oct 2024

COMMENTS • 22

  • @ghaiszriki7912
    @ghaiszriki7912 2 months ago +2

    As Einstein said: "If you can't explain it simply, you don't understand it well enough"
    Very clear and simply explained, thanks a million.
    💚

  • @Jake-nl1jm
    @Jake-nl1jm 17 days ago +1

    One suggestion for your lessons on random effects could be to clarify the difference between a random effect and an interaction. Although it may be too much to go into a full explanation, I feel like a disclaimer/warning could at least be valuable. You explained random effects very well, but an inexperienced person coming into this may follow the theoretical explanation, then see the plots and think, "OH! I know how to do that, you run 'lm(y~x1*x2)' vs 'lm(y~x1+x2)'!" and be in for some pain later. I remember learning stats and having that misconception for a brief period.
    Thank you for the content you produce; it is valuable and appreciated, excellent for learning and great for back-to-the-basics review.
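
    A minimal R sketch of the distinction raised above, assuming the lme4 package and simulated data (all names here are hypothetical, not from the video): the interaction model estimates a separate fixed slope for every cluster, while the mixed model estimates one average slope plus a variance for cluster-level slope deviations.

      library(lme4)

      set.seed(1)
      d <- data.frame(cluster = factor(rep(1:8, each = 25)), x = rnorm(200))
      b0 <- rnorm(8, mean = 2,   sd = 1.0)   # cluster-specific intercepts (simulated)
      b1 <- rnorm(8, mean = 0.5, sd = 0.3)   # cluster-specific slopes (simulated)
      d$y <- b0[as.integer(d$cluster)] + b1[as.integer(d$cluster)] * d$x + rnorm(200, sd = 0.5)

      # Interaction model: a separate fixed slope for every cluster level.
      fit_int <- lm(y ~ x * cluster, data = d)

      # Mixed model: one average slope plus a distribution of cluster-level deviations.
      fit_mix <- lmer(y ~ x + (1 + x | cluster), data = d)

      coef(summary(fit_int))   # x:cluster2 ... x:cluster8 terms, each estimated and tested separately
      summary(fit_mix)         # one fixed slope for x, plus random intercept/slope variances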

  • @gabewinterful
    @gabewinterful 2 months ago +1

    I wish this video had been out four years ago when I started analyzing my PhD data, but I'm glad to see it before the defense so I have some more confidence in explaining the analysis I've done in simpler words 😊 Thanks a lot!

  • @WeirdPatagonia
    @WeirdPatagonia 2 months ago +4

    Hello! Thank you for your video! Greetings from Chile :) That said, I have studied mixed models a bit and I still don't understand why someone would want a fixed intercept or a fixed slope. I know that if you assume the effect is always the same (like calorie consumption and weight gain), you could use a fixed slope. OK. But anyway, if you use random slopes in this situation, these slopes should be really similar, so it wouldn't make such a big difference, right? Why don't we just use random slopes and random intercepts all the time? If they are similar for each group, it will be OK, and if they are different for each group, great, we modeled it. Is there any advantage of a fixed slope over a random one?

    • @QuantPsych
      @QuantPsych 2 months ago +6

      Yes, there's an advantage. You're estimating one less parameter and saving a degree of freedom; your standard errors shrink, and the model is easier to estimate. If you can fix it, always fix it.

    • @WeirdPatagonia
      @WeirdPatagonia 2 months ago +2

      @@QuantPsych Crazy. Thanks for your answer
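
    A minimal sketch of the trade-off described in this exchange, again assuming lme4 and simulated data with hypothetical names: fixing the slope drops the slope variance and the slope-intercept correlation from the random structure, and a likelihood-ratio test shows whether those extra parameters are worth estimating.

      library(lme4)

      set.seed(2)
      dd <- data.frame(cluster = factor(rep(1:10, each = 20)), x = rnorm(200))
      dd$y <- 1 + 0.4 * dd$x + rnorm(10)[as.integer(dd$cluster)] + rnorm(200, sd = 0.5)

      # Random intercept, common (fixed) slope: fewer variance parameters to estimate.
      m_fixed_slope  <- lmer(y ~ x + (1 | cluster),     data = dd, REML = FALSE)

      # Random intercept and random slope: adds a slope variance and a slope-intercept correlation.
      m_random_slope <- lmer(y ~ x + (1 + x | cluster), data = dd, REML = FALSE)

      # Likelihood-ratio test: does letting the slope vary across clusters improve the fit?
      # (REML = FALSE above avoids anova() silently refitting the models with ML.)
      anova(m_fixed_slope, m_random_slope)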

  • @icupsy5830
    @icupsy5830 2 months ago +1

    Thanks for your fantastic videos! In some psychological studies, Simpson's paradox is often "solved" by adding an interaction term (X*cluster) in a GLM and then conducting separate GLMs within each cluster. Could you please help me clarify the differences between this method and HLM or MLM? Thanks!

    • @Lello991
      @Lello991 2 months ago +2

      An interaction term is different from a random effect on several levels. First, they serve two different purposes: an interaction term is needed when you're primarily interested in checking whether the effect of your predictor X is different (or remains significant) across clusters. A significant interaction tells you that the effect of X varies significantly across the levels of the cluster variable. Typically, when you find a significant interaction, you don't discuss the main effect of X (it's biased by definition) and you proceed with what can be called simple-effect or simple-slope analysis: you estimate the effect of X at each level of your cluster. So, if you have 3 clusters, you end up with 3 parameters and significance levels, e.g., the effect of X for cluster 1 is b = 0.5, p < .001; for cluster 2 it is b = 0.2, p = .07; and so on.
      Mixed models don't do such a thing. They're not meant to check whether the effect of X varies across clusters, or at least they don't give you a significance level for it (you can test the significance of random effects using likelihood-ratio tests or other methods that compare models with and without specific random effects, but that's a different thing). The extent to which the effect of X varies across clusters (its variability) is incorporated into the model's random structure. Mixed models estimate the average effect of X across all clusters while accounting for random variation in intercepts and slopes, which is far more informative than a GLM if you're interested in the main effect. Usually, clusters are participant IDs, so they are far greater in number than what you'd use in a GLM with an interaction term. I hope this is helpful, and @Quant Psych approves =)
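
    To make that contrast concrete, a minimal sketch assuming lme4 and simulated data (hypothetical names, not from the video): instead of one tested slope per cluster, the mixed model returns an average slope, a variance describing how much slopes spread across clusters, and shrunken cluster-level estimates.

      library(lme4)

      set.seed(3)
      dat <- data.frame(id = factor(rep(1:30, each = 10)), x = rnorm(300))
      dat$y <- rnorm(30)[as.integer(dat$id)] +                     # person-specific intercepts
               rnorm(30, 0.3, 0.2)[as.integer(dat$id)] * dat$x +   # person-specific slopes
               rnorm(300, sd = 0.5)

      m <- lmer(y ~ x + (1 + x | id), data = dat)

      fixef(m)          # the average effect of x across clusters
      VarCorr(m)        # how much intercepts and slopes vary across clusters
      head(coef(m)$id)  # shrunken cluster-specific estimates (not separate significance tests)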

  • @charlieivarsson2080
    @charlieivarsson2080 2 months ago +1

    Could you show how a mixed model is used to evaluate a pharmacological effect over time? Let's say a psychiatric drug measured at weeks 0, 3, 9, and 12. How do you tell whether the difference is significant?
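
    Not from the video, but a minimal sketch of one common way to set this up, assuming the lmerTest package and simulated placeholder data (variable names like subject, week, and score are hypothetical): week enters as a factor, each patient gets a random intercept for the repeated measures, and the time effect is tested with an F-test plus per-week contrasts against baseline.

      library(lmerTest)   # wraps lme4 and adds df and p-values for fixed effects

      set.seed(4)
      trial <- expand.grid(subject = factor(1:40), week = factor(c(0, 3, 9, 12)))
      drug_effect <- c("0" = 0, "3" = -1, "9" = -2.5, "12" = -3)   # hypothetical effect sizes
      trial$score <- 20 + drug_effect[as.character(trial$week)] +
                     rnorm(40)[as.integer(trial$subject)] + rnorm(nrow(trial), sd = 2)

      # One random intercept per patient accounts for the repeated measurements.
      m_time <- lmer(score ~ week + (1 | subject), data = trial)

      anova(m_time)     # overall F-test of the week effect (Satterthwaite df)
      summary(m_time)   # weeks 3, 9, and 12 each compared with week 0 (the reference)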

  • @luisa1551
    @luisa1551 2 months ago

    I have a question: in microbiology we work with strains, which are clones and genetically identical within a strain; it's the same in cancer research when we work with specific cell lines. If I understood you right, then the results are not independent if we use the same strain or the same cell line for our biological assays?

    • @olenapo4895
      @olenapo4895 1 month ago

      I don't think so. The readouts of every in vitro assay would be independent, since they are continuous data (release of a cytokine, or a protein level) affected by many parameters of your experimental setup, even if you use the same cell line. In the current example, which is about a survey, the responses are collected as scores and there are social factors that determine why they are dependent. Asking twins about their opinions of Trump doesn't mean they can run a distance at the same pace.

  • @sjrigatti
    @sjrigatti 2 months ago

    How is a mixed effects model with random slopes and intercepts different from just fitting 3 different linear models, one for each cluster?
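
    A minimal sketch of that comparison, assuming lme4 and simulated data with hypothetical names: separate per-cluster regressions estimate each cluster in isolation (no pooling), while the mixed model pools information across clusters, shrinks noisy cluster estimates toward the overall average, and reports that average and its uncertainty directly.

      library(lme4)

      set.seed(5)
      d3 <- data.frame(cluster = factor(rep(c("A", "B", "C"), each = 15)), x = rnorm(45))
      d3$y <- c(A = 0.2, B = 0.5, C = 0.8)[as.character(d3$cluster)] * d3$x + rnorm(45)

      # No pooling: three completely separate regressions, each estimated in isolation.
      sapply(split(d3, d3$cluster), function(g) coef(lm(y ~ x, data = g)))

      # Partial pooling: one mixed model; cluster estimates are shrunk toward the average.
      # (With only three clusters the variance components are estimated very noisily;
      #  this mirrors the question rather than recommending three clusters in practice.)
      m3 <- lmer(y ~ x + (1 + x | cluster), data = d3)
      coef(m3)$cluster   # cluster-specific intercepts and slopes, after shrinkage
      fixef(m3)          # the overall (population-level) intercept and slope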

  • @ast3362
    @ast3362 2 months ago

    But we use fixed intercepts when we have categorical data modeled by dummy variables, right? 14:45

  • @Salvador_Dali
    @Salvador_Dali 2 months ago

    If you normalize the data to observe the relative change, for example, I guess it makes sense to fix the intercept, right?

    • @nosaosawe3158
      @nosaosawe3158 2 months ago

      I don't think so. The normalized data would still take different intercepts for each covariate.

  • @qwerty11111122
    @qwerty11111122 2 months ago

    12:58 You can't interpret a linear effect on its own when its square is significant in the model, right? Wouldn't this relate to fixing the intercept while allowing the slope to vary?

    • @QuantPsych
      @QuantPsych 2 months ago

      Correct on the first question: the linear and quadratic components need to be interpreted together. I'm not following your second question.
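
    A minimal sketch of the "interpret them together" point, using a plain lm and simulated data for brevity (not the video's model; all names are hypothetical): test the linear and quadratic terms jointly, and read the shape from the fitted curve rather than from the linear coefficient alone.

      set.seed(6)
      x <- runif(200, -2, 2)
      y <- 1 + 0.3 * x - 0.6 * x^2 + rnorm(200, sd = 0.5)

      m_quad <- lm(y ~ x + I(x^2))

      # Joint test of the linear + quadratic trend, not the x coefficient in isolation.
      anova(lm(y ~ 1), m_quad)

      # The shape is read from the whole fitted curve:
      xs <- seq(-2, 2, length.out = 100)
      plot(x, y)
      lines(xs, predict(m_quad, newdata = data.frame(x = xs)), lwd = 2)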

  • @LucaSubitoni
    @LucaSubitoni 2 months ago

    Is it possible to fit a linear mixed-effects model using a binary predictor (e.g. a time factor: pre vs post) and then compute the significance of this factor?
    I read that the Satterthwaite method can be used to estimate the p-values of the fixed-effect coefficients; is this correct?

    • @LucaSubitoni
      @LucaSubitoni 2 months ago

      Like, the pre vs post measurements must be paired.

    • @nosaosawe3158
      @nosaosawe3158 2 months ago +1

      Yes is the answer to your first question

    • @LucaSubitoni
      @LucaSubitoni 2 months ago

      @@nosaosawe3158 thank you very much
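
    A minimal sketch of that setup, assuming the lmerTest package and simulated placeholder data (hypothetical names): a within-subject pre/post factor with a random intercept per subject makes the comparison paired, and lmerTest reports Satterthwaite-approximated degrees of freedom and p-values for the fixed effects.

      library(lmerTest)   # wraps lme4; summary() and anova() then report Satterthwaite df

      set.seed(7)
      pp <- expand.grid(subject = factor(1:30),
                        time    = factor(c("pre", "post"), levels = c("pre", "post")))
      pp$y <- 10 + ifelse(pp$time == "post", 1.5, 0) +    # hypothetical pre-to-post change
              rnorm(30)[as.integer(pp$subject)] + rnorm(nrow(pp), sd = 1)

      # The random intercept per subject is what makes pre and post "paired".
      m_pp <- lmer(y ~ time + (1 | subject), data = pp)

      summary(m_pp)   # t-test for the time effect with Satterthwaite-approximated df
      anova(m_pp)     # equivalent F-test for the pre/post factor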