This was great! Please keep on sharing. I've just begun my brms journey, and this is really helpful. I also appreciate your pedagogical talent.
There are tons of great examples out there. I recommend material by Richard McElreath, but other folks have more cookbook-like examples.
This was very useful! Thank you, Hank! :)
Very nice video, thank you.
I'm wondering how to model the covariance structure in a Bayesian longitudinal setting, analogous to covariance patterns such as compound symmetry, autoregressive, and Toeplitz in the frequentist world. In the frequentist world, taking serial correlation into account narrows the confidence intervals of the parameters.
How does one model the covariance structure in a Bayesian longitudinal setting? I'm wondering whether a Bayesian random intercept always introduces compound symmetry, similar to a random intercept in a frequentist linear mixed-effects model. I suspect that accounting for serial correlation would narrow the posterior distributions of the model parameters, strengthening the Bayesian inference. However, I'm not at all sure my thoughts are anywhere near correct.
The brms package seems like a very valuable resource, although the parts about covariance structures seem to be still in progress.
If anyone has good theoretical (and, why not, practical) Bayesian references on these covariance-modeling issues (serial correlation etc.), I would appreciate them very much.
One of the many good things about the brms and rstanarm packages is that they use the same syntax as the lme4 package. Anything you want to play around with using known data, you can prototype quickly in lme4 and then move to brms or rstanarm to use a Bayesian approach.
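For what it's worth, recent versions of brms also let you put residual correlation structures directly in the model formula, including compound symmetry (`cosy()`) and autoregressive (`ar()`) terms, which speaks to the question above. A minimal sketch — the data frame `d` and its columns `y`, `time`, and `id` are hypothetical:

```r
library(brms)

# Subject-level random intercept plus AR(1) serial correlation
# in the residuals within each subject
fit_ar <- brm(
  y ~ time + (1 | id) + ar(time = time, gr = id, p = 1),
  data = d
)

# Compound-symmetry residual correlation within each subject
fit_cs <- brm(
  y ~ time + cosy(time = time, gr = id),
  data = d
)
```

And yes, a plain random intercept by itself already induces an exchangeable, compound-symmetry-like marginal correlation among a subject's observations, in the Bayesian model just as in the frequentist one.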
@hankstevens7628 Thank you.
I recently ran into 'Applied longitudinal data analysis in brms and the tidyverse' by Solomon Kurz. Based on that release I understood that error covariance structures were still under construction in brms, but it seems I got that wrong: based on your comments, the package already makes it possible to modify the error covariance structure.
I have to dive more deeply into the vignettes and the reference manual. A book about brms would be priceless for someone like me, who is not fully able to quickly absorb information from vignettes and reference manuals.
But as you pointed out, since the syntax is the same as lme4's, maybe the move is to get my hands on frequentist lme4 books to learn the syntax, and then just move seamlessly to brms :)
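The formula syntax really does carry over almost unchanged. A minimal side-by-side sketch, with a hypothetical data frame `d` containing `y`, `time`, and a grouping variable `id`:

```r
library(lme4)
library(brms)

# Frequentist mixed model (REML): random intercept and slope per id
m_freq <- lmer(y ~ time + (time | id), data = d)

# Bayesian version: identical formula, handed to brm() instead
m_bayes <- brm(y ~ time + (time | id), data = d)
```

So anything you learn from an lme4 book about writing random-effects formulas transfers directly.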
This was awesome! I was wondering how we would go about plotting a particular regression fit from brm() using ggplot2. (Let's say I want to show the fitted curve over the actual data points.)
Here is a nice example of doing that: cran.r-project.org/web/packages/tidybayes/vignettes/tidy-brms.html#posterior-predictions-kruschke-style
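If you want to stay closer to base brms than the tidybayes approach in that link, you can also use fitted() to get the posterior mean and credible band on a grid and layer them over the raw points. A sketch assuming an already-fitted model `fit` with a single predictor `x` and response `y` in data frame `d` (all names hypothetical):

```r
library(brms)
library(ggplot2)

# Grid of predictor values; re_formula = NA gives the population-level fit
newd <- data.frame(x = seq(min(d$x), max(d$x), length.out = 100))
pred <- cbind(newd, fitted(fit, newdata = newd, re_formula = NA))
# fitted() adds columns Estimate, Est.Error, Q2.5, Q97.5

ggplot(d, aes(x = x, y = y)) +
  geom_point() +
  geom_ribbon(data = pred, aes(x = x, ymin = `Q2.5`, ymax = `Q97.5`),
              inherit.aes = FALSE, alpha = 0.3) +
  geom_line(data = pred, aes(x = x, y = Estimate),
            inherit.aes = FALSE)
```

For a quick look without any ggplot2 code, `conditional_effects(fit)` draws much the same plot automatically.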
Code, finally!!!
Great explanation, Hank! Do you by chance have other videos, lectures, or blog posts where you discuss more complicated cases? By complicated cases, I mean cases where covariates are correlated with predictors, confounded with another variable, or involved in mediation or moderation.
Sorry to ask, but how does one specify hyperparameters in brms?
I think it is a bit more complicated than '|' vs. '||'.
The purpose of this video is to show rank beginners that doing Bayesian regression in R is almost as easy as REML. How to specify hyperparameters depends on what they are for. If you just want to estimate a hyperparameter as a variance component, specify the random effect as you would in lmer().
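If what's meant is setting the priors on those hyperparameters, brms exposes them through get_prior() and set_prior(). A sketch with hypothetical names (`y`, `x`, grouping variable `g`, data frame `d`):

```r
library(brms)

# See which parameters can be given priors, and their defaults
get_prior(y ~ x + (1 | g), data = d)

# Set priors on the fixed effects and on the group-level SD
fit <- brm(
  y ~ x + (1 | g),
  data = d,
  prior = c(
    set_prior("normal(0, 1)", class = "b"),    # slopes
    set_prior("exponential(1)", class = "sd")  # random-intercept SD
  )
)
```

Leaving `prior` out entirely uses brms's weakly informative defaults, which is why the basic call in the video looks just like lmer().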