This video is the first video of yours that I’ve come across and I just wanted to say, I absolutely love your teaching and presentation style!! Your enthusiasm and explanation style are so engaging, it’s awesome; and, the way you break things down whilst also simplifying concepts is great, especially because such concepts are generally taught/explained in a much more complex way in university courses, textbooks, and in other UA-cam/online tutorials - together, I feel this all really helps with improving understanding of all concepts discussed. I’m a postgrad student and would have loved to have access to this type of content in my earlier years when learning stats - I must say though, I’ve still learnt some new info from this tutorial!! Would love to see more R programming tutorials like this one - if you’re thinking about posting more, please do because you definitely have the gift of making stats engaging and fun (descriptive words that you don’t usually find when people are talking about stats 😅). Thanks for this content!! 🙌
I am learning mixed effect linear models - could you do a video on how to interpret the outcome of those types of models? I have tons of info on the modeling aspect but not entirely sure how to leverage the output effectively. I appreciate the humor and thoughtfulness in your videos to make them interesting.
After finishing this video, I realize I've never properly interpreted a model before, even though I've been working with data for several years! Amazing video, you are a good teacher!
Hi Chloe, this was a fabulous explanation of how GLMs work: clear, concise, and it helped me no end to get to grips with my GLMM on factors affecting pollinators visiting annual bedding plants! Thanks so much, not least for the introduction to DHARMa!! More please, I love your friendly style.
Hi Fiona,
Hope you are well! I came here for the same reason, as I am a student working on pollinator interactions and the effect of different factors on them.
Is there a better way we can connect?
I'm looking forward to your reply.
This was super useful; I hadn't come across the DHARMa package before and it's so much simpler than what I was trying to do. Thank you so much!
I will definitely come back to this video. Thanks for sharing!
I just arrived here, and I have to say thank you soooo much for this video!
You are a great teacher
Hugs from Brazil 🥰
This is exactly what i needed for my university report. Thank you so much!
Girl!!!! Thanks so much!! I really want more videos like this
Thanks so much for this video, I feel like I have some clarity in understanding GLMs and my outputs so much more now. It feels good to have this confidence!!!
Thank you so much for these insights! It helped me interpret the data-analysis of my bachelor's thesis!
Thank you very much Chloe, you are the best at explaining these tricky things. Please, could you do a video about GLMs including interactions among factors?
This really helped me fill in some knowledge gaps I had about the GLM, thanks so much 😊
You manage to make statistics fun and cool! Wow. Thank you so much. You are great
Thank you very much! You helped me understand statistics in R
Very good explanation, a helpful reminder. And I appreciate the tip on the DHARMa package.
You are incredible! I enjoy every second I watch your video
I love how you present it :) thank you!
The model is modelling. That's meme material right there.
Thanks for the video Chloe! Finally learned some tricks with GLMs
Thank you! Amazing explanation! It really helped me understand key aspects of a GLM. And thanks for the tip on the DHARMa package!
OMG, this is pure gold! Thank you so much
Statisticians like to generalize, and the GLM is a generalization of lots of special cases. For example, OLS regression is a special case of a GLM with a Gaussian family and identity link. Fit an lm() and a Gaussian GLM, and you'll get identical results.
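A minimal sketch to check that claim yourself (using the built-in iris data purely as an example, not necessarily the video's data):
# identical coefficient estimates from lm() and a Gaussian GLM
fit_lm  <- lm(Sepal.Length ~ Petal.Length, data = iris)
fit_glm <- glm(Sepal.Length ~ Petal.Length, family = gaussian(link = "identity"), data = iris)
coef(fit_lm)
coef(fit_glm)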
This was really helpful, clear, and fun to watch! Thank you very much :)
Thank you so much Chloe!
I love your presenting style and your straightforward explanations, thank you! I wonder how your plots are being generated as you go along?
Thank you very much for the clear explanations! Especially valuable for judging the validity of the GLM, and that lovely DHARMa package.
Omg! Thank youuu ❤ The way you explained.... amazing 😊
Hi Chloe, just watched this and I have to say thank you so much for speaking in English for all of us not super familiar with statistics. This was so easy to understand, it puts most professors I've had to shame. Any chance you could explain working with a non-normal distribution, interpreting a GLM Poisson? I'm struggling with my data analysis for my thesis :)
I will make this the focus of my next video!
Is that video out now? 😊
Thanks Chloe, how can I make a plot like the one you have at 3:06 for the different species?
Thank you, great video!
Thanks!
Great explanation! Thank you so much for the video.
Could you please make a video about which GLM models can be used when the data is not normally distributed?
Hello! Thank you for the video! May I ask you to explain in detail what the Estimates mean in a GLM, please? Or where can I read more about it?
chloe ily this is such a good video
You are great!
Wow, that means the world. Thanks! If there's anything you'd like to learn in data viz, don't hesitate to ask! :-)
Super helpful! What model would you use for data that is not normally distributed?
Thanks so much. Do we need to include the distribution in the code?
Very good. Now let's say you had the 3 flower variables as categorical, and you wanted to generate ORs based on whether the species had long or short (1/0) sepal length. How would you do that for a model where the coefficients are expressed as differences from the reference level's coefficient?
Loved your video! Have you ever used check_model() from the performance package?
I haven't! I just looked it up and it looks pretty cool. It seems very similar to DHARMa but perhaps a bit more flexible, which can be good or bad depending on your handle on stats (for example, I see that you can compare models with different parameters from different datasets within the same call! That seems... dangerous... and can be super misleading if you don't know what is underlying the output).
For sure! I am slowly but surely making the transition to R by way of SPSS and then Stata, and am constantly amazed at how flexible R can be, for better or for worse! I have only tinkered with check_model(). I like that it offers a vehicle to visually inspect the most salient OLS assumptions at once, and I especially love the added guidance on what you should be looking for to guide your interpretation. A thousand thanks!
I would have loved to have found that vid like a week ago.
Thanks for the video. I want to ask you: if I have 2 categorical factors and one numeric response, can I do a GLM? Thank you, I am just starting with GLMs 😅
Hi, why is RStudio producing different results even though I am using the same call and data?
Hmmmmm, I wouldn't know without looking at your code, but you can check out the code of this video that I have annotated on my GitHub to see if there are any mismatches. github.com/chloefouilloux/GLMOutput/blob/main/GLM_Output.Rmd
What if I want to test interactions (they were significant in a Tukey test)? What commands should I write?
Is DHARMa only for GLMs? Is there something similar for GLMMs? Great video!
It actually works best for GLMMs! More troubleshooting options. Check out their super detailed vignettes here: cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html
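If it helps, a minimal sketch of what that might look like for a GLMM (assuming a Poisson model fit with lme4; the data frame and variable names here are made up):
library(lme4)
library(DHARMa)
m <- glmer(visits ~ treatment + (1 | site), family = poisson, data = mydata) # hypothetical count data
res <- simulateResiduals(m)  # simulation-based scaled residuals
plot(res)                    # QQ plot + residual vs. predicted checks
testDispersion(res)          # test for over/underdispersion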
Hi there, thank you for sharing ❤, but I have a question.
If the model has multiple predictors and one of them is continuous, how do I change the intercept for that continuous variable after I transform the data?
Thank you
Hi Suci! Great question. Short answer: (1) First transform the data, and **save it as a new column in your data sheet**, (2) run the model with this updated variable. Long answer (example, lol): Let's say we had mass as a predictor. We have a data frame called *df*. Now, let's say we want to transform mass. I would first load the tidyverse package, and then use the function "mutate" to make a new (transformed) variable!
#some code!
library(tidyverse)
df1 <- df %>%
  mutate(mass_new = (mass - mean(mass)) / sd(mass)) # centre and scale mass
#Now, see above, we have our NEW variable called "mass_new". So, all we have to do now is use this in our model! (In the fake code, I have saved it here as a new data frame to avoid confusion)
glm( y ~ mass_new + x2, data = df1)
The model above will then be using your transformed variable.
Thank you for your answers, I'll try it first 🙏🙏☺️
Hi Chloe, what's your background (profession)? Academia or ...???
Hi! I am in academia, yes! Which is why the videos are quite irregular, but I am going to try to get one up before the holidays!
thank you for the information.
Thanks for the feedback 😸 I'm working on a follow-up video that might include interactions and other model families. If it's okay could you let me know what info you felt was lacking? I'm always trying to improve on explanations!
🤐🤐🤐🤐🤐🤐🤐
@@chloefouilloux Ohh thank you so much for the prompt reply. I am just frustrated and confused about how to select the best model for my 7 response variables. Should I use AIC (via backward selection) to select the best-fitting model, or should I just use 3 models for which I chose the explanatory variables (one with only 2, one with 5, and one with 5 explanatory variables + interactions)? Please help me with what I should do here. I've done a lot of reading, but the papers all use different methods and just confused me more. Huhu
@@chloefouilloux One more thing, please. Is it okay to use just one model for my 7 different response variables?
@@rubyanneolbinado95 Hi hi! Okay, let me tackle these one at a time.
(1) One GLM with 7 predictors is probably not going to be great (especially if there are interactions!). These models tend to be *overfit*, which means you are trying to split your data into too many little boxes -- fewer predictors means more explanatory power (check the DHARMa part of the video -- you can check the dispersion of your model with DHARMa too!).
(2) So, how to reduce the number of predictors? Well, you can do the backward selection that you mention, for sure. I don't love to use this method *initially* because it can get rid of the variables you are actually interested in (because stepwise isn't a biologist, you are!). I would first check if any of your predictors are collinear/autocorrelated (ex. mass and length are two variables that are often highly correlated -- when there is too much correlation between predictors, they get mad at each other and wreck your model). Here, you can check the correlation between variables *and choose which one is more biologically reasonable* to keep in the model -- drop the other ones.
(3) If option 2 isn't working out for you, a GLM just might not be the right model for your data! I would start thinking about a PCA or more advanced modelling, like mixed models. Hope this helps :)
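For point (2), a quick sketch of how that correlation check might look in R (df, mass, length, and y are hypothetical names, not from the video):
cor(df$mass, df$length)  # pairwise correlation; |r| above ~0.7 is usually a red flag
car::vif(glm(y ~ mass + length, data = df))  # or variance inflation factors via the car package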
I am the 1000K likes fitted person for this video! R sq = 1K!