Hi Brandon! Thank you so much for making such a detailed playlist on statistics. I finished the entire statistics playlist and now I am much more confident about statistics. Thank you so, so much!!
Hi Brandon, I was not very good at statistics before, but your lectures on linear regression and ANOVA are so clear and easy to understand. I watched all these videos before my Advanced Statistics course, and they really helped me a lot; I finally got 9/10 in AS, which I never expected.
You are a blessing to mankind. I love you Sir.
Also, great series. I really enjoyed this one and followed along in RStudio, trying out what you're suggesting here. The visualizations showing how the data are stretched or compressed to normalize them somewhat were quite helpful and gave insight into why these particular functions are used. Very cool. Cheers!
Hey Brandon, thank you so much for the video. It was clear and concise and easy to understand.
Hey Julio, couldn't help but notice your beautiful profile pic. Fantastic design! 👏
Namaste Brandon Jee :-) thank you so much for all your incredible tutorials.
Hi, can you please let me know what formula you applied for B_T and LSTAT_t?
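Not the author here, but in case it helps: transformed columns like these are usually built with a simple log or power transform chosen to counter the skew of each variable. A hypothetical reconstruction in R, assuming the MASS::Boston data; these are my guesses, not necessarily the formulas used in the video:

    library(MASS)  # assumption: the video's data matches MASS::Boston

    # lstat is right-skewed, so a log transform is a common choice:
    Boston$LSTAT_T <- log(Boston$lstat)
    # black is left-skewed, so squaring is a common choice:
    Boston$B_T <- Boston$black^2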
Thanks, dude. Helpful explanation.
Such a good explanation. Thanks a lot
So when choosing predictors to include in a model with a technique like best subset or forward/backward stepwise selection, does it make more sense to do the transformations first and then perform the predictor selection, or to pick the predictors first and then see whether transformations improve the model further? The latter seems right to me, as I notice that predictor significance doesn't change much when a transformation is applied. In the Boston data, for example, age often has a large p-value in many model variants. It correlates pretty strongly with lstat, so refining the model's predictors first and then looking for opportunities to tweak it with transformations seems like the way to go, generally speaking.
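For what it's worth, here is a rough sketch of that "select first, transform after" order in R, assuming MASS::Boston as the data (the column names in the video's spreadsheet may differ):

    library(MASS)  # Boston data and stepAIC()

    # Step 1: stepwise predictor selection on the untransformed data
    full <- lm(medv ~ ., data = Boston)
    base <- stepAIC(full, direction = "both", trace = FALSE)
    summary(base)

    # Step 2: test whether transforming a retained predictor helps,
    # e.g. swapping lstat for log(lstat), then compare on AIC
    trans <- update(base, . ~ . - lstat + log(lstat))
    AIC(base, trans)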
love the transformation hammer! 😅🔨🔨🔨
❤ Very clear!
Thanks Brandon for the fabulous video!! I am trying to use log10 to transform data toward a normal distribution so it is ready for a one-way repeated-measures ANOVA. Why can't we transform values less than 1? I understand that log10 maps those values to negatives, but it actually helped the data pass the normality test, and the ANOVA and post-hoc tests then detected a significant difference as well. I also watched another video saying the assumptions for a log10 transform are that the original data 1) contain no zeros, 2) contain no negative values, and 3) are right-skewed. I wonder which is correct. I would really appreciate it if you could explain further. Thank you so much!
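To illustrate the values-below-1 point: log10 is perfectly well defined on (0, 1); the outputs are merely negative, which is harmless. It is zeros and negative inputs that are undefined. A minimal R example (the +1 shift at the end is one common ad hoc fix, not a rule):

    x <- c(0.2, 0.5, 1, 2, 10)
    log10(x)      # -0.699 -0.301 0.000 0.301 1.000; negatives are fine

    log10(0)      # -Inf: zeros are the real problem, not values in (0, 1)

    y <- c(0, 0.3, 4)
    log10(y + 1)  # shifting before the log handles zeros; the constant is a choice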
Love that. Hello and namaste!!
Are there any negative consequences of performing a log transform on only one variable in a dataset? Or log-transforming one variable and square-root-transforming another? Can it 'ruin' prediction, or does it not matter as long as they are normally distributed? (For datasets with multiple variables.)
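A small sketch of the mixed-transformations case in R, again assuming MASS::Boston (which variables to transform is my illustrative choice, not a recommendation). Nothing prevents transforming only some predictors; the model simply becomes linear in the transformed scales, and coefficients must be interpreted on those scales:

    library(MASS)

    # log on one predictor, square root on another, a third left untouched
    fit <- lm(medv ~ log(lstat) + sqrt(crim) + rm, data = Boston)
    summary(fit)

    # For inference it is normality of the residuals, not of each predictor,
    # that matters; check it after fitting:
    qqnorm(resid(fit)); qqline(resid(fit))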
Thank you so much. This is very helpful!
good video
@Brandon Foltz Hi Brandon, as always I'm enjoying your series… may I ask you to make a series about SEM and path analysis? Thanks a lot!
nice
Smashing
Namaste