Dude, the concepts you teach are new and unheard of. I always get to learn something new watching your videos. Keep it coming!
Taking a machine learning class in a policy school, so you can imagine how badly my professor did when he tried to explain this for 30 minutes in class. Your visuals give me very good intuition. TY!
What would be extremely helpful for a new data scientist and machine learning enthusiast would be a model zoo, so to say: a short summary of the most-used models, what they are good at, what their weaknesses are, and maybe a couple of advanced models that build on the base models. Often I don't have any overview of what I am missing.
This is awesome! Really clear and understandable. I will probably try my hand at that quantile regression NN, it sounds fun.
Awesome! Lemme know how that shakes out. Fun stuff! And thanks
Thank you for another awesome video. Didn't expect this soon though. Keep it up!
The explanation was pretty clear. Thanks!
You could compute lower and upper bounds with good ol' OLS regression as well.
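For example, with statsmodels (a minimal sketch on hypothetical data; `get_prediction().summary_frame()` returns the observation-level interval):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical toy data (stand-in for the video's dataset)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2 * x + rng.normal(0, 2, 200)

X = sm.add_constant(x)
results = sm.OLS(y, X).fit()

# 80% prediction interval (comparable to a 10th/90th quantile band)
frame = results.get_prediction(X).summary_frame(alpha=0.20)
print(frame[["mean", "obs_ci_lower", "obs_ci_upper"]].head())
```

Caveat: the OLS interval assumes roughly normal residuals and has constant width, which is exactly where quantile regression can do better on heteroscedastic or skewed data.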
This was reallyyy good!!!!!
Thanks so much for watching
Hmm... I used to use bootstrapping to get percentile bounds so I could derive confidence intervals, but this seems like another approach.
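If it helps anyone, here's roughly what that bootstrap approach looks like (a sketch on hypothetical data, with sklearn's LinearRegression as a stand-in model):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: X is (n, 1), y is (n,)
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (200, 1))
y = 2 * X.ravel() + rng.normal(0, 2, 200)

# Refit on bootstrap resamples and collect the predictions
boot_preds = []
for _ in range(1000):
    idx = rng.integers(0, len(y), size=len(y))
    model = LinearRegression().fit(X[idx], y[idx])
    boot_preds.append(model.predict(X))

# 10th/90th percentile bounds across the bootstrapped fits
lower, upper = np.percentile(boot_preds, [10, 90], axis=0)
```

One caveat: the percentiles of the bootstrapped fits bound the estimated mean, so this is a confidence interval for the regression line rather than a prediction interval for new observations.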
Great video as usual!!
Hi! Very informative video. Can you please share how to apply quantile regression when there is more than one independent feature (X1, X2, X3, ...)? Thanks!
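Not the author, but nothing changes conceptually: the quantile loss doesn't care how many features you have, you just fit on a wider X. A minimal sketch with LightGBM on hypothetical three-feature data:

```python
import numpy as np
from lightgbm import LGBMRegressor

# Hypothetical data with three independent features X1, X2, X3
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 1, 500)

# One model per quantile; only the shape of X changed vs. the one-feature case
models = {
    q: LGBMRegressor(objective="quantile", alpha=q).fit(X, y)
    for q in (0.1, 0.5, 0.9)
}
low, med, high = (models[q].predict(X) for q in (0.1, 0.5, 0.9))
```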
Thanks for the Video!
Is it also possible to use it in combination with dummy variables?
Great video, thanks!
Regarding the neural network that can return 3 values at once (low, median, and high): besides adapting the loss function, how would you label the 3 values for each data point? Since we only have one label per point, would you duplicate that label?
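Not the author, but as far as I understand you don't duplicate anything: all three outputs are compared against the same single label, each with its own quantile weight inside the loss. A hedged PyTorch sketch of such a loss:

```python
import torch

QUANTILES = torch.tensor([0.1, 0.5, 0.9])

def multi_quantile_loss(preds, target):
    # preds: (batch, 3) network outputs for the 10th/50th/90th percentiles
    # target: (batch,) the single observed label, reused for all three outputs
    residual = target.unsqueeze(1) - preds  # (batch, 3) via broadcasting
    loss = torch.maximum(QUANTILES * residual, (QUANTILES - 1) * residual)
    return loss.mean()
```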
I'm wondering, shouldn't the output of the model form a line instead of scattered points?
Like... what the model does is basically identify each quantile and use it as a prediction without any type of smoothing (thus, it would become a line in the graph)?
Hi, I also had the same question, but I think the reason is that he used a LightGBM regressor instead of OLS to get the predicted values. The predictions given by a LightGBM model do not fall on a straight line, i.e. the predictions are non-linear, unlike those of an OLS regressor.
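You can see this directly: a tree ensemble's prediction is piecewise constant in the feature, so it takes relatively few distinct values, while OLS traces a straight line. A small sketch on hypothetical one-feature data:

```python
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, (300, 1))
y = 2 * x.ravel() + rng.normal(0, 1, 300)

grid = np.linspace(0, 10, 200).reshape(-1, 1)
lgbm_pred = LGBMRegressor().fit(x, y).predict(grid)    # piecewise-constant steps
ols_pred = LinearRegression().fit(x, y).predict(grid)  # a straight line

# The tree model produces far fewer distinct predicted values
print(len(np.unique(lgbm_pred.round(6))), len(np.unique(ols_pred.round(6))))
```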
Great interpretation! Thank you!
Thanks a ton for watching :)
Amazing! Thank you
Welcome:)
Nice explanation, thanks!
Excellent video!
Thank youu!
well explained, thank you for the video!
Thank you for the video. I have a question. You have fit the LGBMRegressor with default hyperparameter values. How would one tune these hyperparameters and which metric can be used to get the best models?
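Not the author, but one common approach is to cross-validate with the pinball (quantile) loss at the same alpha you're fitting, since that's exactly what the model optimizes. A hedged sketch with scikit-learn's GridSearchCV (the param grid here is just an illustration):

```python
from lightgbm import LGBMRegressor
from sklearn.metrics import make_scorer, mean_pinball_loss
from sklearn.model_selection import GridSearchCV

alpha = 0.9  # the quantile being fit

# Lower pinball loss is better, hence greater_is_better=False
scorer = make_scorer(mean_pinball_loss, alpha=alpha, greater_is_better=False)

search = GridSearchCV(
    LGBMRegressor(objective="quantile", alpha=alpha),
    param_grid={"num_leaves": [15, 31, 63], "learning_rate": [0.05, 0.1]},
    scoring=scorer,
    cv=5,
)
# search.fit(X, y)  # X, y: your training data
```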
Great video, but I am a little confused. How is using quantile regression fundamentally different than using linear regression and giving both the predicted value from the linear regression model + point prediction intervals for each prediction?
I think the traditional method requires normality of residuals to estimate the prediction interval, while quantile regression lets the model learn the quantile prediction through a specific loss function (an L1-style loss that gives more penalty to errors on the wrong side of the percentile). Therefore, quantile regression does not require the linear regression assumptions (at least not the normality-of-residuals part). This is just my understanding of the concept.
@@zxynj, yes. On top of that, quantile regression offers intervals of confidence in its predictions, which mean-based predictions don't.
@@zxynj Great explanation! I would like to add a small example: I think skewed distributions would make a great use case for quantile regression, since it does not require the data to be normally distributed.
Thank you for the video. I have a question. How can I compute the quantiles for a specific p using the Rankit-Cleveland method? It is used to estimate value at risk with quantile regression, and I am kind of stuck. Please help!
Thank you!
Welcome!
love you
I'm sorry, but the math behind it is still a riddle to me. Did you say that if we estimate the 10th percentile and the observed value is lower than the predicted value, we want to penalize that heavily, so we take 0.9*|residual|, since only about 10% of points should fall below the prediction? But if the observed value is higher than the predicted value, this is more "expected", and thus we only penalize it by 0.1*|residual|?
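If it helps, here's the loss in a few lines, with the 10th-percentile case worked numerically (tau = 0.1):

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    # Under-predictions (observed above) are weighted by tau,
    # over-predictions (observed below) by (1 - tau).
    residual = y_true - y_pred
    return np.mean(np.maximum(tau * residual, (tau - 1) * residual))

# 10th percentile (tau = 0.1), residual magnitude 2 in both cases:
print(pinball_loss(np.array([10.0]), np.array([8.0]), 0.1))  # observed above -> 0.1 * 2 = 0.2
print(pinball_loss(np.array([6.0]), np.array([8.0]), 0.1))   # observed below -> 0.9 * 2 = 1.8
```

The heavy 0.9 weight on points falling below the prediction is what forces only about 10% of the data to end up below it.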
not a good explanation