Quantile Regression - EXPLAINED!

  • Published 11 Dec 2024

COMMENTS • 36

  • @rishisharma8311
    @rishisharma8311 3 years ago +11

    Dude, the concepts you teach are new and unheard of. I always get to learn something new watching your videos. Keep it coming!

  • @bellahuang8522
    @bellahuang8522 10 months ago

    I'm taking a machine learning class in a policy school, so you can imagine how bad my professor is; he spent 30 minutes in class trying to explain this. Your visuals give me very good intuition. TY!

  • @90benj
    @90benj 3 years ago +17

    What would be extremely helpful for new data scientists and machine learning enthusiasts would be a model zoo, so to speak: a short summary of the most-used models, what they are good at, what their weaknesses are, and maybe a couple of advanced models built on those base models. Often I don't have an overview of what I'm missing.

  • @90benj
    @90benj 3 years ago +2

    This is awesome! Really well explained and understandable. I will probably try that quantile regressor NN myself; it sounds fun.

    • @CodeEmporium
      @CodeEmporium  3 years ago +1

      Awesome! Lemme know how that shakes out. Fun stuff! And thanks

  • @PD-vt9fe
    @PD-vt9fe 3 years ago +1

    Thank you for another awesome video. I didn't expect one this soon, though. Keep it up!

  • @shambhaviaggarwal9977
    @shambhaviaggarwal9977 1 year ago

    The explanation was pretty clear. Thanks!

  • @borisn.1346
    @borisn.1346 3 years ago +1

    You could compute lower and upper bound with good ol' OLS regression as well.
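
A minimal sketch of that OLS approach, assuming statsmodels and made-up data (not from the video). Note that this interval rests on the usual normality and homoscedasticity assumptions, which quantile regression avoids:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 200)
    y = 2 * x + rng.normal(0, 1, size=200)   # toy data with normal noise

    X = sm.add_constant(x)                   # add an intercept column
    ols = sm.OLS(y, X).fit()

    # 80% prediction interval, i.e. roughly a 10th-90th percentile band.
    frame = ols.get_prediction(X).summary_frame(alpha=0.2)
    lower, upper = frame["obs_ci_lower"], frame["obs_ci_upper"]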

  • @mansikumari4954
    @mansikumari4954 1 month ago +1

    This was really good!

  • @brokecoder
    @brokecoder 2 years ago +1

    Hmm... I used to use bootstrapping to get percentile bounds so that I could derive confidence intervals, but this seems like another approach.
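
A rough sketch of that bootstrap idea, assuming scikit-learn's LinearRegression as a stand-in base model. One caveat: percentile bounds over refitted models capture uncertainty in the fitted mean, whereas quantile regression targets the spread of y itself:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def bootstrap_bounds(X, y, X_new, n_boot=1000, lo=10, hi=90, seed=0):
        """Percentile bounds on predictions from models refit on resampled rows."""
        rng = np.random.default_rng(seed)
        preds = np.empty((n_boot, len(X_new)))
        for b in range(n_boot):
            idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
            preds[b] = LinearRegression().fit(X[idx], y[idx]).predict(X_new)
        return np.percentile(preds, [lo, hi], axis=0)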

  • @kabeerjaffri4015
    @kabeerjaffri4015 3 years ago +1

    Great video as usual!!

  • @swatisingh4041
    @swatisingh4041 2 years ago +2

    Hi! Very informative video. Can you please share how to apply quantile regression when there is more than one independent feature (X1, X2, X3, ...)? Thanks!
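
Nothing changes conceptually with several features; the design matrix just gets more columns. A minimal sketch using statsmodels' QuantReg on invented data:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                       # features X1, X2, X3
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=500)

    Xc = sm.add_constant(X)
    for q in (0.1, 0.5, 0.9):                           # one fit per quantile
        print(q, sm.QuantReg(y, Xc).fit(q=q).params.round(2))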

  • @eliaskonig2526
    @eliaskonig2526 1 year ago

    Thanks for the video!
    Is it also possible to use it in combination with dummy variables?
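
For what it's worth, dummy-coded categoricals are just extra columns. A small sketch with statsmodels' formula interface, which dummy-codes `group` automatically (the data is invented):

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "y":     [3.1, 4.0, 2.2, 5.5, 4.8, 3.9, 2.8, 5.1],
        "x":     [1.0, 2.0, 0.5, 3.0, 2.5, 1.8, 0.9, 2.9],
        "group": ["a", "b", "a", "b", "b", "a", "a", "b"],  # categorical feature
    })

    median_fit = smf.quantreg("y ~ x + C(group)", df).fit(q=0.5)
    print(median_fit.params)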

  • @remimoise8908
    @remimoise8908 3 years ago +2

    Great video, thanks!
    Regarding the neural network that can return 3 values at once (low, median, and high): besides adapting the loss function, how would you label the 3 values for each data point? Since we only have one label per point, would you duplicate that label?
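
A minimal sketch of one way this could work, assuming PyTorch (this is not the video's code). The single label is indeed reused: broadcasting compares it against all three outputs, each weighted by its own quantile:

    import torch
    import torch.nn as nn

    QUANTILES = torch.tensor([0.1, 0.5, 0.9])
    net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 3))

    def pinball_loss(pred, y):
        # pred: (batch, 3); y: (batch, 1) -- one label broadcast to all 3 outputs
        err = y - pred
        return torch.mean(torch.maximum(QUANTILES * err, (QUANTILES - 1) * err))

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    x = torch.randn(256, 1)
    y = 2 * x + torch.randn(256, 1) * (1 + x.abs())     # heteroscedastic toy data
    for _ in range(500):
        opt.zero_grad()
        pinball_loss(net(x), y).backward()
        opt.step()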

  • @cientivic7341
    @cientivic7341 2 years ago +1

    I'm wondering, shouldn't the output of the model form a line instead of scattered points?
    Like... doesn't the model basically identify each quantile and use it as a prediction without any kind of smoothing (so it would become a line in the graph)?

    • @andytucker9991
      @andytucker9991 1 year ago

      Hi, I had the same question, but I think the reason is that he used a LightGBM regressor instead of OLS to get the predicted values. The predictions from a LightGBM model do not fall on a straight line, i.e. they are non-linear, unlike those of an OLS regressor.
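
For context, the LightGBM quantile setup looks roughly like this (hyperparameters and data are illustrative, not the video's). Tree ensembles produce piecewise-constant predictions, which is why the plotted quantiles look like scattered steps rather than straight lines:

    import numpy as np
    from lightgbm import LGBMRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(1000, 1))
    y = 3 * np.sin(X[:, 0]) + rng.normal(0, 1, size=1000)

    # One model per quantile; `alpha` picks the target quantile.
    models = {q: LGBMRegressor(objective="quantile", alpha=q).fit(X, y)
              for q in (0.1, 0.5, 0.9)}

    X_grid = np.linspace(0, 10, 200).reshape(-1, 1)
    bands = {q: m.predict(X_grid) for q, m in models.items()}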

  • @ziangxu7751
    @ziangxu7751 3 years ago

    Great interpretation! Thank you!

  • @charan7233
    @charan7233 2 years ago

    Amazing! Thank you

  • @НикитаБуров-ъ6р
    @НикитаБуров-ъ6р 11 months ago

    Nice explanation, thanks!

  • @vladislavlevitin
    @vladislavlevitin 3 years ago

    Excellent video!

  • @emiya_muljomdaoo
    @emiya_muljomdaoo 3 years ago

    Well explained; thank you for the video!

  • @patrickduhirwenzivugira4729
    @patrickduhirwenzivugira4729 2 years ago

    Thank you for the video. I have a question: you fit the LGBMRegressor with default hyperparameter values. How would one tune these hyperparameters, and which metric should be used to pick the best models?
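
One plausible setup (an assumption, not the video's): cross-validate with scikit-learn and score candidates with mean_pinball_loss at the same quantile the model targets:

    from lightgbm import LGBMRegressor
    from sklearn.metrics import make_scorer, mean_pinball_loss
    from sklearn.model_selection import GridSearchCV

    q = 0.9
    scorer = make_scorer(mean_pinball_loss, alpha=q, greater_is_better=False)

    search = GridSearchCV(
        LGBMRegressor(objective="quantile", alpha=q),
        param_grid={"num_leaves": [15, 31, 63], "learning_rate": [0.05, 0.1]},
        scoring=scorer,
        cv=5,
    )
    # search.fit(X, y) with the training data; search.best_params_ holds the winner.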

  • @shnibbydwhale
    @shnibbydwhale 3 years ago +2

    Great video, but I am a little confused. How is using quantile regression fundamentally different from using linear regression and reporting both the predicted value from the linear regression model and a prediction interval for each prediction?

    • @zxynj
      @zxynj 2 years ago

      I think the traditional method requires normality of the residuals to estimate the prediction interval, while quantile regression lets the model learn the quantile prediction through a specific loss function (an L1-style loss that penalizes errors on the wrong side of the target percentile more heavily; see the sketch after this thread). So quantile regression does not require the linear regression assumptions, at least not normality of the residuals. This is just my understanding of the concept.

    • @snehanshusaha890
      @snehanshusaha890 1 year ago

      @@zxynj Yes. On top of that, quantiles offer confidence intervals on the predictions, which mean-based predictions don't.

    • @mansoocho2351
      @mansoocho2351 10 months ago

      @@zxynj Great explanation! I would like to add a small example: quantile regression is a great fit for skewed distributions, since it does not require the data to be normally distributed.
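
The "specific loss function" discussed in this thread is the pinball loss. A minimal NumPy sketch, with no distributional assumption anywhere:

    import numpy as np

    def pinball_loss(y_true, y_pred, tau):
        """Asymmetric L1: weight tau on errors above the prediction, 1 - tau below."""
        err = y_true - y_pred
        return np.mean(np.maximum(tau * err, (tau - 1) * err))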

  • @Im-Assmaa
    @Im-Assmaa 2 years ago

    Thank you for the video. I have a question: how can I compute the quantiles for a specific p using the Rankit-Cleveland method? It is used to estimate value at risk with quantile regression, and I am kind of stuck. Please help!

  • @TK-mv6sq
    @TK-mv6sq 2 years ago

    Thank you!

  • @jayjhaveri1906
    @jayjhaveri1906 8 months ago

    love you

  • @johannaw2031
    @johannaw2031 1 year ago +2

    I'm sorry, but the math behind it is still a riddle to me. Did you say that if we estimate the 10th percentile and the observed value is lower than the predicted value, we penalize that heavily and take 0.9 * |residual|? But if the observed value is higher than the predicted value, this is more "expected" (most points should lie above the 10th percentile), and thus we only penalize it by 0.1 * |residual|?
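
To pin down the asymmetry with numbers: for target quantile tau, the pinball loss is

    L_tau(y, y_hat) = tau * (y - y_hat)          if y >= y_hat
                      (1 - tau) * (y_hat - y)    if y <  y_hat

so with tau = 0.1 and a residual of size 5, an observation 5 above the prediction costs 0.1 * 5 = 0.5, while one 5 below costs 0.9 * 5 = 4.5. Minimizing this pushes the fit down to where only about 10% of points fall below it.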

  • @amarpreet3519
    @amarpreet3519 1 year ago

    Not a good explanation.