Gradient Boost Part 3 (of 4): Classification

  • Published 16 Jun 2024
  • This is Part 3 in our series on Gradient Boost. At long last, we are showing how it can be used for classification. This video focuses on the main ideas behind this technique. The next video in this series will focus more on the math and how it works with the underlying algorithm.
    This StatQuest assumes that you have already watched Part 1:
    • Gradient Boost Part 1 ...
    ...and it also assumes that you understand Logistic Regression pretty well. Here are the links for...
    A general overview of Logistic Regression: • StatQuest: Logistic Re...
    how to interpret the coefficients: • Logistic Regression De...
    and how to estimate the coefficients: • Logistic Regression De...
    Lastly, if you want to learn more about using different probability thresholds for classification, check out the StatQuest on ROC and AUC: • THIS VIDEO HAS BEEN UP...
    For a complete index of all the StatQuest videos, check out:
    statquest.org/video-index/
    This StatQuest is based on the following sources:
    A 1999 manuscript by Jerome Friedman that introduced Stochastic Gradient Boost: statweb.stanford.edu/~jhf/ftp...
    The Wikipedia article on Gradient Boosting: en.wikipedia.org/wiki/Gradien...
    The scikit-learn implementation of Gradient Boosting: scikit-learn.org/stable/modul...
    If you'd like to support StatQuest, please consider...
    Buying The StatQuest Illustrated Guide to Machine Learning!!!
    PDF - statquest.gumroad.com/l/wvtmc
    Paperback - www.amazon.com/dp/B09ZCKR4H6
    Kindle eBook - www.amazon.com/dp/B09ZG79HXC
    Patreon: / statquest
    ...or...
    UA-cam Membership: / @statquest
    ...a cool StatQuest t-shirt or sweatshirt:
    shop.spreadshirt.com/statques...
    ...buying one or two of my songs (or go large and get a whole album!)
    joshuastarmer.bandcamp.com/
    ...or just donating to StatQuest!
    www.paypal.me/statquest
    Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on twitter:
    / joshuastarmer
    #statquest #gradientboost

COMMENTS • 517

  • @statquest
    @statquest  4 years ago +26

    NOTE: Gradient Boost traditionally uses Regression Trees. If you don't already know about Regression Trees, check out the 'Quest: ua-cam.com/video/g9c66TUylZ4/v-deo.html Also NOTE: In Statistics, Machine Learning and almost all programming languages, the default base for the log function, log(), is log base 'e' and that is what I use here.
    Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/
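
    A quick check of that second note, since it trips people up: in Python, as in most languages, log() is the natural log (base e) unless another base is requested explicitly. A minimal sketch:

    ```python
    import math
    import numpy as np

    # both default to the natural log (base e), the convention used in the video
    print(math.log(math.e))   # 1.0
    print(np.log(np.e))       # 1.0
    print(math.log10(100))    # 2.0 -- other bases must be asked for explicitly
    ```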

    • @parijatkumar6866
      @parijatkumar6866 3 years ago

      I am a bit confused. The first log that you took, log(4/2): was that to some base other than e? Because e^(log(x)) = x for log to the base e,
      and hence the probability would simply be 2/(1+2) = 2/3 = number of Yes / total observations = 4/6 = 2/3.
      Please let me know if this is correct.

    • @statquest
      @statquest  3 years ago +2

      @@parijatkumar6866 The log is to the base 'e', and yes, e^(log(x)) = x. However, sometimes we don't have x, we just have the log(x), as is illustrated at 9:45. So, rather than use one formula at one point in the video, and another in another part of the video, I believe I can do a better job explaining the concepts if I am consistent.
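
      To make the arithmetic in this exchange concrete (using the 4 Yes / 2 No counts mentioned above), a minimal sketch:

      ```python
      import math

      log_odds = math.log(4 / 2)                          # initial prediction, log(odds) ~ 0.69
      p = math.exp(log_odds) / (1 + math.exp(log_odds))   # convert back to a probability
      print(round(log_odds, 2), round(p, 2))              # 0.69 0.67, i.e. 4/6, the fraction of Yes
      ```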

    • @jonelleyu1895
      @jonelleyu1895 1 year ago

      For Gradient Boost for CLASSIFICATION, because we convert the categorical targets (No or Yes) to probabilities (0-1) and the residuals are calculated from the probabilities, when we build a tree we still use a REGRESSION tree, which uses the sum of squared residuals to choose splits. Is that correct? Thank you.

    • @statquest
      @statquest  1 year ago +1

      @@jonelleyu1895 Yes, even for classification, the target variable is continuous (probabilities instead of Yes/No), and thus, we use regression trees.
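
      One way to see this in practice is to inspect a fitted model in scikit-learn (a minimal sketch; the toy data is made up):

      ```python
      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier

      X, y = make_classification(n_samples=100, random_state=0)
      gbc = GradientBoostingClassifier(n_estimators=5, max_depth=2).fit(X, y)

      # even though this is a classifier, every fitted tree is a regression tree,
      # because each tree is fit to continuous (pseudo-)residuals
      print(type(gbc.estimators_[0, 0]).__name__)   # DecisionTreeRegressor
      ```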

  • @weiyang2116
    @weiyang2116 3 years ago +159

    I cannot imagine the amount of time and effort used to create these videos. Thanks!

    • @statquest
      @statquest  3 years ago +27

      Thank you! Yes, I spent a long time working on these videos.

  • @sameepshah3835
    @sameepshah3835 6 days ago +1

    Thank you so much Josh, I watch 2-3 videos every day from your machine learning playlist and it just makes my day. Also the fact that you reply to most of the people in the comments section is amazing. Hats off. I only wish the best for you, genuinely.

  • @primozpogacar4521
    @primozpogacar4521 3 years ago +22

    Love these videos! You deserve a Nobel prize for simplifying machine learning explanations!

  • @dhruvjain4774
    @dhruvjain4774 4 years ago +10

    You really explain complicated things in a very easy and catchy way.
    I like the way you BAM

  • @xinjietang953
    @xinjietang953 9 months ago +2

    Thanks for all you've done. Your videos are a first-class and precise learning source for me.

  • @jagunaiesec
    @jagunaiesec 4 years ago +34

    The best explanation I've seen so far. BAM! Catchy style as well ;)

    • @statquest
      @statquest  4 years ago

      Thank you! :)

    • @arunavsaikia2678
      @arunavsaikia2678 4 years ago +1

      @@statquest Are the individual trees that try to predict the residuals regression trees?

    • @statquest
      @statquest  4 years ago

      @@arunavsaikia2678 Yes, they are regression trees.

  • @Valis67
    @Valis67 2 years ago +2

    That's an excellent lesson and a unique sense of humor. Thank you a lot for the effort in producing these videos!

  • @debsicus
    @debsicus 3 years ago +52

    This content shouldn’t be free Josh. So amazing Thank You 👏🏽

    • @statquest
      @statquest  3 years ago +2

      Thank you very much! :)

  • @igormishurov1876
    @igormishurov1876 4 years ago +7

    I will recommend the channel to everyone studying machine learning :) Thanks a lot, Josh!

  • @OgreKev
    @OgreKev 4 years ago +52

    I'm enjoying the thorough and simplified explanations as well as the embellishments, but I've had to set the speed to 125% or 150% so my ADD brain can follow along.
    Same enjoyment, but higher bpm (bams per minute)

  • @asdf-dh8ft
    @asdf-dh8ft 3 years ago +2

    Thank you very much! Your step-by-step explanation is very helpful. It gives people with poor abstract thinking, like me, a chance to understand all the math behind these algorithms.

    • @statquest
      @statquest  3 years ago +1

      Glad it was helpful!

  • @juliocardenas4485
    @juliocardenas4485 1 year ago +1

    Yet again. Thank you for making concepts understandable and applicable

  • @tymothylim6550
    @tymothylim6550 3 years ago +2

    Thank you Josh for another exciting video! It was very helpful, especially with the step-by-step explanations!

    • @statquest
      @statquest  3 years ago +1

      Hooray! I'm glad you appreciate my technique.

  • @cmfrtblynmb02
    @cmfrtblynmb02 2 years ago +1

    Finally a video that shows the process of gradient boosting. Thanks a lot.

  • @soujanyapm9595
    @soujanyapm9595 3 years ago +1

    Amazing illustration of a complicated concept. This is the best explanation. Thank you so much for all your efforts in making us understand the concepts very well !!! Mega BAM !!

  • @lemauhieu3037
    @lemauhieu3037 2 years ago +2

    I'm new to ML and this content is gold. Thank you so much for the effort!

  • @umeshjoshi5059
    @umeshjoshi5059 4 years ago +2

    Love these videos. Starting to understand the concepts. Thank you Josh.

  • @user-gr1qk3gu4j
    @user-gr1qk3gu4j 5 years ago +1

    Very simple and practical lesson. I created a worked example based on this with no problems.
    It might be obvious, though it isn't explained here, that the initial odds should be greater than 1; put another way, the odds of the rarer event should be closer to zero.
    Glad to see this video arrived just as I started to take an interest in this topic.
    I guess it will become a "bestseller"

  • @narasimhakamath7429
    @narasimhakamath7429 3 years ago +4

    I wish I had a teacher like Josh! Josh, you are the best! BAAAM!

  • @rishabhkumar-qs3jb
    @rishabhkumar-qs3jb 3 years ago +1

    Fantastic video. I was confused about gradient boosting; after watching all the parts on this technique from this channel, I understand it very well :)

  • @gonzaloferreirovolpi1237
    @gonzaloferreirovolpi1237 5 years ago +1

    Already waiting for Part 4...thanks as always Josh!

    • @statquest
      @statquest  5 years ago +1

      I'm super excited about Part 4, and it should be out in a week and a half. This week got a little busy with work, but I'm doing the best that I can.

  • @marjaw6913
    @marjaw6913 2 years ago +1

    Thank you so much for this series, I understand everything thanks to you!

  • @amitv.bansal178
    @amitv.bansal178 2 years ago +1

    Absolutely wonderful. You are my guru; a true salute to you.

  • @dankmemer9563
    @dankmemer9563 3 years ago +2

    Thanks for the video! I’ve been going on a statquest marathon for my job and your videos have been really helpful. Also “they’re eating her...and then they’re going eat me!....OH MY GODDDDDDDDDDDDDDD!!!!!!”

  • @mayankamble2588
    @mayankamble2588 2 months ago +1

    This is amazing. This is the nth time I have come back to this video!

  • @rrrprogram8667
    @rrrprogram8667 5 years ago +2

    I have beeeeennnn waiting for this video..... Awesome job Joshh

  • @yulinliu850
    @yulinliu850 5 years ago +1

    Excellent as always! Thanks Josh!

  • @sidagarwal43
    @sidagarwal43 3 years ago +1

    Amazing and Simple as always. Thank You

    • @statquest
      @statquest  3 years ago

      Thank you very much! :)

  • @siyizheng8560
    @siyizheng8560 4 years ago +2

    All your videos are super amazing!!!!

  • @SergioPolimante
    @SergioPolimante 2 years ago +1

    Man, your videos are just super good, really.

  • @tumul1474
    @tumul1474 5 years ago +2

    amazing as always !!

  • @ayahmamdouh8445
    @ayahmamdouh8445 2 years ago +1

    Hi Josh, great video.
    Thank you so much for your great effort.

  • @Just-Tom
    @Just-Tom 3 years ago +3

    I was wrong! All your songs are great!!!
    Quadruple BAM!

  • @AmelZulji
    @AmelZulji 5 years ago +1

    First of all, thank you for such great explanations. Great job!
    It would be great if you could make a video about the Seurat package, which is a very powerful tool for single-cell RNA analysis.

  • @user-ut3sy6hy4p
    @user-ut3sy6hy4p 3 months ago +1

    Thanks a lot, your videos helped me so much, please keep going.

  • @user-be1hp3xo1b
    @user-be1hp3xo1b 1 year ago +1

    Great video! Thank you!

  • @ElderScrolls7
    @ElderScrolls7 4 years ago +1

    Another great lecture by Josh Starmer.

    • @statquest
      @statquest  4 years ago +1

      Hooray! :)

    • @ElderScrolls7
      @ElderScrolls7 4 years ago +1

      @@statquest I actually have a draft paper (not submitted yet) and included you in the acknowledgements if that is ok with you. I will be very happy to send it to you when we have a version out.

    • @statquest
      @statquest  4 years ago +1

      @@ElderScrolls7 Wow! that's awesome! Yes, please send it to me. You can do that by contacting me first through my website: statquest.org/contact/

    • @ElderScrolls7
      @ElderScrolls7 4 years ago +1

      @@statquest I will!

  • @joeroc4622
    @joeroc4622 4 years ago +1

    Thank you very much for sharing! :)

  • @abhilashsharma1992
    @abhilashsharma1992 4 years ago +5

    Best original song ever in the start!

    • @statquest
      @statquest  4 years ago +2

      Yes! This is a good one. :)

  • @abissiniaedu6011
    @abissiniaedu6011 1 year ago +1

    You are very helpful, thank you!

  • @user-tk6bz6lw4e
    @user-tk6bz6lw4e 4 years ago +1

    Thank you for the good videos!

  • @abdelhadi6022
    @abdelhadi6022 5 years ago +1

    Thank you, awesome video

  • @yjj.7673
    @yjj.7673 4 years ago +1

    This is great!!!

  • @timothygorden7689
    @timothygorden7689 1 year ago +1

    absolute gold

  • @abyss-kb8qy
    @abyss-kb8qy 4 years ago +2

    God bless you, thank you so so so much.

  • @rvstats_ES
    @rvstats_ES 4 years ago +1

    Congrats!! Nice video! Ultra bam!!

    • @statquest
      @statquest  4 years ago

      Thank you very much! :)

  • @CC-um5mh
    @CC-um5mh 5 years ago +1

    This is absolutely a great video. Will you cover why we can use residual/(p*(1-p)) as the log of odds in your next video? Very excited for the part 4!!

    • @statquest
      @statquest  5 years ago +1

      Yes! The derivation is pretty long - lots of little steps, but I'll work it out entirely in the next video. I'm really excited about it as well. It should be out in a little over a week.
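
      For reference, the formula being asked about is the output value of each leaf: the sum of the residuals in that leaf divided by the sum of p*(1-p) over the same rows, where p is each row's previously predicted probability:

      output value = sum(y_i - p_i) / sum(p_i * (1 - p_i)),  summed over the rows in the leaf

      Part 4 derives where this comes from.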

  • @anusrivastava5373
    @anusrivastava5373 3 years ago +1

    Simply Awesome!!!!!!

  • @parthsarthijoshi6301
    @parthsarthijoshi6301 3 years ago +1

    THIS IS A BAMTABULOUS VIDEO !!!!!!

  • @user-qu7sh1kb1e
    @user-qu7sh1kb1e 4 years ago +1

    very detailed and convincing

  • @Mars7822
    @Mars7822 1 year ago +1

    Super Cool to understand and study, Keep Up master..........

  • @suryan5934
    @suryan5934 3 years ago +4

    Now I want to watch Troll 2

    • @statquest
      @statquest  3 years ago +1

      :)

    • @AdityaSingh-lf7oe
      @AdityaSingh-lf7oe 3 years ago +2

      Somewhere around the 15 min mark I made up my mind to search this movie on google

    • @suryan5934
      @suryan5934 3 years ago

      @@AdityaSingh-lf7oe bam

  • @patrickyoung5257
    @patrickyoung5257 4 years ago +5

    You save me from the abstractness of machine learning.

  • @rrrprogram8667
    @rrrprogram8667 5 years ago +2

    So finallyyyy the MEGAAAA BAMMMMM is included.... Awesomeee

    • @statquest
      @statquest  5 years ago +2

      Yes! I was hoping you would spot that! I did it just for you. :)

    • @rrrprogram8667
      @rrrprogram8667 5 years ago +1

      @@statquest I was in the office when I first wrote the comment earlier, so I couldn't see the full video...

  • @vinayakgaikar154
    @vinayakgaikar154 1 year ago +1

    Nice explanation and easy to understand, thanks bro

  • @sid9426
    @sid9426 4 years ago +1

    Hey Josh,
    I really enjoy your teaching. Please make some videos on XGBoost as well.

    • @statquest
      @statquest  4 years ago

      XGBoost Part 1, Regression: ua-cam.com/video/OtD8wVaFm6E/v-deo.html
      Part 2 Classification: ua-cam.com/video/8b1JEDvenQU/v-deo.html
      Part 3 Details: ua-cam.com/video/ZVFeW798-2I/v-deo.html
      Part 4, Crazy Cool Optimizations: ua-cam.com/video/oRrKeUCEbq8/v-deo.html

  • @HamidNourashraf
    @HamidNourashraf 7 months ago +1

    the best video for GBT

  • @siddharthvm8262
    @siddharthvm8262 2 years ago +1

    Bloody awesome 🔥

  • @61_shivangbhardwaj46
    @61_shivangbhardwaj46 3 years ago +1

    You are amazing sir! 😊 Great content

  • @rohitbansal3032
    @rohitbansal3032 3 years ago +1

    You are awesome !!

  • @vijaykumarlokhande1607
    @vijaykumarlokhande1607 2 years ago

    I salute your hard work, and mine too

  • @TheAbhiporwal
    @TheAbhiporwal 5 years ago +2

    Superb video without a doubt!!!
    One query Josh: do you have any plans to cover "LightGBM" in the near future?

  • @vans4lyf2013
    @vans4lyf2013 3 years ago +7

    I wish I could give you the money that I pay in tuition to my university. It's ridiculous that people who are paid so much can't make the topic clear and comprehensible like you do. Maybe you should do teaching lessons for these people. Also you should have millions of subscribers!

  • @sandralydiametsaha9261
    @sandralydiametsaha9261 5 years ago +1

    Thank you very much for your videos!
    When will you post the next one?

  • @koderr100
    @koderr100 2 years ago +1

    Thanks for the videos, the best of anything I've seen. I will use this "pe-pe-po-pi-po" as the message alarm on my phone :)

  • @IQstrategy
    @IQstrategy 5 years ago +1

    Great videos again! XGBoost next? As this is supposed to solve both variance (RF) & bias (Boost) problems.

  • @rodrigomaldonado5280
    @rodrigomaldonado5280 5 years ago +4

    Hi StatQuest, would you please make a video about Naive Bayes? It would be really helpful.

  • @nayrnauy249
    @nayrnauy249 2 years ago +1

    Josh my hero!!!

  • @CodingoKim
    @CodingoKim 1 year ago +3

    My life has been changed 3 times. First, when I met Jesus. Second, when I found my true love. Third, it's you Josh

  • @haitaowu5888
    @haitaowu5888 3 years ago

    Hi, I have a few questions: 1. How do we know when the GBDT algorithm stops (apart from M, the number of trees)? 2. How do I choose the value of M, and how do I know it is optimal?
    Nice work by the way, best explanation I have found on the internet.

    • @statquest
      @statquest  3 years ago

      You can stop when the predictions stop improving very much. You can try different values for M and plot predictions after each tree and see when predictions stop improving.
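
      One way to run that check in scikit-learn (a minimal sketch; the data and the number of trees are illustrative, not from the video):

      ```python
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.metrics import log_loss
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=500, random_state=0)
      X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

      gbc = GradientBoostingClassifier(n_estimators=300, learning_rate=0.1).fit(X_tr, y_tr)

      # validation loss after each tree; pick M where it stops improving
      losses = [log_loss(y_val, p) for p in gbc.staged_predict_proba(X_val)]
      print(int(np.argmin(losses)) + 1)

      # scikit-learn can also stop early on its own:
      # GradientBoostingClassifier(n_iter_no_change=10, validation_fraction=0.1)
      ```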

    • @haitaowu5888
      @haitaowu5888 3 years ago +1

      @@statquest thank you!

  • @pmanojkumar5260
    @pmanojkumar5260 4 years ago +1

    Great ..

  • @jrgomez7340
    @jrgomez7340 5 years ago

    Very helpful explanation. Can you also add a video on how to do this in R? Thanks

  • @jongcheulkim7284
    @jongcheulkim7284 2 years ago +1

    Thank you so much.

  • @aweqweqe1
    @aweqweqe1 2 years ago +1

    Respect and many thanks from Russia, Moscow

  • @junaidbutt3000
    @junaidbutt3000 5 years ago +1

    Another superb video Josh. The example was very clear and I'm beginning to see the parallels between the regression and classification cases.
    One key distinction seems to be in calculating the output value of the terminal nodes for the trees.
    In the regression case the average was taken of the values in the terminal nodes (although this can be changed based on the loss function selected). In the classification case it seems that a different method is used to calculate the output values at the terminal nodes, but it seems to be a function of the loss function (presumably a loss function which takes into account a Bernoulli process?).
    Secondly, we also have to be careful in converting the output of the tree ensemble to a probability score. The output is a log(odds) score and we have to convert it to a probability before we can calculate residuals and generate predictions.
    Is my understanding more or less correct here? Or have I missed something important? Thanks again!

    • @statquest
      @statquest  5 years ago +1

      You are correct! When Gradient Boost is used for Classification, some liberties are taken with the loss function that you don't see when Gradient Boost is used for Regression. The difference being that the math is super easy for Regression, but for Classification, there are not any easy "closed form" solutions. In theory, you could use Gradient Descent to find approximations, but that would be slow, so, in practice, people use an approximation based on the Taylor series. That's where that funky looking function used to calculate Output Values comes from. I'll cover that in Part 4.
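
      For readers who want to see the whole recipe in one place, here is a rough sketch of the loop for binary classification. The data is made up, the leaf outputs use the residual / (p*(1-p)) formula discussed in this thread, and the learning rate of 0.8 is just an example value:

      ```python
      import numpy as np
      from sklearn.tree import DecisionTreeRegressor

      X = np.array([[12.0], [87.0], [44.0], [19.0], [32.0], [14.0]])   # a single made-up feature, e.g. Age
      y = np.array([1, 1, 0, 0, 1, 1])                                 # the classification target: Yes=1, No=0

      # step 0: the initial prediction is the log(odds) of the training labels
      log_odds = np.full(len(y), np.log(y.sum() / (len(y) - y.sum())))
      p = 1 / (1 + np.exp(-log_odds))                # convert log(odds) to probabilities

      for m in range(2):                             # two boosting rounds, just to show the loop
          residuals = y - p                          # pseudo-residuals
          tree = DecisionTreeRegressor(max_leaf_nodes=3).fit(X, residuals)
          leaf = tree.apply(X)                       # which leaf each row lands in

          # leaf output = sum(residuals) / sum(p * (1 - p)) over the rows in that leaf
          gamma = {l: residuals[leaf == l].sum() / (p[leaf == l] * (1 - p[leaf == l])).sum()
                   for l in np.unique(leaf)}

          log_odds = log_odds + 0.8 * np.array([gamma[l] for l in leaf])
          p = 1 / (1 + np.exp(-log_odds))            # new predicted probabilities

      print(np.round(p, 2))                          # probabilities move toward the 0/1 labels
      ```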

  • @enkhbaatarpurevbat3116
    @enkhbaatarpurevbat3116 4 years ago +1

    love it

  • @ulrichwake1656
    @ulrichwake1656 5 years ago +3

    Thank you so much. Great videos again and again.
    One question: what is the difference between XGBoost and Gradient Boost?

    • @mrsamvs
      @mrsamvs 4 years ago

      please reply @statQuest team

  • @sajjadabdulmalik4265
    @sajjadabdulmalik4265 4 years ago

    Hi Josh, thanks a lot for your clearly explained videos. I had a question about 12:17: when you make the second tree, you split on Age twice, so both the root and the decision node use Age. If this is correct, won't a continuous variable create a kind of bias? My second question: when we classify the new person at 14:40, does the initial log(odds) still remain 0.7? Assuming this is essentially your test set, what happens in a real-world scenario where we have more records? Does the log(odds) change with the new data we want to predict, i.e., do the log(odds) for the train and test sets depend on their own averages?

  • @dhruvarora6927
    @dhruvarora6927 5 years ago

    Thank you for sharing this Josh. I have a quick question: the subsequent trees, which predict residuals, are regression trees (not classification trees), since we are predicting continuous values (residuals of probabilities)?

  • @anshvashisht8519
    @anshvashisht8519 10 months ago +1

    really liked this intro

  • @mengdayu6203
    @mengdayu6203 5 years ago +17

    How does the multi-classification algorithm work in this case? Using one vs rest method?

    • @bharathbhimshetty8926
      @bharathbhimshetty8926 4 years ago +2

      It's been over 11 months and no reply from josh... bummer

    • @AnushaCM
      @AnushaCM 4 years ago +2

      have the same question

    • @ketanshetye5029
      @ketanshetye5029 4 years ago +1

      @@AnushaCM well, we could use one vs rest approach

    • @Andynath100
      @Andynath100 3 years ago +3

      It uses a Softmax objective in the case of multi-class classification. Much like Logistic(Softmax) regression.
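
      A minimal scikit-learn sketch of that idea (the data is made up): for K classes it fits one regression tree per class in every boosting round and pushes the K per-class scores through a softmax to get probabilities.

      ```python
      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier

      X, y = make_classification(n_samples=300, n_classes=3, n_informative=6, random_state=0)
      gbc = GradientBoostingClassifier(n_estimators=10).fit(X, y)

      # one regression tree per class per boosting round
      print(gbc.estimators_.shape)   # (10, 3)
      ```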

  • @pranaykothari9870
    @pranaykothari9870 5 years ago +2

    Can GB for classification be used for multiple classes? If yes, what does the math look like? The video explains the binary case.

  • @deepakmehta1813
    @deepakmehta1813 3 years ago

    Fantastic song, Josh. I have started picturing that I am attending a class and the professor/lecturer walks into the room with a guitar, and the greeting is the song. This could be the new norm following StatQuest. One question regarding gradient boost: why does it restrict the size of the tree based on the number of leaves? What would happen if that restriction were ignored? Thanks, Josh. Once again, a superb video on this topic.

    • @statquest
      @statquest  3 years ago

      If you build full sized trees then you would overfit the data and you would not be using "weak learners".
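
      In scikit-learn terms that restriction is the max_depth / max_leaf_nodes setting; the values below are only illustrative:

      ```python
      from sklearn.ensemble import GradientBoostingClassifier

      # small trees keep each learner "weak"; a common rule of thumb is roughly 8 to 32 leaves
      weak_learners = GradientBoostingClassifier(max_leaf_nodes=8, learning_rate=0.1)

      # very deep trees can fit the training data almost perfectly, so the ensemble overfits
      full_sized_trees = GradientBoostingClassifier(max_depth=20, max_leaf_nodes=None)
      ```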

  • @sandipansarkar9211
    @sandipansarkar9211 2 years ago +1

    finished watching

  • @rungrawin1994
    @rungrawin1994 2 years ago +2

    Listening to your song makes me think of Phoebe Buffay haha.
    Love it, anyway!

    • @statquest
      @statquest  2 years ago +1

      See: ua-cam.com/video/D0efHEJsfHo/v-deo.html

    • @rungrawin1994
      @rungrawin1994 2 years ago +1

      ​@@statquest Smelly stat, smelly stat, It's not your fault (to be so hard to understand)

    • @rungrawin1994
      @rungrawin1994 2 years ago

      @@statquest btw I like your explanation of gradient boost too

  • @amanbagrecha
    @amanbagrecha 3 years ago

    I need to learn how to run a PowerPoint presentation lol. Amazing stuff

  • @sebastianlinaresrosas3278
    @sebastianlinaresrosas3278 5 years ago +2

    How do you create each tree? In your decision tree video you use them for classification, but here they are used to predict the residuals (something like regression trees)?

  • @rrrprogram8667
    @rrrprogram8667 5 years ago

    Waiting for part 4

  • @lakshman587
    @lakshman587 3 years ago +4

    16:25 My first *Mega Bam!!!*

  • @123chith
    @123chith 5 years ago +16

    Thank you so much! Can you please make a video on Support Vector Machines?

  • @siddharth4251
    @siddharth4251 1 year ago +1

    subscribed sir....nice efforts sir

  • @shashiramreddy9896
    @shashiramreddy9896 3 years ago

    @StatQuest Thanks for the great content you provide. It's a great explanation of binary classification, but how does all of this apply to multi-class classification?

    • @statquest
      @statquest  3 years ago

      Usually people combine multiple models that each test one class vs everything else.

  • @hamzael2200
    @hamzael2200 3 years ago

    Hey! Thanks for this awesome video. I have a question: around 12:00, how did you build the new tree? What was the criterion for choosing Age less than 66 as the root?

    • @statquest
      @statquest  3 years ago

      Gradient Boost uses Regression Trees: ua-cam.com/video/g9c66TUylZ4/v-deo.html

  • @JoaoVictor-sw9go
    @JoaoVictor-sw9go 2 years ago +2

    Hi Josh, great video as always! Can you explain, or recommend material to help me understand, how the GB algorithm works when we use it for non-binary classification, e.g. when we have three or more possible output classes?

    • @statquest
      @statquest  2 years ago +1

      Unfortunately I don't know a lot about that topic. :(

  • @TechBoy1
    @TechBoy1 9 months ago +1

    The legendary MEGA BAM!!

    • @statquest
      @statquest  9 months ago

      Ha! Thank you! :)

  • @cmfrtblynmb02
    @cmfrtblynmb02 2 years ago +2

    How do you create the classification trees using residual probabilities? Do you stop using some kind of purity index during the optimization in that case? Or do you use regression methods?

    • @statquest
      @statquest  2 years ago

      We use regression trees, which are explained here: ua-cam.com/video/g9c66TUylZ4/v-deo.html

  • @jwc7663
    @jwc7663 4 years ago

    Thanks for the great video! One question: Why do you use 1-sigmoid instead of sigmoid itself?

    • @statquest
      @statquest  4 years ago

      What time point in the video are you asking about?

  • @jayyang7716
    @jayyang7716 2 years ago +1

    Thanks so much for the amazing videos as always! One question: why does the loss function for Gradient Boost classification use residuals instead of cross entropy? Thanks!

    • @statquest
      @statquest  2 years ago

      Because we only have two different classifications. If we had more, we could use softmax to convert the predictions to probabilities and then use cross entropy for the loss.
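
      A short note on how the residual connects to a loss function: for two classes the loss is the negative log-likelihood, L = -[y*log(p) + (1-y)*log(1-p)], with p = e^(log(odds)) / (1 + e^(log(odds))). Its derivative with respect to the log(odds) is p - y, so the residual y - p is exactly the negative gradient of that loss, which is what makes this "gradient" boosting; the softmax / cross-entropy version generalizes the same idea to more classes.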

    • @jayyang7716
      @jayyang7716 2 years ago

      @@statquest Thank you!

  • @user-ll8dr9bm5v
    @user-ll8dr9bm5v 6 months ago

    @statquest You mentioned at 10:45 that we build a lot of trees. Are you referring to bagging, or to building a different tree at each iteration?

    • @statquest
      @statquest  5 months ago

      Each time we build a new tree.