Trevor Hastie - Gradient Boosting Machine Learning

  • Published 13 Jan 2025

COMMENTS • 23

  • @rsachs
    @rsachs 9 years ago +58

    Why did the video editor put the inset video right in the middle, over the last line of the slides? They should have placed it where it would not block the slides, e.g. in a corner.

  • @jasmeetsasan902
    @jasmeetsasan902 10 years ago +8

    Big fan of Prof. Hastie! Genius statistician indeed!

  • @newbie8051
    @newbie8051 1 year ago

    Great lecture, got to revise the basics
    Thanks prof 🙌

  • @alkodjdjd
    @alkodjdjd 5 years ago

    This guy is awesome: he knows his stuff very well and explains it brilliantly!

  • @hpent9940
    @hpent9940 8 years ago +4

    It's funny how boosting applies to real life, like cleaning my basement of a 20-year accumulation of stuff, with emotional parameters attached: basically the divide-and-conquer idea behind boosting, where one processes painful emotion while working through each diminished remainder pile of stuff. At the end, you'll say, "Phew, that was not as Herculean as cleaning the Augean stables."

  • @seguranca2009
    @seguranca2009 8 years ago +3

    Incredible talk.

  • @scomeron
    @scomeron 8 years ago +4

    great presentation

  • @MagicmathmandarinOrg
    @MagicmathmandarinOrg 6 years ago

    Excellent advice in the Q&A. Thank you.

  • @KanishkaSharma05
    @KanishkaSharma05 7 years ago +21

    I know that Professor Trevor Hastie is good, but if you are here to learn about Gradient Boosting, then this is not the video for you. The lecture is only 30 minutes, and the remaining 15 minutes are Q&A. It gives only a very brief overview of Gradient Boosting and also covers other algorithms, which doesn't match the title.

    • @Sam-AZ
      @Sam-AZ 7 years ago

      Thanks

    • @UtkarshMishra1958
      @UtkarshMishra1958 6 years ago +2

      Any resources or YouTube videos where I can learn GBM and XGBoost? (A minimal starting sketch follows below.)

  • @emrahyigit
    @emrahyigit 7 years ago

    Very good talk! Congrats!

  • @apanapane
    @apanapane 8 years ago

    Amazing lecture. Thank you!

  • @yitzweb
    @yitzweb 6 years ago

    At 9:40, why is bias slightly increased for bagging just because the trees are shallower? If it were a single tree, then yes, bias would be higher for a shallow tree than for a deep one. However, if the definition of bagging is to average the results of N shallow trees, shouldn't the definition of bias also take into account that the model is defined as using N trees?

    • @puneetrajput2994
      @puneetrajput2994 2 years ago

      Shallow trees tend to miss the finer structure of the pattern, hence the slight increase in bias. Ensembling (averaging) reduces variance, but the shallowness of each tree still contributes extra bias. It's a tradeoff (see the simulation sketch below).

  • @TheCsePower
    @TheCsePower 1 year ago

    I didn't know he had a South African accent! Greetings from Africa.

  • @cashphattichaddi
    @cashphattichaddi 8 years ago +1

    Brilliant!! :-)

  • @kparag01
    @kparag01 5 years ago

    Legend 🙏

  • @adubey40
    @adubey40 7 years ago +5

    Bagging is the poor man's Bayes... haha ;)

  • @rostiposkis
    @rostiposkis 1 year ago

    oh yeah

  • @EndlessMedical
    @EndlessMedical 2 months ago

    In my opinion, no one actually understands how boosting works in depth, as I have never seen anyone explain it like they own it. Everybody just repeats the same formulas, same rules, same blah blah... (the core loop is sketched below).

  • @唐旺-o9q
    @唐旺-o9q 7 years ago