Machine Learning Lecture 19 "Bias Variance Decomposition" -Cornell CS4780 SP17

  • Published 14 Oct 2024
  • Lecture Notes:
    www.cs.cornell....

COMMENTS • 79

  • @thecactus7950
    @thecactus7950 5 years ago +92

    Man, it's such a privilege being able to watch stuff like this.

    • @Biesterable
      @Biesterable 5 years ago +8

      So true

    • @TrentTube
      @TrentTube 5 years ago +3

      I feel the exact same way. I am constantly humbled and thrilled this is available.

  • @filippovannella4957
    @filippovannella4957 5 years ago +43

    This man is one of the best professors I have ever seen. Thanks a lot for this lecture series.

  • @tarunluthrabk
    @tarunluthrabk 3 years ago +11

    I searched extensively for good content on machine learning, and by God's grace I found it! Thank you, Prof. Weinberger.

  • @juliocardenas4485
    @juliocardenas4485 3 years ago +5

    I’m using what I’ve learned here to try improving people’s lives. I’m a data scientist in healthcare and a former radiology researcher.
    Thank you for sharing this freely.

  • @ebiiseo
    @ebiiseo 5 years ago +8

    Your ability to uncover the insights behind all those mathematical formulas is superb. I really like the way you teach. Thank you for uploading this.

  • @xwcao1991
    @xwcao1991 3 years ago +3

    Thank you, Prof. Weinberger, for bringing educational fairness to people from third-world countries like me, who cannot afford to study at one of the world-class universities like Cornell. Wishing you health and happiness your entire life.

  • @jorgeestebanmendozaortiz873
    @jorgeestebanmendozaortiz873 3 years ago +2

    Due to the Covid crisis the professors at my university went on strike for most of the semester, so my ML class got ruined. Fortunately I found your lectures, and I've been following along over the last months. I have to say this is the most thorough introductory ML course I've found out there. Thank you very much, Prof. Kilian, for making your lectures available to everyone. You're working towards a freer and better world by doing so.

  • @jachawkvr
    @jachawkvr 4 years ago +7

    I was familiar with these concepts before watching this lecture, but now I feel like I actually understand what bias and variance mean. Thank you so much for explaining these so well!

  • @yuniyunhaf5767
    @yuniyunhaf5767 5 years ago +5

    I can't believe I have reached this point. He shaped the way I think about ML. Best professor.

  • @deltasun
    @deltasun 4 years ago +3

    That's the clearest exposition of the bias-variance decomposition I've ever seen (and I've seen quite a few). By far.

  • @MohamedTarek-vt4lb
    @MohamedTarek-vt4lb 9 months ago +1

    This is amazing! Bless you, Professor Kilian, if you read this.

  • @kevinshen3221
    @kevinshen3221 3 years ago +1

    This is absolutely gold. I was so confused reading An Introduction to Statistical Learning because they give no explanation of how they get the bias-variance tradeoff, and then I found this!

  • @psfonseka
    @psfonseka 5 years ago +4

    This was super helpful for my own classwork. Thank you so much for posting your lectures publicly!

  • @jenishah9825
    @jenishah9825 2 years ago

    Such videos don't generally come up in YT suggestions. But if you have found it, it is a gold mine!

  • @rajeshs2840
    @rajeshs2840 5 years ago +5

    Oh man, hats off to your efforts. It's an amazing lecture.

  • @vatsan16
    @vatsan16 4 years ago +34

    Me: Machine learning is a black box, the math is too abstract, and nothing really makes sense
    Professor Weinberger: Hold my beer

  • @muratcan__22
    @muratcan__22 5 years ago +13

    this video is gold

  • @vishnuvardhan6625
    @vishnuvardhan6625 7 months ago

    Best video on Bias-Variance Decomposition ❤

  • @crystinaxinyuzhang3621
    @crystinaxinyuzhang3621 4 years ago +4

    It's such an amazing lecture! I've never thought of each trained ML model itself as a random variable before, and this is really eye-opening.

  • @sans8119
    @sans8119 4 years ago +3

    An amazing lecture!! Makes things very clear.

  • @angelocortez5185
    @angelocortez5185 2 years ago

    These videos popped up in my feed. I didn't realize you wrote the MLKR paper as well. Seeing your videos makes me wish I had taken a formal class with you. Thank you for this content, Kilian!

  • @haodongzheng7045
    @haodongzheng7045 3 years ago +1

    Thank you, professor. I feel like I've grown up a little bit after watching your video ;)

  • @sheikhshafayat6984
    @sheikhshafayat6984 3 years ago

    I don't usually comment anywhere, but I can't help saying thanks to you. Such great teaching skill!

  • @TheAIJokes
    @TheAIJokes 3 years ago

    You are one of my favourite teachers, sir... love you from India... ❤️

  • @NO_REPLY_ALARM_TOWARD_ME
    @NO_REPLY_ALARM_TOWARD_ME 2 years ago

    I think the lecturer always gives the students several minutes to clarify things for themselves, even when a proof step may seem trivial. It may look difficult, but it is concise and easy to follow. Thanks.

  • @vishchugh
    @vishchugh 4 years ago +1

    BEST LECTURE ON BIAS VARIANCE TR !!!!!!!!!!!!!!!!

  • @taketaxisky
    @taketaxisky 4 years ago +2

    The way the error is decomposed reminds me of the decomposition of the sum of squares in ANOVA into within-group SS and between-group SS; it's a similar calculation (see the note below).
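
    A short note on the analogy above (a sketch in the notation of the lecture notes, not a quote from the lecture): both decompositions rest on adding and subtracting a mean so that the cross term vanishes. With \bar{h}(x) = E_D[h_D(x)],

        E_{x,y,D}\big[(h_D(x) - y)^2\big] = E_{x,D}\big[(h_D(x) - \bar{h}(x))^2\big] + E_{x,y}\big[(\bar{h}(x) - y)^2\big],

    because E_D[h_D(x) - \bar{h}(x)] = 0 kills the cross term 2\,E\big[(h_D(x) - \bar{h}(x))(\bar{h}(x) - y)\big], just as the group means absorb the cross term in SS_total = SS_between + SS_within.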

  • @mateuszjaworski2974
    @mateuszjaworski2974 10 months ago

    It's like a good action movie: you can't wait to see what comes next.

  • @hanseyye1468
    @hanseyye1468 3 years ago +2

    Thanks, Professor Weinberger. I have one question about 23:28: why do we use the joint distribution p(x,y) here and not a conditional p(y|x), or p(y)*p(x)? (See the note after this thread.)

    • @kilianweinberger698
      @kilianweinberger698  3 years ago

      Because you are drawing x and y randomly, and your data set and algorithm depend on both. You could factor this into first drawing x, then y, i.e. P(y|x)P(x), but it really wouldn't change much in the analysis. Hope this helps.

    • @hanseyye1468
      @hanseyye1468 3 years ago

      @@kilianweinberger698 thank you so much
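
    For reference (a sketch in the notation of the lecture notes, not a quote from the lecture): the expected test error discussed in this thread is

        E_{(x,y) \sim P, \; D \sim P^n}\big[(h_D(x) - y)^2\big] = E_D\Big[ \int_x \int_y (h_D(x) - y)^2 \, P(x,y) \, dy \, dx \Big],

    and since P(x,y) = P(y \mid x)\,P(x), writing the joint or the factored form gives the same quantity; the joint is simply more compact.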

  • @StevenSarasin
    @StevenSarasin 10 months ago

    That means the noise also depends on the feature set, so the noise is not necessarily irreducible if you can find new features to include. In the housing-price example you would appear to have a lot of noise if you left a location variable out of the features x! Interesting. So we have reduced the generalization error to: the dependency on D (the variance; will more data improve the situation?), the dependency on the feature set (does there exist a feature set that limits the variance of y itself given x?), and the bias (are we in principle flexible enough to match the true data pattern, linear vs. non-linear?).

  • @danielsiemmeister5286
    @danielsiemmeister5286 3 years ago

    First of all, thank you for this very intuitive explanation, Mr. Weinberger!
    I have some small questions and remarks which aren't 100% clear to me:
    - You said that y (given x) is random, so we want to pick one statistic depending on our goals. In this case you chose the expectation E[y|x]. (One could, for example, choose the median instead, couldn't one?) However, some minutes later you chose the squared loss function as a "nice" choice for regression. Aren't these two sides of the same coin? If I choose the squared loss function, then I am picking E[y|x]? (If I choose the absolute-value loss function, then I am choosing the median.) So this is my first question: are my thoughts right? (See the sketch after this thread.)
    - What would the proof look like if I am not in the "squared loss / expectation" setting, i.e. for a generic loss function or statistic of y|x? This is my second question.
    - What would the proof look like in the classification setting? I think that is pretty much the same question as question 2. Am I right in saying that if the distribution of y|x is discrete then I am in a classification setting, and if it is continuous then I am in a regression setting? Furthermore, if I pick the statistic of y|x (or the loss function) in a generic way, then I have a proof for both classification and regression problems?
    I would be very thankful if anyone could answer or comment on my questions!
    Yours, Daniel

    • @kilianweinberger698
      @kilianweinberger698  3 years ago +1

      Yes, you are right. The math becomes a lot trickier if you don't use the squared loss, but ultimately the principle is the same for pretty much any loss function.
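
    A sketch of the point discussed in this thread (standard facts, not specific to this lecture): for the squared loss the best single prediction at x is the conditional mean, since

        \frac{\partial}{\partial c} E\big[(y - c)^2 \mid x\big] = -2\,\big(E[y \mid x] - c\big) = 0 \;\Longleftrightarrow\; c = E[y \mid x] = \bar{y}(x),

    whereas for the absolute loss \arg\min_c E\big[\,|y - c|\, \mid x\big] is the conditional median. So choosing the loss and choosing the target statistic of y \mid x are indeed two sides of the same coin.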

  • @xmtiaz
    @xmtiaz 3 years ago +1

    This was beautiful.

  • @ashraf736
    @ashraf736 1 year ago

    What a wonderful lecture.

  • @noblessetech
    @noblessetech 4 years ago +1

    Awesome video playlist, love it.

  • @abhyudayasrinet17
    @abhyudayasrinet17 5 years ago +1

    A really great explanation

  • @vocabularybytesbypriyankgo1558
    @vocabularybytesbypriyankgo1558 17 days ago

    Thank you so much Sir !!!

  • @florianwicher
    @florianwicher 3 years ago +1

    It was a little bit slow, but I got it now. Thanks a lot!

  • @roniswar
    @roniswar 3 years ago

    Dear Prof, thank you again for posting this, very useful and interesting!! One question: in a regression setup, why do you call h (the hypothesis function) the "expected classifier"? Is this the common terminology when thinking about a regression problem? Thanks!

    • @kilianweinberger698
      @kilianweinberger698  3 years ago +2

      No, it is only in the setting where you consider the training set as a random variable. Under this view, the classifier also becomes a random variable (as it is a function of the training set), and you can in theory compute its expectation. Hope this helps.

    • @roniswar
      @roniswar 3 years ago

      @@kilianweinberger698 Thank you! One other thing that I didn't see anyone ask: what happens to the bias-variance tradeoff, which you fully showed for MSE, when the loss function is not MSE? Does the decomposition still contain exactly those three quantities of bias, variance, and noise? How do we measure the tradeoff in that case? We no longer have this convex parabola shape, I assume. (If you have a good source explaining this issue, please refer me to it. See also the summary after this thread.)
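
    In the notation of the lecture notes (a summary of the thread above, not a new result): the "expected classifier", or expected hypothesis, is \bar{h} = E_{D \sim P^n}[h_D], the average regressor over all training sets of size n, and for the squared loss the decomposition is

        E_{x,y,D}\big[(h_D(x) - y)^2\big]
          = \underbrace{E_{x,D}\big[(h_D(x) - \bar{h}(x))^2\big]}_{\text{variance}}
          + \underbrace{E_{x}\big[(\bar{h}(x) - \bar{y}(x))^2\big]}_{\text{bias}^2}
          + \underbrace{E_{x,y}\big[(\bar{y}(x) - y)^2\big]}_{\text{noise}}.

    For other losses the error generally does not split into exactly these three additive pieces; bias and variance can still be defined, but how they combine depends on the loss.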

  • @jordankuzmanovik5297
    @jordankuzmanovik5297 4 years ago +1

    Wonderful!!...Bravo

  • @janismednieks1277
    @janismednieks1277 3 years ago

    "My son is doing that now, he's in second grade."
    If you're the one teaching him, I believe you. Thanks.

  • @immabreakaleg
    @immabreakaleg 4 years ago +2

    17:48 what a boss question wow

  • @pendekantimaheshbabu9799
    @pendekantimaheshbabu9799 4 years ago

    Excellent. Can we apply the bias-variance trade-off across different models, e.g. comparing linear regression and polynomial regression? Or does bold H consist of a set of hypotheses containing only linear regressors?

    • @kilianweinberger698
      @kilianweinberger698  4 years ago +1

      Ultimately the BV trade-off exists for all models. However, as far as I know the derivation of this decomposition only falls into place so nicely in a few steps for linear regression.

  • @ayushmalik7093
    @ayushmalik7093 2 years ago

    Hi Prof,
    High variance implies overfitting, but overfitting has two parts: high test error and low training error. How do we infer low training error from high variance? High variance in h_D(x) could also be the result of our algorithm learning gibberish, which could lead to high test error and high training error. IMO, low bias and high variance together should mean overfitting, since in that case the model predictions for different datasets will spread around the centre of your dartboard.

  • @macc7374
    @macc7374 3 years ago

    Hi Professor! Thank you for uploading this video. When we start the derivation by representing the expected test error in terms of h_D(x) and y, how can we explain the presence of noise? Our assumption is that y is the correct label. So while there is certainly noise in real-world examples, given the starting point of the derivation here, should noise be expected to show up? (See the note after this thread.)

    • @kilianweinberger698
      @kilianweinberger698  3 years ago +4

      Keep in mind noise can be a bad measurement, but it can also be the part of the label that you just cannot explain with your representation of x. Imagine I am predicting house prices (y) based on features about a house (x). My features are e.g. number of bedrooms, square footage, age, ... But now the price of a house decreases because a really loud and rambunctious fraternity moves in next door - something that is not captured in my x at all. For this house the price y is now abnormally low. The price is correct, but given your limited features the only way you can explain it is as noise.

    • @macc7374
      @macc7374 3 years ago

      @@kilianweinberger698 thank you
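
    In symbols (a paraphrase of the answer above, in the notation of the lecture notes): with \bar{y}(x) = E[y \mid x], the noise term is

        \text{noise} = E_{x,y}\big[(\bar{y}(x) - y)^2\big],

    i.e. the spread of the labels around their conditional mean given the features you actually have. The fraternity-next-door effect lands in this term not because the price is wrong, but because nothing in x can explain it; add such a feature and part of the "noise" becomes explainable.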

  • @ammarkhan2611
    @ammarkhan2611 4 years ago

    Hi Professor,
    Is there a way to get access to the assignments?

  • @amit_muses
    @amit_muses 4 years ago

    I have a good command of Bayes' theorem and the law of total probability, but I couldn't understand the symbols the prof used. I could tell the prof used some concepts from expectation theory, but I couldn't follow them well. Can someone suggest some material for this part that I can work through in a short period, so that I can understand this lecture well?

  • @gauravsinghtanwar4415
    @gauravsinghtanwar4415 4 years ago

    Why do we need the probability term in the expected test error expression?

  • @siddhanttandon6246
    @siddhanttandon6246 2 years ago

    Hey Prof, I have a question. In this derivation we essentially decomposed the risk for a new sample, i.e. the out-of-sample risk, into three parts. Is there some theory that does the same breakdown of the risk on our training set, i.e. on samples the model has already seen? I am particularly interested to know if my training loss can ever go to zero. (See the note after this thread.)

    • @kilianweinberger698
      @kilianweinberger698  2 years ago +1

      That depends on your hypothesis class (i.e. what algorithm you are using). Maybe take a look at the lectures on Boosting. AdaBoost is an ensemble algorithm that (given some assumptions) guarantees that the training error will go to zero (if you average several classifiers together).
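
    For the curious, the standard AdaBoost training-error bound from the boosting literature (not derived in this thread): if each weak learner achieves weighted error \epsilon_t = \tfrac{1}{2} - \gamma_t with \gamma_t > 0, then after T rounds the ensemble's training error is at most

        \prod_{t=1}^{T} 2\sqrt{\epsilon_t (1 - \epsilon_t)} = \prod_{t=1}^{T} \sqrt{1 - 4\gamma_t^2} \le \exp\!\Big(-2 \sum_{t=1}^{T} \gamma_t^2\Big),

    which goes to zero as long as every weak learner stays slightly better than chance.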

  • @adiratna96
    @adiratna96 3 years ago +1

    I didn't understand why D and (x, y) are independent. Can anyone explain why, please? TIA.

    • @adiratna96
      @adiratna96 3 years ago +1

      Damn, never mind I got it.

  • @sandeshhegde9143
    @sandeshhegde9143 5 years ago +2

    Where is lecture 18? (I don't see it in the playlist)

    • @Saganist420
      @Saganist420 5 years ago +5

      Lecture 18 was an exam, so it was not recorded.

    • @TrentTube
      @TrentTube 5 years ago

      I eventually concluded it was the exam I skipped :D

  • @meenakshisundaram8310
    @meenakshisundaram8310 3 years ago

    Thank you very much

  • @utkarshtrehan9128
    @utkarshtrehan9128 3 years ago

    Enlightenment

  • @taketaxisky
    @taketaxisky 4 years ago

    How does overfitting affect the decomposed error terms? Maybe it is not relevant here. (See the sketch after this thread.)

    • @taketaxisky
      @taketaxisky 4 years ago +1

      Just realized a graph in the lecture notes explains this!
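
    A minimal sketch of how one could see this numerically (illustrative only; the synthetic sine target, noise level, sample size, number of repetitions, and the polynomial-degree knob are all assumptions, not taken from the lecture or its notes). Averaging predictions over many freshly drawn training sets estimates the expected hypothesis, from which bias^2 and variance can be measured as the model gets more flexible:

        # Illustrative sketch (not from the lecture): estimate bias^2 and variance
        # of polynomial regression on synthetic data as model complexity grows.
        import numpy as np

        rng = np.random.default_rng(0)

        def true_f(x):
            # the noise-free target, i.e. the conditional mean of y given x
            return np.sin(2 * np.pi * x)

        def sample_dataset(n=30, sigma=0.3):
            # draw one training set D of n points with Gaussian label noise
            x = rng.uniform(0.0, 1.0, n)
            y = true_f(x) + rng.normal(0.0, sigma, n)
            return x, y

        x_test = np.linspace(0.0, 1.0, 200)
        for degree in [1, 3, 9]:                     # model complexity
            preds = []
            for _ in range(500):                     # many training sets D ~ P^n
                x_tr, y_tr = sample_dataset()
                coeffs = np.polyfit(x_tr, y_tr, degree)
                preds.append(np.polyval(coeffs, x_test))
            preds = np.array(preds)                  # one row per trained h_D
            h_bar = preds.mean(axis=0)               # estimate of E_D[h_D(x)]
            bias2 = np.mean((h_bar - true_f(x_test)) ** 2)
            variance = np.mean(preds.var(axis=0))
            print(f"degree={degree}: bias^2={bias2:.3f}, variance={variance:.3f}")

    Typically the bias^2 estimate shrinks and the variance estimate grows with the degree, which is exactly the overfitting picture in the notes.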

  • @lorenzoappino9158
    @lorenzoappino9158 3 years ago

    Kilian is my hero

  • @Saganist420
    @Saganist420 5 years ago +5

    My real life dart playing skills have high bias, high variance.

  • @gaconc1
    @gaconc1 3 years ago +1

    This is a form of the Pythagorean theorem

  • @bharatbajoria
    @bharatbajoria 4 years ago

    Why is there no D at 37:00 in b^2? (See the note after this thread.)

    • @kilianweinberger698
      @kilianweinberger698  4 years ago +2

      Both terms, y-bar and y, are independent of the training data set D.
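
    Spelling out the answer above (same notation as the lecture notes): \bar{y}(x) = E[y \mid x] and y are determined by P(x, y) alone, so inside an expectation over D they behave as constants, e.g.

        E_D\big[(\bar{y}(x) - y)^2\big] = (\bar{y}(x) - y)^2,

    which is why the D subscript disappears from that term.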

  • @kc1299
    @kc1299 4 years ago

    "disappears into some good feeling" hahaha

  • @logicboard7746
    @logicboard7746 2 years ago

    Point @22:00

  • @deepfakevasmoy3477
    @deepfakevasmoy3477 4 years ago +1

    24:56 please, someone, ask a question. I am not ready for war :)

  • @hohinng8644
    @hohinng8644 2 years ago

    Everything is excellent except the poor handwriting.