Bias/Variance (C2W1L02)

  • Published 5 Sep 2024
  • Take the Deep Learning Specialization: bit.ly/3amgU4n
    Check out all our courses: www.deeplearni...
    Subscribe to The Batch, our weekly newsletter: www.deeplearni...
    Follow us:
    Twitter: / deeplearningai_
    Facebook: / deeplearninghq
    Linkedin: / deeplearningai

COMMENTS • 22

  • @exampreparationonline4166
    @exampreparationonline4166 4 years ago +8

    In the deep learning era, we can largely overcome the bias-variance trade-off.

    **bias-variance trade-off**
    the trade-off is the property of a set of predictive models whereby models with lower bias in parameter estimation have higher variance, and vice versa

    **high variance**
    when there is a large gap between training set error and validation error
    train error: 1%
    dev/test error: 11%

    **high bias**
    when even the training set error is high, e.g. one particular class keeps being predicted (false positives)
    train error: 15%
    dev error: 16%

    **high bias and high variance**
    when the training set error is high, and there is also a large gap between training and validation error
    train error: 15%
    dev error: 30%

    **low bias and low variance**
    when both the training set error and the validation set error are low
    train error: 1%
    dev error: 2%
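
    A minimal Python sketch of the diagnostic this comment describes. The threshold values and the assumption that the optimal (Bayes) error is roughly 0% are illustrative, not part of the original comment:

        # Rough bias/variance diagnosis from training and dev error rates (as fractions).
        # Assumes Bayes (optimal) error is roughly 0%; the thresholds are illustrative only.
        def diagnose(train_error, dev_error, bias_threshold=0.05, gap_threshold=0.05):
            high_bias = train_error > bias_threshold                      # poor fit even on training data
            high_variance = (dev_error - train_error) > gap_threshold     # large train/dev gap

            if high_bias and high_variance:
                return "high bias and high variance"
            if high_bias:
                return "high bias (underfitting)"
            if high_variance:
                return "high variance (overfitting)"
            return "low bias and low variance"

        # The four cases listed in the comment:
        print(diagnose(0.01, 0.11))  # high variance (overfitting)
        print(diagnose(0.15, 0.16))  # high bias (underfitting)
        print(diagnose(0.15, 0.30))  # high bias and high variance
        print(diagnose(0.01, 0.02))  # low bias and low variance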

  • @quietkael7349
    @quietkael7349 4 years ago +41

    I think it would be important to explain why the terms “bias” and “variance” are used to describe these phenomena. Without explaining the context that our overall training algorithm is sampling specific outcome models from a distribution over all possible models that our algorithm might train, it’s not very clear what insight these terms add beyond the simpler concepts of overfitting and underfitting. (The standard decomposition behind these terms is sketched after this thread.)

    • @MrCmon113
      @MrCmon113 4 years ago +1

      Yeah.
      What's central to understanding this is imagining other possible training sets.

    • @bpc1570
      @bpc1570 3 years ago +2

      What you are describing is related to the notion of empirical risk minimization, which is explained in his CS229 class (lecture 9, I believe) and is also searchable from here
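
    For the context this thread points at: for squared-error regression, the expected test error averaged over random training sets D decomposes as follows (a standard result, sketched here with y = f(x) + ε and noise variance σ²):

        \[
        \mathbb{E}_{D,\varepsilon}\big[(y - \hat{f}_D(x))^2\big]
          = \underbrace{\big(f(x) - \mathbb{E}_D[\hat{f}_D(x)]\big)^2}_{\text{bias}^2}
          + \underbrace{\mathbb{E}_D\big[\big(\hat{f}_D(x) - \mathbb{E}_D[\hat{f}_D(x)]\big)^2\big]}_{\text{variance}}
          + \underbrace{\sigma^2}_{\text{irreducible noise}}
        \]

    The variance term measures how much the trained model would change across other possible training sets, which is the extra insight these terms carry beyond "overfitting" and "underfitting" alone.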

  • @hamzaamjad699
    @hamzaamjad699 3 years ago +24

    High bias ----> underfitting ----> high training set error
    High variance ----> overfitting ----> dev set error much higher than training set error

  • @leiz6093
    @leiz6093 2 months ago

    The whole discussion around the "bias-variance trade-off" was invented by statisticians who were baffled when trying to fit (classic) machine learning model training/testing into the old statistical paradigm. And then some interviewers found it really effective for baffling (and disqualifying) candidates applying for ML/data scientist jobs, which further popularized and abused the concept. The whole issue can be readily explained and addressed directly without bringing in these two out-of-place, and thus confusing, terms. Thanks to Andrew for bringing clarity to this.

  • @khaledsrrr
    @khaledsrrr 1 year ago

    The best and easiest explanation of bias variance tradeoff ❤❤

  • @Rkv224
    @Rkv224 5 years ago +8

    Thank you very much for making these concepts so easy to understand

  • @feedtowin1309
    @feedtowin1309 2 years ago

    Thank you so much for your content, sir.

  • @sandipansarkar9211
    @sandipansarkar9211 3 years ago

    Very nice explanation. Need to watch it again.

  • @plttji2615
    @plttji2615 2 years ago

    Thank you for the video. Can you help me with how to prove whether the estimate in this question is unbiased? Question: compare the average height of employees at Google with the average height in the United States; do you think it is an unbiased estimate? If not, how can you prove that it is not?

  • @jorikchamberik4848
    @jorikchamberik4848 6 years ago +20

    Is the dev set the same as validation set?

    • @raghavgupta2794
      @raghavgupta2794 6 years ago +20

      Yes, it's just another name for validation set

    • @paradise_relaxation
      @paradise_relaxation 4 years ago

      yes

    • @MrCmon113
      @MrCmon113 4 years ago

      Yes, this applies equally to the test set or some completely external population, though.
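
      To make the terminology concrete, here is a minimal sketch of carving a dataset into train, dev (validation), and test splits; the array shapes and split ratios are hypothetical, and scikit-learn is assumed to be available:

          import numpy as np
          from sklearn.model_selection import train_test_split

          # Hypothetical toy data: 1000 examples, 20 features.
          X = np.random.randn(1000, 20)
          y = np.random.randint(0, 2, size=1000)

          # Carve out the test set first, then split the remainder into train and dev.
          X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
          X_train, X_dev, y_train, y_dev = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

          # "dev set" and "validation set" name the same split: it is used for tuning,
          # while the test set is held back for the final evaluation.
          print(len(X_train), len(X_dev), len(X_test))  # 600 200 200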

  • @siabikebenezer
    @siabikebenezer 5 years ago +1

    Please, is it possible to calculate the bias from the actual and predicted values?

    • @saanvisharma2081
      @saanvisharma2081 5 years ago

      Yes, we can!
      I know how to do that for linear regression, but I still have to relate it to multiple regression and more complex algorithms
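
      One common single-dataset proxy is the mean error (average residual) between actual and predicted values; note this is only a rough stand-in, since bias in the bias-variance sense is defined over repeated training sets. A minimal sketch with hypothetical NumPy arrays:

          import numpy as np

          # Hypothetical actual and predicted values.
          y_true = np.array([3.0, 5.0, 2.5, 7.0])
          y_pred = np.array([2.8, 5.4, 2.9, 6.6])

          # Mean error (average residual), often reported as the model's "bias" on one dataset.
          mean_error = np.mean(y_pred - y_true)

          # Note: the bias term in the bias-variance decomposition is E_D[f_hat(x)] - f(x),
          # an average over repeated training sets; this single-dataset number is only a proxy.
          print(mean_error)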

  • @PabbaANUBHARATH
    @PabbaANUBHARATH 5 years ago

    wow. clear explanation

  • @Ujjayanroy
    @Ujjayanroy 8 months ago

    are you GM Eric Hansen's brother or something?

  • @lafcadiothelion
    @lafcadiothelion 9 months ago

    The audio hurts my ears.

  • @poojakabra1479
    @poojakabra1479 2 years ago

    You are God

  • @Sam-ne9ns
    @Sam-ne9ns 3 years ago +3

    What is this mess?! You actually charge people for this crap?!!!