MIT 6.S191: Uncertainty in Deep Learning

  • Published 27 May 2022
  • MIT Introduction to Deep Learning 6.S191: Lecture 10
    Uncertainty in Deep Learning
    Lecturer: Jasper Snoek (Research Scientist, Google Brain)
    Google Brain
    January 2022
    For all lectures, slides, and lab materials: introtodeeplearning.com
    Lecture Outline - coming soon!
    Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
  • Science & Technology

COMMENTS • 12

  • @vimukthirandika872 · 1 year ago

    Thank you MIT

  • @ajit60w · 2 years ago

    OOD means x was not even in the training distribution. p_test(y|x) ≠ p_train(y|x) may also mean wrong classification, or an open set, i.e., classes not seen during training (the feature vector is not within the bounds of the vectors in the training set).
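
    One simple baseline along these lines (a sketch, not from the lecture): score each input by its maximum softmax probability and flag low-confidence inputs as potentially OOD. The `threshold` value below is an arbitrary illustration.

      import numpy as np

      def max_softmax_confidence(logits):
          # Convert logits to probabilities (numerically stable softmax),
          # then take the top-class probability as a confidence score.
          z = logits - logits.max(axis=1, keepdims=True)
          probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
          return probs.max(axis=1)

      def flag_ood(logits, threshold=0.7):
          # Inputs whose top-class probability falls below the threshold
          # are treated as possibly out-of-distribution.
          return max_softmax_confidence(logits) < threshold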

  • @vyacheslavli9254 · 1 year ago

    In the deep ensemble method, which particular classifier does the uncertainty correspond to? Is the assumption that the resulting uncertainty corresponds to the architecture with near-optimal hyperparameters? It rationally should, but overall this sounds very hand-wavy. On top of that, it is the uncertainty of a classifier evaluated on the training domain. How does it change on an OOD dataset?
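
    For context, a minimal sketch of the usual deep-ensemble recipe (assuming all M members share one architecture and differ only in random initialization): the uncertainty belongs to the averaged ensemble prediction, not to any single member. Total predictive entropy splits into the expected per-member entropy plus a mutual-information term that grows when members disagree, and that disagreement term is the part that tends to rise on OOD data.

      import numpy as np

      def ensemble_uncertainty(member_probs):
          # member_probs: (M, N, C) class probabilities from M ensemble members.
          eps = 1e-12
          mean_probs = member_probs.mean(axis=0)                            # (N, C)
          # Entropy of the averaged prediction: total predictive uncertainty.
          total_entropy = -(mean_probs * np.log(mean_probs + eps)).sum(-1)
          # Average entropy of individual members: expected "aleatoric" part.
          expected_entropy = -(member_probs * np.log(member_probs + eps)).sum(-1).mean(0)
          # Mutual information: disagreement between members (epistemic part).
          mutual_info = total_entropy - expected_entropy
          return mean_probs, total_entropy, mutual_info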

  • @zigzag4273 · 1 year ago

    Hey Alex. Hope you're well.
    Is the 2023 course going to be free too?
    If yes, when does it go live?

    • @AAmini · 1 year ago

      Thanks! We are actually announcing the premiere today! The first release will be March 10 and a new lecture will be released every Friday at 10am ET.

  • @TheEightSixEight · 1 year ago

    Please post the slides as indicated in the URL descriptor. Thank you.

    • @gulsenaaltntas5398 · 1 year ago

      You can find the slides from the NeurIPS tutorial here: docs.google.com/presentation/d/1savivnNqKtYgPzxrqQU8w_sObx1t0Ahq76gZFNTo960

  • @nikteshy9131 · 2 years ago

    Thanks )) MIT ))))))

  • @thatapuguy2768 · 1 year ago

    Can someone answer my basic question? The speaker defines confidence as the predicted probability of correctness. I am guessing this is NOT the same as yprob, the predicted probability of the positive class that a trained model returns for every test instance. So how does one get an estimate of the confidence?

    • @anvarkurmukov2438 · 7 months ago

      If you are referring to 13:30, then for binary classification confidence is exactly what you are saying, p(y=1|x).
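
      One concrete reading of that definition (a sketch, assuming the model returns yprob = p(y=1|x)): confidence in the *predicted* label is the probability assigned to whichever class was predicted, i.e. max(p, 1 - p) in the binary case.

        import numpy as np

        def predicted_label_confidence(yprob):
            # yprob: predicted probability of the positive class per instance.
            # Confidence in the predicted label = probability of whichever
            # class the model actually predicts (p if p >= 0.5, else 1 - p).
            p = np.asarray(yprob)
            return np.maximum(p, 1.0 - p)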

    • @SunilKalmady · 7 months ago

      @anvarkurmukov2438 Thanks for answering. I guess it is a bit about terminology. Under this notion of confidence, overfitted models will confidently make wrong predictions. I was referring to the uncertainty in those predictions, i.e., confidence bounds on the predicted scores. I have since figured out how to estimate those by bootstrapping.
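
      A minimal sketch of that bootstrapping idea (assumptions: inputs are NumPy arrays, `fit_and_predict` is a hypothetical user-supplied helper that retrains the model on a resampled training set and returns test scores; 100 resamples and a 95% interval are arbitrary choices):

        import numpy as np

        def bootstrap_score_bounds(X_train, y_train, X_test, fit_and_predict,
                                   n_boot=100, alpha=0.05, seed=0):
            # Percentile confidence bounds on per-instance predicted scores.
            rng = np.random.default_rng(seed)
            n = len(X_train)
            scores = []
            for _ in range(n_boot):
                idx = rng.integers(0, n, size=n)   # resample rows with replacement
                scores.append(fit_and_predict(X_train[idx], y_train[idx], X_test))
            scores = np.stack(scores)              # shape: (n_boot, n_test)
            lower = np.percentile(scores, 100 * alpha / 2, axis=0)
            upper = np.percentile(scores, 100 * (1 - alpha / 2), axis=0)
            return lower, upper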

  • @drxplorer778 · 5 months ago

    This tutorial saved my ass