MIT 6.S191: Uncertainty in Deep Learning
- Published May 27, 2022
- MIT Introduction to Deep Learning 6.S191: Lecture 10
Uncertainty in Deep Learning
Lecturer: Jasper Snoek (Research Scientist, Google Brain)
Google Brain
January 2022
For all lectures, slides, and lab materials: introtodeeplearning.com
Lecture Outline - coming soon!
Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
Thank you MIT
OOD means x was not even in the training set. P_test(y|x) ≠ P_train(y|x) may also mean wrong classification, or an open-set input, i.e. one not seen during training (a feature vector outside the bounds of the vectors in the training set).
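A minimal sketch of one common baseline for flagging such inputs, under the assumption of a classifier that outputs logits: score each test point by its maximum softmax probability and treat low scores as possibly OOD. The `model` and threshold below are hypothetical placeholders.

```python
import numpy as np

def max_softmax_score(logits):
    """Maximum softmax probability (MSP) baseline: low scores suggest the
    input may be out-of-distribution for the trained classifier."""
    z = logits - logits.max(axis=-1, keepdims=True)          # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

# Hypothetical usage: `model` is any trained classifier returning logits.
# logits = model(x_test)                  # shape (n_samples, n_classes)
# scores = max_softmax_score(logits)
# flagged = scores < 0.5                  # threshold tuned on a validation set
```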
In the deep ensemble method, which particular classifier does the uncertainty correspond to? Is the assumption that the resulting uncertainty corresponds to the architecture with near-optimal hyperparameters? It rationally should, but overall it sounds very hand-wavy. On top of that, it is the uncertainty of a classifier evaluated on the training domain. How does it change on an OOD dataset?
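For what it's worth, in deep ensembles the uncertainty is not tied to any single member: each network is trained independently from a different random initialization, the predictive distribution is the average of the members' softmax outputs, and disagreement between members is what broadens that distribution on OOD inputs. A minimal sketch under those assumptions (the member models and data are placeholders):

```python
import numpy as np

def ensemble_predict(members, x):
    """Average the class probabilities of independently trained ensemble members.
    Disagreement between members shows up as a flatter averaged distribution
    (higher predictive entropy), which tends to grow on OOD inputs."""
    probs = np.stack([m.predict_proba(x) for m in members])   # (M, n, n_classes)
    mean_probs = probs.mean(axis=0)
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)
    return mean_probs, entropy

# Hypothetical usage: `members` are, say, 5 copies of the same architecture
# trained from different random seeds on the same training data.
# mean_probs, entropy = ensemble_predict(members, x_test)
```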
Hey Alex. Hope you're well.
Is the 2023 course going to be free too?
If yes, when does it go live?
Thanks! We are actually announcing the premiere today! The first release will be March 10 and a new lecture will be released every Friday at 10am ET.
Please post the slides as indicated in the URL descriptor. Thank you.
You can find the slides from the NeurIPS tutorial here: docs.google.com/presentation/d/1savivnNqKtYgPzxrqQU8w_sObx1t0Ahq76gZFNTo960
Thank you )) MIT ))))))
Can someone answer my basic question? The speaker defines confidence as the predicted probability of correctness. I am guessing this is NOT the same as yprob, which is the predicted probability of the positive class that a trained model returns for every test instance. So how does one get an estimate of the confidence?
If you are referring to 13:30, then for binary classification the confidence is exactly what you are saying, p(y=1|x).
@anvarkurmukov2438 Thanks for answering. I guess it is a bit about terminology. In this notion of confidence, overfitted models will confidently make wrong predictions. I was referring to the uncertainty in those predictions, i.e. the confidence bounds of the predicted scores. I have since figured out how to estimate those by bootstrapping.
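A minimal sketch of that bootstrapping idea, assuming a scikit-learn-style classifier (the model, data arrays, and parameters here are placeholders): refit the model on resamples of the training data and take percentiles of the resulting predicted scores per test point.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression  # stand-in model

def bootstrap_score_interval(model, X_train, y_train, X_test,
                             n_boot=200, alpha=0.05, seed=0):
    """Percentile bootstrap interval for the predicted positive-class score."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    scores = np.empty((n_boot, len(X_test)))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)              # resample with replacement
        m = clone(model).fit(X_train[idx], y_train[idx])
        scores[b] = m.predict_proba(X_test)[:, 1]
    lo = np.percentile(scores, 100 * alpha / 2, axis=0)
    hi = np.percentile(scores, 100 * (1 - alpha / 2), axis=0)
    return lo, hi

# Hypothetical usage:
# lo, hi = bootstrap_score_interval(LogisticRegression(max_iter=1000),
#                                   X_train, y_train, X_test)
```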
This tutorial saved my ass