MIT 6.S191 (2023): Robust and Trustworthy Deep Learning
- Published May 21, 2024
- MIT Introduction to Deep Learning 6.S191: Lecture 5
Robust and Trustworthy Deep Learning
Lecturer: Sadhana Lolla (Themis AI, themisai.io)
2023 Edition
For all lectures, slides, and lab materials: introtodeeplearning.com
Lecture Outline
0:00 - Introduction and Themis AI
3:46 - Background
7:29 - Challenges for Robust Deep Learning
8:24 - What is Algorithmic Bias?
14:13 - Class imbalance
16:25 - Latent feature imbalance
20:30 - Debiasing variational autoencoder (DB-VAE)
23:24 - DB-VAE mathematics
27:40 - Uncertainty in deep learning
29:50 - Types of uncertainty in AI
32:48 - Aleatoric vs epistemic uncertainty
33:29 - Estimating aleatoric uncertainty
37:42 - Estimating epistemic uncertainty
44:11 - Evidential deep learning
46:44 - Recap of challenges
47:14 - How Themis AI is transforming risk-awareness of AI
49:30 - Capsa: Open-source risk-aware AI wrapper
51:51 - Unlocking the future of trustworthy AI
Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected! - Science & Technology
Every time I go through one of these lectures, I have this feeling for you: God bless you!
This lecture series is just incredible. Thank you Alexander and all the other instructors for putting this together. I learned so much! You are pushing the boundaries of AI learning!
Thank you for doing this important work!
Very Inspiring Lecture!
Before this, it had not been easy to know where we could make the AI learn better without manually diagnosing the training data.
Thank you for 6.S191 complete course
Please continue this great work.
Also, courses on AI, ML, and data science.
Thanks for sharing!
Amazing: Great future for Themis!
Thank you :)
great, thanks for sharing
Thanks :)
she is very talented
great lecture
I've already complimented the lectures in another video. This is a comment just for the YT algorithm 🙏. Keep up the great work.
33:00
Very clear lecture.
(But maybe the "noise" term could be explained a bit more.)
Where can I find or practice the lab sessions?
Edit: I found it. All lab sessions are on the website.
For the corresponding lab, the capsa module can no longer be found. Has it been removed? Where can I play with it? Thanks
49:30 - Capsa: Open-source risk-aware AI wrapper
It's disappointing to learn that Themis AI has converted Capsa from open source to closed source.
Please also educate me: what should the typical number of training samples per class be for a deep learning network such as YOLO, ResNet, Transformers, etc.?
Ideally, all classes should have an equal number of training samples for any deep learning network, but such situations are rare in practice. Also, the number of training samples should be as high as possible so that the network can learn the best generalization for the problem it is trying to solve.
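When the classes can't be balanced, a common mitigation (not specific to this lecture) is to reweight the loss by inverse class frequency so that rare classes still contribute. A minimal sketch, assuming a hypothetical binary label array:

```python
import numpy as np

# Hypothetical imbalanced label set: 900 examples of class 0, 100 of class 1.
labels = np.array([0] * 900 + [1] * 100)

# Inverse-frequency class weights: rarer classes get larger weights,
# so each class contributes roughly equally to the loss.
classes, counts = np.unique(labels, return_counts=True)
weights = len(labels) / (len(classes) * counts)

print(dict(zip(classes.tolist(), weights.round(2).tolist())))
# {0: 0.56, 1: 5.0}
```

These weights would then be passed to the loss function (most frameworks accept per-class weights) so the minority class is not drowned out.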
Where is the lecture on diffusion models?
I didn't quite get it from the intro. Was Alexander simply reading from a script, or is he part of Themis AI?
I'm the founder and CTO
How do I come up with the variance of a single data point? (see @35:56) How does the variance of a single data point even make sense?
That's where my brain crashed. I hoped that someone had answered this in the comment section, but I couldn't find anyone besides your comment.
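The usual resolution of this question (in the standard formulation of aleatoric-uncertainty estimation, not quoting the lecture verbatim): the variance of a single point is not computed from that one point. The network predicts a variance as a function of the input, and that function is learned across the whole dataset by minimizing a Gaussian negative log-likelihood. A minimal sketch of that per-sample loss, with hypothetical numbers:

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Per-sample negative log-likelihood of y under N(mu, exp(log_var)).
    Networks typically predict log-variance for numerical stability."""
    var = np.exp(log_var)
    return 0.5 * (log_var + (y - mu) ** 2 / var)

# A confident (low-variance) wrong prediction is penalized more than
# an uncertain (high-variance) prediction with the same error.
confident = gaussian_nll(y=2.0, mu=0.0, log_var=np.log(0.1))
hedged = gaussian_nll(y=2.0, mu=0.0, log_var=np.log(4.0))
print(confident > hedged)  # True
```

Because this loss trades off the log-variance penalty against the scaled squared error, the network is driven to output large variance exactly in regions where its residuals are large, which is what "the variance of a data point" means here.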
How in the world is she just an undergraduate 😱
Having a good overview of a topic is often something you learn before learning the complexity.
@KamillaMirabelle Sorry?
@@convolutionalnn2582 meaning that a person with a Bachelor degree would know enough to talk about the topic and understanding which problems can occur and i big strokes way.. the complex answer to why is often what you learn at a master degree..
@@KamillaMirabelle What she know is really a complex problem and could even taught the entire undergraduate...She is great and intelligent
@convolutionalnn2582 I have most of a Bachelor's in theoretical mathematics from the University of Copenhagen, and I understand the problem at the same level of complexity, as do most of my fellow students. I don't know your background, but I am sure that, given the right teachers and a little passion for the topic, you would be, if not as good as her, then close behind.
Can anyone explain why high variance means noise in the data? The variance of a point depends on whether its x value is far from or near the mean of all the data, while noise, as I understand it, could mean the same x value having different y values. So how do we detect noise by whether the variance is high or not? This issue concerns aleatoric uncertainty.
High variance doesn't mean noise per se; rather, it means the model cannot fit that high-variance region, and this is quantified through the variance output variable. To elaborate, according to my understanding: the training data contains different groups, and different groups have different levels of variance (as shown in the figure at 33:07). If the model cannot completely fit the variance of a certain group, it gives poor results there, which is reflected and confirmed by the variance output. In effect the model says: "Here is my prediction, and here is its variance score. If the score is high, the test point came from a high-variance group in the training set that I failed to fit."
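The lecture's other uncertainty type, epistemic uncertainty, is often estimated through disagreement between independently trained models (deep ensembles or, in the lecture, dropout-based sampling). A toy sketch of the ensemble idea, using bootstrap polynomial fits as a hypothetical stand-in for neural networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data on [0, 1]: y = sin(2*pi*x) + noise.
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.shape)

# A small "deep ensemble" stand-in: cubic fits on bootstrap resamples.
# Disagreement between members approximates epistemic uncertainty.
members = []
for _ in range(10):
    idx = rng.integers(0, len(x), len(x))
    members.append(np.polyfit(x[idx], y[idx], deg=3))

def ensemble_std(x0):
    """Spread of the ensemble's predictions at input x0."""
    preds = np.array([np.polyval(coeffs, x0) for coeffs in members])
    return preds.std()

# Far outside the training range, the members disagree much more.
print(ensemble_std(0.5) < ensemble_std(3.0))  # True
```

This is the complementary picture to the variance-output discussion above: aleatoric uncertainty is predicted by one model as a function of the input, while epistemic uncertainty shows up as disagreement across models and grows away from the training data.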
what would i need to do to become an AI safety engineer? I already have a CS degree
This doesn't seem to be very open source... yet.
Yes, the capsa package has been removed from PyPI.
Thank you :)
40:00