To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/Deepia.
You'll also get 20% off an annual premium subscription.
Amazing video👍.
I have a request: could you make a video on text-to-image generation models (Stable Diffusion, DALL·E) to help us understand how they work, from basic to advanced, including the deep mathematical operations?
@MeetWithMR Thanks, that's something I'd love to cover! There are other high-priority topics that I'll focus on, though. :)
This is the optimal amount of math, in my opinion. A lot of videos go through the intuition, but important details go missing when we don't look into the math. It's like learning about a data structure or an algorithm without actually programming it. I think the math is an essential part of understanding a subject more deeply.
Research presenters oftentimes really suck at connecting the math in their work with intuition. I'm glad 3b1b has set the standard in math communication; that's why more and more people are rising to this level when communicating difficult topics.
I completely agree that this amount of math is perfect. Lots of channels don't go in depth or get technical, which makes them useless to watch. I like it when videos are less hand-wavy in their descriptions. Keep up the good work with more math videos like this!!
Amazing video - love that you don’t ignore the maths! I find the math helps my intuition a lot
Wow, honestly I've been struggling to find a good deep learning channel that balances rigor with the right amount of intuition, and this is perfect! Looking forward to the videos!!
I'm a grad student working on theoretical machine learning and I have to say this was a perfect explanation of the Langevin algorithm and score matching. You really found the sweet spot between mathematical rigour and conceptual intuition, and the gorgeous manim animations make it all super easy to visualise. Your channel will blow up soon, trust me
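For anyone curious, here's a minimal sketch of the (unadjusted) Langevin algorithm mentioned here, run on a toy 2D Gaussian mixture of my own choosing (the target, step size, and iteration count are illustrative assumptions, not taken from the video):

```python
# Unadjusted Langevin algorithm (ULA): sample from p(x) using only its
# score, grad log p(x). Toy target: an equal-weight 2D Gaussian mixture.
import numpy as np

def score(x, means, sigma=1.0):
    """grad log p(x) for an equal-weight Gaussian mixture with shared sigma."""
    w = np.exp(-np.sum((x - means) ** 2, axis=1) / (2 * sigma**2))
    w = w / w.sum()                          # responsibility of each component
    return (w[:, None] * (means - x)).sum(axis=0) / sigma**2

rng = np.random.default_rng(0)
means = np.array([[-2.0, 0.0], [2.0, 0.0]])  # two modes on the x-axis
eps = 0.01                                   # step size
x = rng.standard_normal(2)
samples = []
for _ in range(20_000):
    # ULA update: a small gradient step on log p, plus injected Gaussian noise
    x = x + eps * score(x, means) + np.sqrt(2 * eps) * rng.standard_normal(2)
    samples.append(x.copy())
print(np.mean(samples, axis=0))   # roughly [0, 0] once both modes are visited
```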
Great tutorial! Clear and precise explanation of the underlying mathematics without too much jargon.
Absolute cinema!! Everything from the smooth animations to the clear explanations and music.
Amazing explanations with great animations as usual! Thanks a lot for your work. Can't wait to see more.
Thank you for showing the math derivations as well, this is absolutely one of the best channels on YouTube. Thanks a lot
Awesome job man - really enjoyed watching
Thanks, love your work too :)
Excellent video! great depth, explanations, and visualizations, keep it up!
Amazing video! Hats off! Please try to upload videos more frequently.
Great video! Sorry if my comment on your previous video was a little harsh, I've edited it now. Hope it was constructive, and you've done a fantastic job diving a bit deeper! Looking forward to part 3! There are some interesting parallels in the maths to compressive sensing too!
Great video series! It would be very nice if you could add videos dealing with time series data as well, e.g. RNNs!
Incredible animations and amazing explanations. Really thanksss
Fantastic channel man! This is pure gold!
Thanks a lot!
I really enjoyed watching. It is so great that you include math❤
I love your videos, they are the perfect blend!
This is phenomenal, thanks for this dude!
Thank you so much !!!
You just saved me a lot of time on the state-of-the-art review for my master's thesis
Glad it helps! Don't hesitate to share the channel with other students :)
@Deepia-ls2fo I shared it with the teacher
Amazing video ❤
I love computer science and math!
Again a fantastic video! And again, I got interested in the songs you use. Could you please add the names of the songs you use in your videos?
3:19 - I don't understand the justification for the simplification of the displayed equation. It seems to require E[x^ | y] == x^. This would hold if you interpreted x^ as a constant, but how would you justify that?
Which part specifically do you think requires E[x^ | y] == x^? I rewatched the derivations at that timestamp but I can't see what you're referring to
@@StratosFair The equation is expanded and simplified one step later; it's visible at 3:22. For example, the term ||x^||^2 has already lost its E[... | y] part.
@@woowooNeedsFaith Ok I see, actually this is justified because x^ is implicitly assumed to be a function of y, i.e. we have x^ = f(y) for some (unknown) measurable function f (this makes sense if you look at how the algorithm works). Then the equality E[x^ | y] = x^ follows from basic properties of conditional expectation.
If this doesn't feel right to you, try to look up a proof of why the conditional expectation minimizes the L2 loss; I think you should be able to find one on Wikipedia.
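Spelling that argument out (a sketch of the standard computation, under the assumption x^ = f(y)):

```latex
% Since \hat{x} = f(y) is a function of y, it behaves like a constant under
% E[. | y], which is exactly what lets the conditioning drop:
%   E[||\hat{x}||^2 | y] = ||\hat{x}||^2 ,
%   E[<x, \hat{x}> | y]  = <E[x | y], \hat{x}> .
% Expanding the conditional L2 risk and completing the square:
\begin{align*}
\mathbb{E}\bigl[\|x-\hat{x}\|^2 \mid y\bigr]
  &= \mathbb{E}\bigl[\|x\|^2 \mid y\bigr]
     - 2\bigl\langle \mathbb{E}[x \mid y], \hat{x}\bigr\rangle
     + \|\hat{x}\|^2 \\
  &= \mathbb{E}\bigl[\|x\|^2 \mid y\bigr]
     - \bigl\|\mathbb{E}[x \mid y]\bigr\|^2
     + \bigl\|\hat{x} - \mathbb{E}[x \mid y]\bigr\|^2 ,
\end{align*}
% which is minimized over \hat{x} exactly at \hat{x} = \mathbb{E}[x \mid y]:
% the conditional expectation is the MMSE estimator.
```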
8:00 wouldn't applying integration by parts to the second term give xp(y)-xp(y)=0?
Thank you for the great content, keep going 🎉😊
At 4:16, shouldn't it be x instead of y marked in orange on the horizontal axis? Since p(y|x) is (according to the Gaussian hypothesis) a Gaussian centered at x.
x is varying on the horizontal axis, so p(y|x) is centered on y (the Gaussian likelihood only depends on y - x, so it's symmetric in the two)!
But to be fair this is just for the intuition and not meant as a rigorous representation :)
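A quick numerical way to see the same point (a self-contained sketch; the values of y and sigma below are made up, not the video's):

```python
# Viewed as a function of x (the horizontal axis), the Gaussian likelihood
# p(y | x) = N(y; x, sigma^2) depends only on (y - x), so it peaks at x = y.
import numpy as np

y, sigma = 1.5, 0.5              # hypothetical observation / noise level
x = np.linspace(-2.0, 4.0, 601)  # grid over x, step 0.01
likelihood = np.exp(-(y - x) ** 2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
print(x[np.argmax(likelihood)])  # -> 1.5, i.e. the peak sits at x = y
```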
Awesome!
Amazing!!
3:54 where have we assumed Gaussian noise so far?
More bayesian deep learning!
Brilliant! The biggest disappointment was checking your channel and seeing the diffusion model video is yet to come
Emma Langevin getting unadjusted with this one
I do not get it, to be honest. I get it a little more than I ever have. But I still don't get it. I understood some of the other vids more. Here, my trouble is probably to do with statistics: prior, posterior, likelihood. Maybe I'd be best off returning to this after learning those concepts.
Many statistics concepts (such as MAP, MMSE, maximum likelihood, EM, etc.) are everywhere in machine learning, so it'll be well worth it to learn about those
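If it helps, here's a tiny self-contained example of the prior / likelihood / posterior trio for scalar Gaussian denoising (the numbers are my own toy choices, not the video's); since everything here is Gaussian, the MAP and MMSE estimates coincide at the posterior mean:

```python
# Prior:      x ~ N(0, tau^2)         (belief about x before observing)
# Likelihood: y | x ~ N(x, sigma^2)   (how the noisy observation is generated)
# Posterior:  p(x | y), Gaussian too, with a closed-form mean.
import numpy as np

tau, sigma = 2.0, 1.0                # hypothetical prior / noise std-devs
rng = np.random.default_rng(0)
x_true = rng.normal(0.0, tau)        # clean signal drawn from the prior
y = x_true + rng.normal(0.0, sigma)  # noisy observation

# Closed-form posterior mean: shrink y toward the prior mean (0 here)
posterior_mean = tau**2 / (tau**2 + sigma**2) * y

# Monte-Carlo sanity check of E[x | y]: weight prior samples by the likelihood
xs = rng.normal(0.0, tau, size=200_000)
w = np.exp(-(y - xs) ** 2 / (2 * sigma**2))
print(posterior_mean, (w * xs).sum() / w.sum())  # the two should agree closely
```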
I can say that learning the math behind these things is suuuuper helpful. I've spent the last year on statistics and probability theory. Before that, I did not grasp things like this; now I can (if I really try)
Damn I'm dumb as hell
Incorrect! We ARE Groot!
@@szinifer6040 Just know that it took me one full year of reading papers and doing actual research, just to understand these topics.
And I'm sure there are still a lot of things that I don't understand.
So it's ok if you don't get everything after a 20min video, it's more of an invitation to dive deeper. :)
@@Deepia-ls2fo thx bro
gg
I love your content, but sometimes there is too much math and I skip it 😓. I personally prefer the ideas, the resulting solution, and concrete performance examples with loss graphs, rather than derivations
Finding the balance between the math and animations is always the most difficult part.
This time I might have added too much math :C
@@Deepia-ls2fo btw, I'm some sort of YouTuber myself, and if you want a free review of your script or rendered video, you can ask me, since I'm going to watch your video anyway
With due respect to the commenter's thought, I want to say that I love the math inclusion in your videos, and I really think it makes your channel stand out. As a suggestion, maybe splitting videos into parts, some of which include the heavy math while others explain the intuition, would be a good approach.
Respectfully, I think your opinion is in the minority (looking at the other comments). There are so many channels out there giving hand-wavy and purely intuition-based intros to these ML algorithms, so it's nice having someone explain some of the maths behind them (and trust me, he could have added a looooot more maths if he wanted). Also, if you don't want the mathy stuff, you can just skip that part and focus on the other explanations and visualizations ;)
@@Deepia-ls2fo Please keep the math in, I want to understand the actual process, not just the idea of it.