Why Does Diffusion Work Better than Auto-Regression?
- Published 26 Jun 2024
- Have you ever wondered how generative AI actually works? Well, the short answer is: in exactly the same way as regular AI!
In this video I break down the state of the art in generative AI - Auto-regressors and Denoising Diffusion models - and explain how this seemingly magical technology is all the result of curve fitting, like the rest of machine learning.
Come learn the differences (and similarities!) between auto-regression and diffusion, why these methods are needed to perform generation of complex natural data, and why diffusion models work better for image generation but are not used for text generation.
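The "curve fitting" framing in the description can be made concrete with a toy sketch of auto-regression: train a model to predict the next piece of data from the data so far, then generate by feeding the model its own outputs. (The averaging-table "model" below is a deliberately trivial stand-in for a neural network; all names are illustrative, not from the video.)

```python
import numpy as np

def train_next_item_model(sequences):
    """Fit a trivial 'model': for each observed item, record the average
    item that followed it. Real auto-regressors fit a neural network,
    but the principle is the same curve fitting."""
    table = {}
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            table.setdefault(prev, []).append(nxt)
    return {k: float(np.mean(v)) for k, v in table.items()}

def generate(model, start, length):
    """Auto-regression: repeatedly feed the model its own last output."""
    out = [start]
    for _ in range(length - 1):
        out.append(model.get(out[-1], out[-1]))
    return out
```

The generation loop is the part that scales: replace the lookup table with a trained network and this is, schematically, how a language model writes text one token at a time.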
The following generative models were featured as demos in this video:
Images: Adobe Firefly (www.adobe.com/products/firefl...)
Text: ChatGPT (chat.openai.com)
Audio: Suno.ai (suno.ai)
Code: Gemini (gemini.google.com/app)
Video: Lumiere (Lumiere-video.github.io)
Chapters:
00:00 Intro to Generative AI
02:40 Why Naïve Generation Doesn't Work
03:52 Auto-regression
08:32 Generalized Auto-regression
11:43 Denoising Diffusion
14:19 Optimizations
14:30 Re-using Models and Causal Architectures
16:35 Diffusion Models Predict the Noise Instead of the Image
18:19 Conditional Generation
19:08 Classifier-free Guidance
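As a companion to the 16:35 chapter ("Diffusion Models Predict the Noise Instead of the Image"), here is a minimal numpy sketch of that training objective, with a plain linear map standing in for the real denoising network. The toy data, noise schedule, and learning rate are all illustrative assumptions, not the video's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(size=(256, 8))   # toy "images": 8-dimensional vectors
W = np.zeros((8, 8))                # stand-in linear denoiser

for _ in range(500):
    t = rng.uniform(0.1, 0.9, size=(256, 1))  # per-sample noise level
    noise = rng.normal(size=clean.shape)
    noisy = (1 - t) * clean + t * noise       # corrupt the training data
    pred_noise = noisy @ W                    # the model predicts the noise...
    grad = noisy.T @ (pred_noise - noise) / len(clean)
    W -= 0.1 * grad                           # ...trained by plain regression

# At sampling time, the clean image is recovered from the predicted noise:
# clean_estimate = (noisy - t * pred_noise) / (1 - t)
```

Predicting the noise rather than the image is a reparameterization: the two targets are related by the corruption equation, but the noise target keeps the regression well-scaled at every noise level.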
At first I thought "oh, another random video explaining the same basics and not adding anything new", but I was so wrong. It's an incredibly clear explanation of diffusion, and starting with the basics makes the full picture much clearer. Thank you for the video!
You should check the rest of his videos. All are of sublime quality
> makes the full picture much clearer
hehe did it help denoise
I mean, it's a bit oversimplified...
Diffusion these days, for example, could implement any number of methods.
For a more advanced technical perspective, you could join this server where we research and study all forms of AI, especially generative AI prompting, theoretical ways to run AI neural network computation, and tandems such as quantum networks. We also help suggest and invent theoretical applications of the AI, and ways to enhance the systems, etc.
Next video will be on Mamba/SSM/Linear RNNs!
Great! Also maybe think about the trade-off between scaling and incremental improvements, in case your perspective is that LLMs always approximate the dataset and therefore memorize rather than show any "emergent capabilities" — i.e. that ChatGPT also does "only" curve fitting.
I am a student pursuing a degree in AI, and we want more of your videos on even the simplest concepts in AI. Trust me, this channel will be a huge deal in the near future. Good luck!!
Well take my subscription then!!1111
Where did you learn all this? Have you also tried coding these yourself?
Kinda sorry to my professors and seniors, but this is the single best explanation of the logic behind each model. A dozen-minute video > 2 years of confusion in university.
Man, I love the fact that you present the fundamental idea with an intuition-first approach, and then discuss the optimizations.
This is a much better explanation than the diffusion paper itself. They just went all around variational inference to get the same result!
The clearest and most concise explanation of diffusion models I've seen so far. Well done.
This genius only makes videos occasionally, but they are not to be missed.
absolutely true
Such an underrated video, I love how you went from the basic concepts to complex ones and didn't just explain how it works but also the reason why other methods are not as good/efficient.
I will definitely be looking forward to more of your content!
This is literally the best explanation of the diffusion models I have ever seen.
The way you tell the story is fantastic! I am surprised that all AI/ML books are so terrible at didactics. We should always start at the intuition, the big picture, the motivation. The math comes later when the intuition is clear.
I have seen the "math-first, intuition later or never" approach in a lot of teaching. High school and college math, physics, and programming classes are rife with this approach. I agree it's sub-optimal for most students. I have some vague ideas about why this approach perpetuates itself, and I have seen a lot of gatekeeping around learning in a bottom-up way. It's lovely to see some educators like Algorithmic Simplicity and 3Blue1Brown break things down in a much more intuitive way that then allows us to understand the maths.
@@dustinandrews89019 I think the main reason is time. Most university courses are 8 weeks (in my case), and there simply isn't enough time to explain all the details of the theory behind, say, electronics or math. My learning is terrible when I am just given a formula for a particular problem; it's useless to me. Instead I end up spending days understanding who came up with the formula and why, before I derive it myself — and then I will never forget it, since it becomes part of my intuition.
Another reason I've noticed is, sadly, a lack of deeper understanding from some teachers. They themselves only memorized the solution to the problem, but they don't really fully understand the problem or the solution; in my opinion they are unfit for teaching. A teacher should never be worried about a student asking why.
I have trained my own diffusion models and it required me to do a deep dive of the literature. This is hands down the best video on the subject and covers so much helpful context that makes understanding diffusion models so much easier. I applaud your hard work, you have earned a subscriber!
You realize these models contain their dataset, right? And that’s the only way they can work.
This is seriously one of the best explainer videos i've ever seen. I've spent a long time trying to understand diffusion models and not a single video has come close to this one
Holy shit, at 11:03 I suddenly realised what you were cooking! I've been trying to find a way to articulate this interesting relationship between autoregression and diffusion for ages (my thesis developed diffusion models for tabular data). This is such a brilliantly-visualised and intuitively explained video! Well done. And the classifier-free guidance explanation you threw in at the end has got to be some of the most high-ROI intuition pumping I've seen on YouTube.
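For readers who haven't reached the 19:08 chapter yet, the classifier-free guidance trick this comment praises amounts to one line of arithmetic over the model's two noise predictions. A hedged numpy sketch (function name and toy vectors are illustrative):

```python
import numpy as np

def classifier_free_guidance(pred_uncond, pred_cond, guidance_scale):
    """Combine the model's unconditional and prompt-conditioned noise
    predictions. scale = 1 reproduces ordinary conditional sampling;
    scale > 1 extrapolates further in the direction the condition pulls,
    strengthening prompt adherence at the cost of diversity."""
    return pred_uncond + guidance_scale * (pred_cond - pred_uncond)
```

The same model produces both predictions — once with the prompt and once with an empty prompt — which is why no separate classifier is needed.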
This is a great explanation on how image decoders work. I haven't seen this approach and narrative direction yet.
This is now my go-to reference for explaining it to people who have no idea!
You truly understand how to simplify... to engage our imagination... to employ naive thoughts and ideas, to make comparisons that bring across deeper, more core principles and concepts, and to make the subject far easier to grasp and build an intuition for. Algorithmic Simplicity indeed... thank you for your style of presentation and teaching. Love it, love it... you make me know what question I want to ask but didn't know I wanted to ask. YouTube needs your contribution in ML education. Please don't forget that.
Kudos for an incredibly intuitive explanation! Really loved the visual representations too!!
this is insane. I feel bad for getting this level of content for free
Very good job.
My suggestion is that you explain more about how it actually works: how the model learns to understand complete scenery just from text prompts.
This could fill its own video.
It would also be very nice to have a video about Diffusion Transformers, which OpenAI's Sora probably is.
It could also be great to have a video about the paper "Learning in High Dimension Always Amounts to Extrapolation".
best wishes
Thanks for the suggestions, I was planning to make a video about why neural networks generalize outside their training set from the perspective of algorithmic complexity. That paper "Learning in High Dimension Always Amounts to Extrapolation" essentially argues that the interpolation vs extrapolation distinction is meaningless for high dimensional data, and I agree, I don't think it is worth talking about interpolation/extrapolation at all when explaining neural network generalization.
@@algorithmicsimplicity Yes, true. It would be great also because this links back to the LLM discussions: whether scaling up Transformers actually brings "emergent capabilities", or whether this is more simply and less magically explainable by extrapolation.
Or in other words: people tend to believe either that deep learning architectures like Transformers are only approximating their training dataset, or that seemingly unexplainable or unexpected capabilities emerge while scaling.
I believe that extrapolation alone explains really well why LLMs work so well, especially when scaled up, AND that LLMs "just" approximate their training data (curve fitting). This is why I brought this up ;)
I have a graduate degree in this shit and this is by far the clearest explanation of diffusion I've seen. Have you thought about doing a video running over the NN Zoo? I've used that as a starting point for lectures on NN and people seem to really connect with that paradigm
This has been on my watch later for 3 months. Finally got to watching it, glad I did. This is an exceptional explanation of the technologies at play here.
wow, this is such an amazing resource. I'm glad I stuck around. This is literally the first time this is all making sense to me.
This channel is gold, I'm glad I've randomly stumbled across one of your vids
Diffusion doesn't necessarily work better than auto-regression. The "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction" paper introduces an architecture they call VAR that upscales noise using an AR model and this currently out-performs all diffusion models in terms of speed and accuracy.
This is refreshing to watch in a sea of people who don't know what they're talking about and decide to make "educational" videos on the subject anyways. The simplifications are often harmful.
This is actually one of the best, if not the best, deep learning related videos on YouTube
Thanks for your efforts
So glad this came across my recommended feed! Fantastic explanation and definitely cleared up a lot of confusion I had around diffusion models.
This must be one of the best and concise explanations I've seen!
Thank you for the explanation. I already knew a little bit about diffusion, but this is exactly the way I'd hope to learn: start from the simplest examples (usually historical) and progressively advance, explaining each optimisation!
Great video. Love the pacing and how you distilled the material into such an easy-to-watch video. Great job!
Bruh, I love your content. Other channels/videos usually explain general knowledge that can easily be found on the internet, but you go deeper into the intrinsic aspects of how the stuff works. This video, and your video about transformers, are really good.
Wow, it requires really deep understanding and a lot of work to make videos this clear that are also so correct and insightful. Very impressive!
This has helped me so much wrapping my head around this whole subject!
Thank you for now, and the future!
I really enjoyed this video!! took a lot of notes while watching it too. you have a god tier ability to explain concepts in an easy to follow way
Never knew YouTube could randomly suggest videos like these. This was mind-blowing. The way you teach is a work of art.
A person with very little background can understand what he describes here. Commenting so YouTube recommends it to others.
wonderful video! really good one
Thanks for taking the time and effort of making and sharing these videos and Your knowledge.
Kudos and best regards
If there will be a longer version of this video, it might be worth mentioning VAE as well.
Thanks for the suggestion.
Fantastic explanation. Very intuitive
This was a good watch, thank you :)
Great video....already waiting for your next video
Bro, this is amazing!!! Your explanation is so clear, like it
Very nicely and simply explained! Keep it up
very nice visual explanations
You’re him 🙌🏽. Thank you so much. Getting this kind of information, or rather this level of explanation, is not easy with all the “BREAKING AI NEWS!😮‼️” on YouTube now.
What a great explanation!
Wow, fantastic video. Such clear explanations. I learned a great deal from this. Thank you so much!
The best explanation I've seen. Great work.
So simple ! Thank you.
This is a great video. I have watched videos in the past (years ago) that talked about auto-regression, and more lately about diffusion, but it's nice to see why and how there was such a jump between the two. Amazing! However, I feel this video is a little incomplete since there was no mention of the enhancer model that "cleans up" the final generated image. This enhancing model is able to create a larger image while cleaning up the six fingers gen AI is so famous for. While not technically part of the diffusion process (because it has no random noise), it is a valuable addition to image gen for anyone trying to build their own model.
The best explanation I have found on the internet so far. 👍
such complete explanations, keep it up thank you
This is an amazing quality video! The best conceptual video on diffusion in AI I've ever seen.
Thanks for making it!
I'd love to see you cover RNNs.
Hands down the best video that explains how these models work. I love that you explain these topics in a way that resembles how the researchers created these models. Your video shows the thinking process behind them, and combined with great animated examples, it is so easy to understand. You really went all out. If only YouTube promoted these kinds of videos instead of brainrot, low-quality videos made by inexperienced teenagers.
Awesome video! I would love to see a video about graph neural networks
Soooo Good!!! Thanks for making it!!!!
Great video and amazing content quality!
Thank you! This is a great explanation❤
Awesome explanation 👏
I finally understand how models like Stable Diffusion work now! I tried understanding them before but got lost at the equation (17:50), but this video describes that equation very simply. Thank you!
A very good job, I have deepened my understanding of generative AI
Very good video. I got to know the straightforward reasons why the diffusion idea emerged and why diffusion is intrinsically better than the auto-regression algorithm.
Amazing video !
Amazing content I now want to implement this
great work
Nice explanations. Although I already knew about diffusion, the examples building from the simplest case up to final diffusion were a really nice touch.
Wow, that was a fantastic explanation.
Brilliant explanation. Thank you very much
very good explanation
great video... loved it!
Love the conclusion
Great explanations, thank you!
Well done!
what a great rare content!
It was so easy to understand 👌🏻👌🏻
Great explanation!👍👍, I personally would like to see a video observing all major types of neural nets with their distinctions, specifics, advantages, disadvantages etc. the author explains very well 👏👏
Whoa, what a good explanation. I always was wondering how these big NNs like ChatGPT and DALL-E are working. Thank you
Thank you. This video is wonderful
So good!
Great video, congrats.
good stuff, thanks
SoME2 really brought out some good channels
Great video!
Great explanation
Very good video. Thank you
Solid video!
Still feels like magic to me 🙌🙌
I like to think of ML as a funky calculator. Instead of a calculator where you give it inputs and an operation and it gives you an output, you give it inputs and outputs and it gives you an operation.
You said it's like curve fitting, which is the same thing, but I like thinking of the words "funky calculator" because why not
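The "funky calculator" analogy is exactly what a library curve fitter does: hand it inputs and outputs, and it hands back the operation. A tiny numpy illustration (the data here is made up — samples of y = 2x + 1 plus a little noise):

```python
import numpy as np

# Inputs and outputs for the hidden "operation" y = 2x + 1, plus noise.
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + np.random.default_rng(0).normal(scale=0.1, size=x.size)

# Give the "funky calculator" inputs and outputs; it returns the operation.
slope, intercept = np.polyfit(x, y, deg=1)
```

Deep learning is the same trade, just with millions of parameters instead of two.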
Great video! Focus on the right elements.
I get it now! 🎉 thanks!
Thank you very much.
I enjoyed the first part, the first 10 seconds.
After that, there are too many shortcuts in the explanations, and I struggled to understand them well enough to explain it again to myself. Still, I subscribed.
As for suggestions for other videos, I'll check whether you have explained the U-Net already. If not, I'd appreciate the same kind of explanation for it.
i now understand things, thanks!
thanks for explaining
Just subscribed. Great video
I think it would help to mention that the auto-regressors may be viewing the image as a sequence of pixels (RGB vectors). Overall excellent video, extremely intuitive.
In general, auto-regressors do not view images as a sequence. For example, PixelCNN uses convolutional layers and treats inputs as 2d images. Only sequential models such as recurrent neural networks would view the image as a sequence.
@@algorithmicsimplicity of course, but I feel mentioning it may help with intuition as you’re walking through pixel by pixel image generation
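The point under discussion — that pixel-by-pixel generation only needs an *ordering*, not a sequence architecture — can be sketched as a raster-scan causal mask, the constraint that PixelCNN's masked convolutions enforce. (This standalone mask function is an illustrative sketch, not PixelCNN's actual code.)

```python
import numpy as np

def raster_scan_mask(h, w):
    """Boolean mask saying which pixels a model may look at when
    predicting pixel (i, j) in raster-scan order: everything strictly
    before it. mask[i, j, k, l] is True iff pixel (k, l) precedes (i, j),
    regardless of whether the model is a sequence model or a 2-D CNN."""
    idx = np.arange(h * w).reshape(h, w)          # raster-scan position
    return idx[..., None, None] > idx[None, None, ...]  # shape (h, w, h, w)
```

The first pixel may look at nothing, the last at everything before it — the same causal structure a text auto-regressor imposes on tokens.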
Great visualisation! Good job!
Maybe next video on LoRA or ControlNet ?
Great suggestions, I will put them on my TODO list.
+1 for LoRA
Great video
Excellent.
So do the recent large world model breakthroughs of Sora, Luma, Runway alpha imply that we've returned to auto regressive? Are they a combo of the two? Amazing video, would love to hear your thoughts!
From what little they have released publicly, it seems that they are simply diffusion models applied to videos, i.e. they treat videos as a collection of frames, add noise to all frames, take all noisy frames as input and try to predict all clean frames. I don't think there is any auto-regression done, but maybe that will change when they start generating longer videos.
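The corruption step this reply describes — the same noise process applied to every frame, with the whole noisy stack handed to the model at once — can be sketched in a few lines of numpy (shapes and the noise schedule are illustrative assumptions; the denoiser itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_video(frames, t):
    """Noise an entire video at once: every frame gets the same noise
    level t, so a (hypothetical) denoiser would see all noisy frames
    together and predict all clean frames jointly."""
    noise = rng.normal(size=frames.shape)
    return (1 - t) * frames + t * noise

video = rng.uniform(size=(16, 8, 8, 3))  # 16 frames of 8x8 RGB
noisy = corrupt_video(video, t=0.5)
```

Because the frames are denoised jointly rather than predicted one after another, this is still diffusion, not auto-regression over time.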
In your neural network animations, the traveling highlight starts from the image, goes through the neural net, then to the output pixel. I understand this as information traveling forward. When the highlights reverse direction, does this represent back propagation at the regressed value of the pixel? Great video by the way!
Yep it's just meant to demonstrate the weights in the network changing based on the error in the predicted value.
Video is a banger