at 5:58 " ... the variance comes from the network" This is not right. In DDPM, the authors made it constant and then in later studies people started to make those learnable as well.
@@moeinshariatnia59 In that formulation it is. Later in video I mentioned that it’s not necessary and model has to only predict the noise sampled from zero mean unit variance.
Nice explanation, thank you.
Very clear explanation. Thank you.
You helped me a lot! Thanks! Please keep going~
@@jiananwang2681 Thanks for watching🙂
Awesome, Soroush. Nice and clear explanation.
@@armanhatami5706 Thanks 😃
You are great, please keep going.
Thanks Ahmed! Appreciate it.
Awesome, I'd love to see a video about how high-fidelity VAEs work and how they're trained.
Thank you, very clear.
Great video, keep it up!
@@vidaadelimosabeb6689 Thanks😃
Best❤
Nice short explanation!
Thanks!
Wow! Great video, thanks a lot.
@@HassanHamidi-v8s Thanks for watching it 🙂
Brilliant! thank you!!
@@oblivitus. Thanks for watching
at 5:58 " ... the variance comes from the network"
This is not right. In DDPM, the authors made it constant and then in later studies people started to make those learnable as well.
@@moeinshariatnia59 In that formulation it is. Later in the video I mention that it's not necessary and that the model only has to predict the noise, which is sampled from a zero-mean, unit-variance distribution.
@@soroushmehraban It's so funny you delete the comment instead of correcting the mistake :))
@@moeinshariatnia59 I didn't delete anything. It wasn't a mistake. Sorry if my explanation was ambiguous.
It's not a mistake. That's simply the general formulation; they then mention that they decided to keep it fixed.
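For context, here is a brief sketch of the formulation being debated in this thread, written in standard DDPM notation (a summary of Ho et al., 2020, not a quote from the video). In the general form, the reverse step is a Gaussian whose mean and variance could both come from the network:

\[
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\big)
\]

Ho et al. fix the variance to $\sigma_t^2 I$ (with $\sigma_t^2 = \beta_t$ or $\tilde{\beta}_t$) and reparameterize the mean so that the network only predicts the noise $\epsilon \sim \mathcal{N}(0, I)$ added in the forward process:

\[
\mu_\theta(x_t, t) = \frac{1}{\sqrt{\alpha_t}} \left( x_t - \frac{\beta_t}{\sqrt{1 - \bar{\alpha}_t}}\, \epsilon_\theta(x_t, t) \right)
\]

Later work such as Improved DDPM (Nichol & Dhariwal, 2021) makes $\Sigma_\theta$ learnable as well, which is the point both commenters agree on.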
clear!
@@yipengsun8624 Thanks!
I love you