To me, the critical and effective way of educating and enlightening is the step-by-step reasoning coupled with powerful animations. This video has certainly achieved that. Thanks so much!
Thank you for your comment !
Legendary algorithm pull. I love educational content like this. Road to 1M!
Thanks :)
Excellent pace and choice of words. A video on UNET would be great
Awesome video and animations, bro. It's so amazing!! Keep making more videos, I'll stay tuned!
Thank you, I'm not planning on stopping yet :)
Thank you very much for your videos. I am waiting for the next one about the VAE.
Thanks, hope I can post it this month :)
Clear and concise explanations, awesome!
@@gabberwhacky Thanks
This channel is so underrated. Amazing explanation!
Thanks :)
Nice explanation but I think two key aspects are missing (maybe planned to show up in later videos):
1. the connection to transformers.
2. the fact that latent space allows you to make two models speak the same language (like the idea of CLIP and how it's used in DALL·E)
Hi, thank you for the feedback ! Indeed these aspects are very important in modern architectures, but I feel like I would need to introduce a lot of other concepts to get there.
It's definitely something I'll cover in future videos.
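As a toy illustration of the shared-latent-space idea raised in the comment, assuming NumPy: two encoders (say, one for images and one for text) map into the same space, so similarity can be compared directly with a cosine score. The embedding vectors and caption labels below are invented for illustration, not taken from a real CLIP model.

```python
import numpy as np

# Hypothetical embeddings: pretend an image encoder and a text encoder
# both map into the same 4-dimensional shared latent space (the CLIP idea).
image_emb = np.array([0.9, 0.1, -0.3, 0.2])
text_emb_cat = np.array([0.8, 0.2, -0.2, 0.1])   # caption: "a cat"
text_emb_car = np.array([-0.5, 0.9, 0.4, -0.7])  # caption: "a car"

def cosine(a, b):
    """Cosine similarity between two vectors in the shared space."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The matching caption scores higher because its vector points the same way
# as the image's vector in the shared latent space.
assert cosine(image_emb, text_emb_cat) > cosine(image_emb, text_emb_car)
```

Because both modalities live in one space, ranking captions for an image reduces to comparing these scores.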
Awesome content.❤ The reasoning and intricate animation are mindblowing. Eagerly waiting for VAE video 😊
Thanks !
Thanks for making such an intuitive and insightful video! Cant wait for more content from this channel!
@@thmcass8027 Thanks !
i just found your channel and fell in love with it. thank you !
Thanks for the kind words !
Very good and easy to understand content, I love it when channels like yours make hard concepts that easy to understand.
Thank you !
Good job man! Nice graphical representations. Easy to follow.
Thank you so much !
What would you do if you wanted to find a middle between two points in latent space if simple interpolation produces garbage results?
Thanks for the comment ! In fact, taking a simple interpolation is perfectly fine when your latent space is "in order".
It should have some properties like being somewhat continuous, which is not imposed by a simple autoencoder. However VAEs do have such a latent space.
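A minimal sketch of the interpolation the reply describes, assuming NumPy; the latent vectors here are made up for illustration (in practice an encoder would produce them):

```python
import numpy as np

# Hypothetical latent codes for two inputs, e.g. produced by a VAE encoder.
z_a = np.array([0.2, -1.3, 0.7])
z_b = np.array([1.1, 0.4, -0.5])

def lerp(z1, z2, t):
    """Linear interpolation between two latent points, t in [0, 1]."""
    return (1 - t) * z1 + t * z2

# The midpoint between the two latent codes; decoding it only gives a
# sensible output if the latent space is reasonably continuous (as in a VAE).
z_mid = lerp(z_a, z_b, 0.5)
```

Decoding `z_mid` with a plain autoencoder may still produce garbage, which is exactly the continuity property the reply says VAEs impose and simple autoencoders do not.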
Great video! Waiting for the one on VAEs and other topics
@@aryankashyap7194 Thanks, it will probably be up before the end of the summer :)
this channel is a hidden gem!
Thank you !
Thank you! You made it lucid.
Thank you for your comment !
Perfect animated and well explained. Thank you 👍 subscribed 😊
Thank you !
Could you make a video on common dimensionality reduction methods like PCA and projection (linear discriminants), etc.? I've always been interested in when one should be applied but not the others. Anyway, nice video, very underrated! Deserves more exposure! T^T
Thank you ! Yep, that's the plan for the very next video: it will be an explanation of how several visualization methods work; there will probably be PCA, t-SNE and UMAP.
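For the PCA part of that plan, here is a hedged sketch using plain NumPy; the toy data and the SVD-based projection are assumptions for illustration, not the video's own code:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))  # toy dataset: 100 samples, 5 features

# Center the data, then project onto the top-2 principal components.
# The rows of Vt from the SVD are the principal directions, ordered by
# decreasing singular value (i.e. decreasing explained variance).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_2d = Xc @ Vt[:2].T  # 2-D coordinates, ready for a scatter plot

print(X_2d.shape)  # (100, 2)
```

Unlike t-SNE and UMAP, this projection is linear, so distances along each axis remain directly interpretable.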
Great video. I knew about encoders from the transformer model, where the optimization criterion for the embedding is the output of the decoder for the classification/generation task, measured by e.g. cross-entropy loss, and I know about word2vec, where the optimization criterion is the dot-product similarity of co-occurring words. I did not know that in autoencoders the optimization criterion is minimizing the loss over reconstructing the original input. Nice.
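A small sketch of that reconstruction criterion, assuming NumPy and a purely linear encoder/decoder; the dimensions, learning rate, and step count are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(64, 8))                # toy data: 64 samples, 8 features
W_enc = rng.normal(scale=0.1, size=(8, 3))  # encoder weights: 8 -> 3 (latent)
W_dec = rng.normal(scale=0.1, size=(3, 8))  # decoder weights: 3 -> 8

def mse(a, b):
    """Mean-squared reconstruction loss."""
    return np.mean((a - b) ** 2)

lr = 0.05
loss_start = mse(X @ W_enc @ W_dec, X)
for _ in range(200):
    Z = X @ W_enc        # encode into the latent space
    X_hat = Z @ W_dec    # decode back to input space
    err = X_hat - X      # reconstruction error
    # Gradients of mse(X_hat, X) with respect to the two weight matrices
    g_dec = Z.T @ err * (2 / X.size)
    g_enc = X.T @ (err @ W_dec.T) * (2 / X.size)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
loss_end = mse(X @ W_enc @ W_dec, X)
```

The only training signal is how well the decoder's output matches the original input, which is exactly the criterion the comment highlights.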
Thanks a lot !
Can you make a video on RNN and its variants?
Hi Sharjeel thanks for your comment !
RNN and other auto-regressive models are definitely on my to-do list. :)
Nice video, keep it up
@@stormaref Thanks !
8:02 Principal Component Analysis? 😉
or tsne/umap
My name jeff.
Hi jeff
Thanks for this wonderful content.
Thank you !
Great content, hope you can get more exposure!
Thanks :)
Incredibly good content. Keep up the good work!
Thank you !
Great video! Are you planning on releasing the code used for it?
Thank you ! Yes, I'll make a github page for the channel, I'll put the link in the description when it's done.
Do you use ai voiceover? Great video btw
Thank you ! Indeed the voiceover is generated by an AI, but it is my own voice that I cloned. I'm using ElevenLabs. Did that annoy you or take you out of the video ? :(
This video is both informative and visually appealing. Thanks!
Many thanks :)
Please create more videos!
Sure will do aha
Great work!
Thank you !