*CORRECTIONS*
A big shoutout to the following awesome viewers for these 2 corrections:
1. @Henry Wang and @Holger Urbanek - At (10:28), "d_k" is actually the hidden dimension of the Key matrix and not the sequence length. In the original paper ("Attention Is All You Need"), the model dimension is 512, which with 8 heads gives d_k = 512 / 8 = 64.
2. @JU PING NG - The result of the concatenation at (14:58) is supposed to be 7 x 9 instead of 21 x 3 (that is to say, the concatenation of the z matrices happens horizontally, not vertically). With this we can apply nn.Linear(9, 5) to get the final 7 x 5 shape.
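For anyone who wants to check correction 2 in code, here is a minimal PyTorch sketch; the shapes (7 tokens, 3 heads, one 7 x 3 output z per head) are just the toy numbers from the video, and the variable names are ours for illustration:

```python
import torch
import torch.nn as nn

seq_len, num_heads, head_dim, d_out = 7, 3, 3, 5

# One 7 x 3 z matrix per attention head (random stand-ins here).
z_per_head = [torch.randn(seq_len, head_dim) for _ in range(num_heads)]

# Concatenate horizontally (along the feature axis), not vertically:
# three 7 x 3 matrices -> 7 x 9, rather than 21 x 3.
z_concat = torch.cat(z_per_head, dim=-1)
print(z_concat.shape)  # torch.Size([7, 9])

# The final linear projection then maps 7 x 9 -> 7 x 5.
w_o = nn.Linear(num_heads * head_dim, d_out)  # nn.Linear(9, 5)
print(w_o(z_concat).shape)  # torch.Size([7, 5])
```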
Here are the timestamps associated with the concepts covered in this video:
0:00 - Recaps of Part 0 and 1
0:56 - Difference between Simple and Self-Attention
3:11 - Multi-Head Attention Layer - Query, Key and Value matrices
11:44 - Intuition for Multi-Head Attention Layer with Examples
Where's the first video?
@@amortalbeing Episode 0 can be found here - ua-cam.com/video/48gBPL7aHJY/v-deo.html
@@HeduAI thanks a lot, really appreciate it :)
Awesome... So the d_k value is 3?
@@omkiranmalepati1645 d_k = embedding dimension // number of heads
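Worked through with the toy numbers above (the embedding dimension of 9 and the 3 heads are assumptions matching the 7 x 9 concatenation in the corrections):

```python
embedding_dim = 9   # assumed from the 7 x 9 concatenated shape above
num_heads = 3
d_k = embedding_dim // num_heads  # integer division splits the embedding across heads
print(d_k)  # 3 -- so yes, d_k is 3 in this example
```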
Need to say this out loud: I saw Yannic Kilcher's video, read tons of material on the internet, went through at least 7 playlists, and this is the first time I really understood the inner mechanism of the Q, K, and V vectors in transformers. You did a great job here.
This made my day :,)
True
Very intuitive explanation!
Totally agree with this comment
Yes, no other video actually explains what the actual inputs to these are
All 3 parts have been the best presentation I've ever seen of Transformers. Your step-by-step visualizations have filled in so many gaps left by other videos and blog posts. Thank you very much for creating this series.
This comment made my day :,) Thanks!
Me, too!
Definitely agree. These videos really crystallize a lot of knowledge, thanks for making this series!
@@HeduAI Absolutely awesome. You are the best.
Absolutely underrated, hands down one of the best explanations I've found on the internet
Damn. This is exactly what a developer coming from another background needs.
Simple analogies for a rapid understanding.
Thanks a ton.
Keep uploading, please!
Agreed, very well done. You do a very good job of explaining difficult concepts to a non-industry developer (FYI, I'm an accountant) without assuming a lot of prior knowledge. I look forward to your next video on masked decoders!!!
@@Xeneon341 Oh nice! Glad you enjoyed these videos! :)
The important detail that sets you apart from the other videos and websites is that not only did you provide the model's architecture with numerous formulas, but you also demonstrated them with vectors and matrices, successfully walking us through each complicated and trivial concept. You really did a good job!
Best explanation ever on Transformers!!!
As someone NOT in the field reading the Attention paper, after having watched DOZENS of videos on the topic this is the FIRST explanation that laid it out in an intuitive manner without leaving anything out. I don't know your background, but you are definitely a great teacher. Thank you.
So glad to hear this :)
Were you the one who wrote transformers in the first place? Because no one explained it like you did. This is undoubtedly the best info I have seen. I hope you keep posting more videos. Thanks a lot.
This comment made my day! :) Thank you.
This really is an excellent explanation. I had some sense that self-attention layers acted like a table of relationships between tokens, but only now do I have a real sense of how the Query, Key, and Value mechanism actually works.
This is one of the best Transformer videos on YouTube. I hope YouTube always recommends this Value (V), aka video, as the first Key (K), aka Video Title, when someone uses the Query (Q) "Transformer"!! 😄
😄
I've been stuck for so long trying to understand Transformer neural networks, and this is by far the best explanation! The examples are so fun, making it easier to comprehend. Thank you so much for your effort!
Cheers!
Self-attention is a villain that has haunted me for a long time. Your presentation has helped me to better understand this genius idea.
I'm currently reading a book about transformers and was scratching my head over the reason for the multi-headed attention architecture.
Thank you so much for the clearest explanation yet that finally gave me this satisfying 💡-moment
Finally a video on transformers that actually makes sense. Not a single lecture video from any of the reputed universities managed to cover the topic with such brilliant clarity.
This channel needs more love (the way she explains is out of the box). I can say this because I have 4 years of experience in data science; she did a lot of hard work to get so much clarity into these concepts. (Love from India)
Thank you Rohtash! You made my day! :) धन्यवाद
I am just speechless, this is unbelievable! Bravo!
One of the best explanations on Attention in my opinion.
Finally! You delivered me from long nights of searching for good explanations of transformers! It was awesome! I can't wait to see part 3 and beyond!
Thanks for this great feedback!
“Part 3 - Decoder’s Masked Attention” is out. Thanks for the wait. Enjoy! Cheers! :D
ua-cam.com/video/gJ9kaJsE78k/v-deo.html
Best, best, best explanation of transformers; you are adding so much value to the world.
Really love coming back to your videos to get a recap on multi-headed attention and transformers! Sometimes I need to build my own specialized attention layers for the dataset in question, and sometimes it just helps to listen to you talk about transformers and attention! Really intuitive, and it helps me break out of whatever weird loop of algorithm design I've gotten myself stuck in. So thank you so, so much :D
The best explanation I've ever seen of such a powerful architecture. I'm glad to have found this joy after searching for positional encoding details while implementing a Transformer from scratch today. Valar Morghulis!
Valar Dohaeris my friend ;)
Amazing video, showing how the attention matrix is created and what values it assumes is really awesome. Thanks!
Probably the best explanation of transformers I've found online. I read the paper, watched Yannic's video, some paper-reading videos, and a few others, but the intuition was still missing. This connects the dots; keep up the great work!
Thanks for these great videos! The visualizations and extra explanations of the details are perfect!
Better than the best Berkeley professor! Amazing!
This is quite literally the best attention mechanism video out there guys
Literally the best series on transformers. Even clearer than StatQuest and Luis Serrano, who also make things very clear.
I don't have words to describe how much these videos saved me, thank you!
3 days, 16 different videos, and your video "just made sense". You just earned a subscriber and a life-long well-wisher.
Spectacular explanation! This channel is sooo underrated!
It's one of the best explanations of Transformers. Just mind-blowing.
Your attention to detail and information structuring are just exceptional. The Avatar and GoT references on top were hilarious and make things perfect. You literally made a story out of complex deep learning concept(s). This is just brilliant.
You have such a beautiful mind (if you get the reference :D). Please consider making more videos like this, such a gift is truly precious. May the force be always with you. 🤘
This is literally the best explanation for self-attention I have seen anywhere! Really loved the videos!
I really like the fact that you ask questions within the video. In fact, those are the same questions one has when first reading about transformers. Keep up the awesome work!
Great explanation and visualization, thanks a lot. Please keep making such helpful videos.
Incredibly well explained! Thanks a lot
These videos are really incredible. Thank you!
I'll just repeat what everybody else said: these videos are the best! Thank you for the effort.
This is by far the best video to understand attention networks. Awesome work!!
The best explanation of attention models on Earth!
Awesome analogy and explanation!
This is the best explanation of the transformer architecture, with a lot of simple analogies! Thanks a lot!
My goodness, you have talent as a teacher!! :-) This builds a very good intuition about what is going on. Very impressed. Subscribed!
This is how the self-attention should be explained.
Such a fantastic and detailed yet digestible explanation. As others have said in the comments, other explanations leave so many gaps. Thank you for this gem!
You are the best 😄😄. This is THE best explanation of the Transformer model I have ever seen on YouTube. Thank you so much for this video.
Brilliant explanation; your channel deserves way more ATTENTION.
Spot on analysis. Many thanks for the clear explanation.
The MOST useful and THE BEST video ever on multi-head attention. Thanks a lot for your work!
So glad you liked it! :)
The best video I've ever seen for explaining transformers.
These videos are amazing, thank you so much! Best explanation so far!!
This is the best resource for an intuitive understanding of transformers. I will without a doubt point everyone towards your video series. Thank you so much!
It is impressive how you explain such complicated topics in a vivid and easy way!!!
Hands down the best series I've found on the web about transformers. Thank you
I've watched many video series about transformers, this is by far the best.
Outstanding explanation and well delivered, both verbally and with the graphics. I look forward to the next in this series
“Part 3 - Decoder’s Masked Attention” is out. Thanks for the wait. Enjoy! Cheers! :D
ua-cam.com/video/gJ9kaJsE78k/v-deo.html
Bravo! After watching dozens of other explainer videos, I can finally grasp the reason for multi-headed attention. Excellent video. Please make more!
I love this video so much; now I understand the whole multi-head self-attention thing very clearly. Thanks!
This is very clear and well-thought out, thanks!
Thank you so much! This is by far the clearest explanation that I've ever seen on this topic
Amazing explanation, thank you so much!
This is an absolute gem of a video.
Great explanation! Thank you so much!
Visualizing the matrices helped me understand transformers better.
Again, thank you very much!
Awesome! Hats off to your conceptual understanding.
Finally I understood the concept of query, key and value. Thank you.
You are so good, thank you for breaking down a seemingly scary topic for all of us. The original paper requires a lot of background to understand clearly, and not everyone has it. I personally felt lost. Such videos help a lot!
Blown away by your explanation . You are a great teacher.
Thank you for putting so much effort in the visualization and awesome narration of these series. These are by far the best videos to explain transformers. You should do more of these videos. You certainly have a gift!
Thank you for watching! Yep! Back on it :) Would love to hear which topic/model/algorithm you most want to see on this channel. Will try to cover it in upcoming videos.
You are amazing. I've watched other videos and read other materials, but nothing compares to your videos.
This is a great work, thank you.
Keep uploading. 👏
I went through many videos from Coursera, YouTube, and some online blogs, but none explained the Query, Key, and Value matrices so clearly. You made my day.
Glad to hear this Shubhesh :)
Wow. Amazing explanation! You have a gift for explaining quite complex material succinctly.
Thanks Andrew! Cheers! :D
No way. This video is insane!! The most accurate and excellent explanation of self-attention mechanism. Subscribed to your channel!
Wow!! The best transformers series ever. Thanks a ton for making these.
Thanks for posting, by far this is the most didactic Transformer presentation I've ever seen. AMAZING!
Hats off to you for this incredible tutorial! 🎩🚀
Thank you for the video. Best explanation I've seen.
I can't believe how good this is.
Thank you! This is so well explained.
The best video I've ever watched, thank you so much
Hands down the best transformer explanation. Thank you very much!
Educational + Entertaining. Nice examples and figures. Loved it!
Holy shit, was this a good explanation! Other blogs literally copy what the paper states (which is kinda confusing), but you explained it in such an intuitive and fun way! That's what I call talent!!
Hands down the best video on transformers I have seen! Thank you for taking your time to make this video.
This is the most intuitive explanation of transformers that I've seen. Thank you, Hedu! I'm in awe. Liked & subbed.
So glad to know this! :)
You are awesome!! I watched Yannic Kilcher's video first and was still confused by the paper, probably because there's so much detail skipped over in the paper and Kilcher's video. However, your video goes much slower and more in depth, so the explanations were simple to understand, and the whole picture makes sense now. Thank you!
That is what I've been looking for, for 3 days now! Thanks a lot!
Thank you, amazing explanation
This video is GOLD; it should be everywhere! Thank you so much for doing such an amazing job 😍😍
Excellent examples and explanation. Don't shy away from using more examples of things that you love, this love shows and will translate to better work overall. Cheers!
I have been trying to understand this topic for a long time; glad I found this video now.
This is the best explanation I've ever seen!
Ohh, why am I only discovering this channel now? This channel is criminally underrated!!
The best video I've ever seen! Thank you very much!
Wow. Just wow!! This video needs to be in the topmost position when searching for content on transformers and their explanation.
So glad to see this feedback! :)
Hands down the best explanation of the use of Query, Key and Value matrices. Great video with an easy example to understand.
Excellent series. Looking forward to Part 3!
“Part 3 - Decoder’s Masked Attention” is out. Thanks for the wait. Enjoy! Cheers! :D
ua-cam.com/video/gJ9kaJsE78k/v-deo.html
You have a gift for explanations... Best I've seen anywhere online. Superb.