Hello all! In the video I made a comment about how the Key and Query matrices capture low and high level properties of the text. After reading some of your comments, I've realized that this is not true (or at least there's no clear reason for it to be true), and probably something I misunderstood while reading in different places in the literature and threads.
Apologies for the error, and thank you to all who pointed it out! I've removed that part of the video.
No worries. It might help to pin this comment to the top. Thanks a lot for the video.
Thanks for the note. That comment actually sounds very reasonable to me. If I understand this right, keys and queries help to determine the context.
Another big mistake: in "measure 3: scaled dot product" you wrote "divided by the length of a vector," which is incorrect. What you actually divide by is (the square root of) the number of dimensions of the vector, which is correct. Please fix it to avoid confusion.
@@masatoedamura184 confused me as well
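For anyone tripped up by the same thing, here's a minimal NumPy sketch of the scaling step (names and shapes are just illustrative, not taken from the video): the divisor is the square root of d_k, the number of dimensions per vector, not any vector's Euclidean length.

```python
import numpy as np

def scaled_scores(Q, K):
    # Divide by sqrt(d_k), where d_k is the number of dimensions
    # per vector -- not by the Euclidean length of any vector.
    d_k = Q.shape[-1]
    return (Q @ K.T) / np.sqrt(d_k)

Q = np.random.randn(3, 4)   # 3 words, 4 dimensions each
K = np.random.randn(3, 4)
print(scaled_scores(Q, K))  # a 3x3 table of word-to-word similarities
```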
I have watched more than 10 videos trying to wrap my head around the paper Attention Is All You Need. This video is by far the best. I have been trying to assess why it is so effective at explaining such a complex concept, and why the concept is hard to understand in the first place. Serrano explains the concepts step by step, without making any assumptions. It helps a great deal. He also uses diagrams, showing animations along the way as he explains. As for the architecture, there are so many layers condensed into it. It has obviously evolved over the years, with multiple concepts interlaced into the attention mechanism, so it is important to break it down into its components and take each one at a time: positional encoding, tokenization, embedding, feed-forward, normalization, neural networks, the math behind it, vectors, query-key-values, etc. Each of these needs explaining, or perhaps a video of its own, before putting them together. I am not quite there yet, but this has improved my understanding a great deal. Serrano, keep up your approach. I would like to see you cover other areas such as transformers with human feedback, the new Qstar architecture, etc. You break it down so well.
Thank you for such a thorough analysis! I do enjoy making the videos a lot, so I'm glad you find them useful.
And thank you for the suggestions! Definitely RLHF and QStar are topics I'm interested in, so hopefully soon there'll be videos of those!
Did you also try reading the original Attention is All you Need paper, and if so, what was your experience? Was there too much jargon and math to understand?
Agree, an excellent video!
@@blahblahsaurus2458 Too much jargon, obviously intended for those already familiar with the concepts. The diagram appears upside down and is not intuitive at all. Nobody has attempted to redraw the architecture diagram in the paper; it follows no particular convention at all.
Absolutely ❤
This is the best description of Keys, Queries, and Values I have ever seen anywhere on the internet. Thank you.
This might be the best video on attention mechanisms on YouTube right now. I really liked the fact that you explained matrix multiplications with linear transformations. It brings a whole new level of understanding with respect to embedding space. Thanks a lot!!
Thank you so much! I enjoy seeing things pictorially, especially matrices, and I'm glad that you do too!
This is really great, thanks a lot!
That is what many disseminators lack: explaining things with the mathematical foundations. I understand that it is difficult to do so. However, you did it, and in an amazing way. The way you explained the linear transformation was epic. Thank you.
I am so grateful that there are people like Luis Serrano who present incredibly complex material in a clear way. It must be an incredible amount of work. I first noticed Mr. Serrano, very positively, on Udacity. Just by reading the original papers, it is unlikely for “normal people” to understand such material. Many, many thanks!
This is unequivocally the best introduction to Transformers and Attention Mechanisms on the entire internet. Luis Serrano has guided me all the way from Machine Learning to Deep Learning and onto Large Language Models, maximizing the entropy of my AI thinking, allowing for limitless possibilities.
💯 agree. Everything else is utter BS by comparison. I’ve never tipped someone $10 for a video before this one ❤
This is definitely the best-explained video on the attention model. The original paper sucks because there is no intuition at all, just terse words and crazy math equations where I can't tell what they're doing.
Things don't suck just because you are not able to understand them. Without the original paper there would be no necessity for this video, as the content wouldn't "exist".
Are you kidding me? Seriously? Lol, some YouTubers think that if they use fancy words they are good at teaching, but you're totally different, man. You've cleared up all of my confusion. Thanks, man.
You explain very well, Luis. Thank you. It's HARD to explain complicated topics in a way people can easily understand. You do it very well.
Thank you! :)
This is one of the best explanations I have seen. Making complex things simple is an art, and Serrano is a master at it. I saw Serrano's first video, on RNNs, a few years back and was really impressed by his way of teaching. Keep it up, Serrano! We need more people like you to help students.
I really like how you're using these concrete examples and combining them with visuals. These really help build an intuition on what's actually happening. It's definitely a lot easier for people to consume than struggling with reading academic papers, constantly looking things up, and feeling frustrated and unsure.
Please keep creating content like this!
Just the Keys and Queries section is worth the watch! I have been scratching my head on this for an entire month!
Thank you! :)
This really is one of the best videos explaining the purpose of K, Q, V. The illustrations provide a window into the math behind the concepts.
These are the best videos I've seen so far for understanding how Transformers / LLMs work. Thank you.
I really like maths, but it is good that you keep the math simple so that one doesn't lose the overview.
You really have a talent to explain complex things in a simple way.
Greets from Switzerland
I haven't seen a better video explaining Attention. Thanks a ton for your time and effort. God bless.
Math is not my strong suit, but you made these mathematical concepts so clear with all the visual animations and your concise descriptions. Thank you so much for the hard work and making this content freely accessible to us!
Simply the best video on Attention Is All You Need. I tried to understand it from different videos, blogs, and the paper itself, but couldn't get close to what I understood from this video. It clarified almost all the questions I had, except for a few which I think will be clarified in the next video. You have amazing teaching skills, kudos to you, man.
This is absolutely the best video that clearly illustrates and explains why we need V, K, Q in attention. Bravo!
Honestly you are the best content creator for learning Machine learning and Deep learning in a visual and intuitive way
This is one of the best videos on attention and Q, K, V so far. Thank you for a detailed explanation.
One of the best videos on attention. Such a complex subject taught in a simple manner. Thank you!
This video is, without a doubt, the best video on transformers and attention that I have ever seen.
A professor here - preparing for my course and trying to find an easier way to talk about these ideas. I learned a lot! Thank you!
Thanks for sharing your knowledge freely. I have been waiting patiently. You add a different perspective that we appreciate. Looking forward to the 3rd video. Thank you!
Thank you! So glad you like the videos!
The best videos on transformers on the internet, ever. You are the best teacher!
The best explanation I've seen so far! Really cool to see the field getting much closer to understanding these models, instead of keeping them so abstract, thanks to people like you, Luis! :)
Absolutely the best set of videos explaining the most discussed topic. Thank you!!
Best video explaining what the query, key, and value matrices are! You saved my day.
This is the best video I have seen on the attention model. Even after reading through so many articles it was not intuitively clear, but now it is!! Thanks.
Amazing explanation of very difficult concepts. The best explanation I have found on the topic so far.
Simply the best explanation on this subject. Crystal clear. Thank you.
I study linear algebra during the day on Coursera and watch YouTube videos at night on state-of-the-art machine learning. I'm amazed by how fast you learn with Luis. I've learned everything I was curious about. Thank you!
Thank you, it’s an honor to be part of your learning journey! :)
What a flawed YouTube algorithm, that it showed me this gem only after so many overcomplicated videos on attention. Every student should learn attention from THIS VIDEO!
This is the best video for people trying to build a basic understanding of transformers, thank you so much ^^
This is the best video I’ve seen on this topic. Well done sir
This is truly the best video explaining each stage of a transformer, thanks man
Please continue making videos. You're the best teacher on this planet.
MAN! I have no words! Your channel is priceless! thank you for everything!!!
This is such a detailed and informative explanation of Transformer models! I appreciate the effort put into breaking down complex concepts with visuals and examples. Keep up the great work!
One of the best explanations I have ever watched
Really, thanks for this video. I am a student in China, and none of my teachers taught me this clearly.
Finally! This is the best of the tons of videos/articles I've seen/read.
Thank you for your work!
The best explanation of Transformers I have seen. Thank you!
I really liked the way you showed the motivation behind the softmax function. I was blown away. Thanks a lot, Serrano!
Best explanation of attention on the internet, hands down. Finally someone who explains the 'why' in the internals of the transformer.
Thank you good sir.
This is the best video that I have seen about the concept of attention! (I have seen more than 10 videos but none of them was like this.) Thank you so much! I am waiting for the next videos that you have promised! You are doing a great job!
I had to read this research paper for my Intro to AI class, and it's obviously written for people who already have a lot of background knowledge in this field. So, being a newbie, I was so lost lol. Thanks for breaking it down and making it easy to understand!
Your explanations are truly great! You have even understood that you sometimes have to ‘lie’ first to be able to explain things better. My sincere compliments! 👊
Thank you for the great tutorial. This is the clearest explanation I have found so far.
This is one of the best videos I've come across for understanding embeddings and attention. Looking forward to more explanations like this that can simplify such complex mechanisms in the AI world. Thanks for your efforts.
Very instructive and mind-opening on a difficult topic. Thanks
Sir, you are a blessing to new learners like me. Thank you, big respect. ❤
Most intuitive explanation for QKV, as someone with only an elementary understanding of linear algebra.
Amazing video. It pushed my understanding of attention forward by quite a few steps and helped me build an intuition for what's happening under the hood. Eagerly waiting for the next one.
Thanks!
This is one of the best explanations of Q, K & V I've heard!
This probably is “the best video” on this topic.
This video has the best explanations of the QKV matrices and linear layers among the resources I've come across. I don't know why, but people seem uninterested in explaining what's really happening at each step, which results in loads of vague points. Still, the video could've been further improved with more concrete examples and numbers. Thank you.
This is the best video I have seen explaining the attention mechanism. Keep up the good work!
Awesome. You explained everything very well. It made life easy for me.
12:30: the attention mechanism finds the similarity (scaled dot product or cosine similarity) between each word in the sentence and every other word.
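In case a sketch helps: here's roughly what that step looks like in NumPy, assuming raw embeddings with no learned Q/K/V matrices (all names and shapes are made up for illustration).

```python
import numpy as np

def self_attention(X):
    # Similarity of every word with every other word (scaled dot product).
    scores = X @ X.T / np.sqrt(X.shape[-1])
    # Softmax each row so the similarities become positive weights summing to 1.
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each word's new vector is a weighted average of all the word vectors.
    return weights @ X

X = np.random.randn(5, 8)       # 5 words, 8-dimensional embeddings
print(self_attention(X).shape)  # (5, 8)
```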
Thanks!
Thank you so much for your kindness, @BrikeshKumar987!
Thank you so much!! I watched several videos and none could explain the concept so well.
Thanks, I'm so glad you enjoyed it! Lemme know if you have suggestions for more topics to cover!
Super clear! Great video!!
by far the best explanation, Thanks for sharing!
Now whenever I watch a Serrano video, I like it first and then start watching, because I know the video is going to be outstanding as always.
This is a great video, with clarity on Keys, Queries, and Values. Thank you!
I'm going to try implement self-attention and multi-head attention myself, thanks so much for doing this guide!
I've watched over 10 Transformer architecture tutorial videos, and this one is by far the most intuitive way to understand it! Really good work! Yeah, natural language processing is a hard topic; this tutorial kind of opens up the black box of the large language model.
Amazing explanation. I am a professional pedagogue and this is stellar work
"This step is called softmax" . 😮😮😮
Today I understood why softmax is used. Such a beautiful function. And such a great way to demonstrate it.
Amazing video. That's what I was looking for. I need to know the mathematical background to understand what is happening behind the scenes. Thank you, sir!
OMG this is so well explained! Thank you so much for the tutorials!
The best explanation I've ever seen of the attention mechanism, amazing.
It was fascinating to me. I searched a lot for an explanation of the math and couldn't find one; thanks for this.
Please do more😅 with more complex ones
Best video for getting a clear understanding of transformers.
Excellent job! Please continue making videos that break down the math.
Great explanation. I was waiting for this after your first video on the attention mechanism! You are so talented at explaining things in easily understandable ways! Thank you for the effort put into this, and keep up the great work!
Best video on this topic so far!
The best video I have ever watched about this!
Damn! There's no better video to understand Attention than this!!
Thank you, really good job on the visualization! They make the process really understandable.
@SerranoAcademy
If you want to arrive at the same notation as in the paper, Q times K_transpose, then the orange is the query and the phone is the key here. Then you calculate orange times Q times K_transpose times key_transpose (as in the paper).
Remark: the paper uses "sequences", described as "row vectors". However, usually one uses column vectors. Using row vectors, the linear transformation is a left multiplication, a times A, and the dot product is written as a times b_transpose. Using column vectors, the linear transformation is A times a and the dot product is written as a_transpose times b. This, in my opinion, is the standard notation, e.g. writing Ax = b and not xA = b.
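A tiny NumPy demo of the two conventions, if it helps (the numbers are arbitrary): the row-vector transform xA produces the same numbers as the column-vector transform A^T x.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([5.0, 6.0])

row_result = x @ A    # row-vector convention, as in the paper: xA
col_result = A.T @ x  # column-vector convention, the usual Ax form: A^T x

print(np.allclose(row_result, col_result))  # True, since (xA)^T = A^T x^T
```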
Great video series! Thank you! That helped a ton 🙂
One small remark: the concept of the "length" of a vector that you use here confused me. Here, I guess you take the point of view of a programmer: len(vector) outputs the number of dimensions of the vector. However, for a mathematician, the length of a vector is its norm, also called its magnitude (the square root of x^2 + y^2).
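To make the two meanings of "length" concrete (a trivial NumPy example, purely for illustration):

```python
import numpy as np

v = np.array([3.0, 4.0])

print(len(v))             # 2   -> number of dimensions (programmer's "length")
print(np.linalg.norm(v))  # 5.0 -> norm / magnitude (mathematician's "length")
```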
@SerranoAcademy
At 13:23, you show a matrix-vector multiplication with a column vector (rows of the table times the column vector) by right-multiplication. On the right side, in addition to "is sent to", maybe you could use the icon orange′ (orange prime). This would show the multiplication more clearly.
Remark: you use a matrix-vector multiplication here (using a row of the matrix and the words as a column on the right of the matrix). If you use row vectors, then the word vector should be placed horizontally on the left of the matrix, and in the explanation a column of the matrix has to be used. The result is then a row vector again (maybe a bit hard to sketch).
Thanks a lot for this. I always got terrified of the maths that might be there but the way you explained it all made it seem really easy ❤
Best video so far on this topic
Today I understood the attention mechanism better than ever before.
Your video is the best of all time!!!!!!!!!!! Better than an MIT course.
Thank you so much! The image at 24:29 made this whole concept click immediately.
Wow!!! Now I understand the attention mechanism.
I did not understand a bit of this when learning about it in an expensive AI course.
Hi Luis. Thank you for this video. I'm sure this is a very good way to explain this complex topic, but I just can't get it into my brain yet. I'm currently doing the Math for Machine Learning specialization on Coursera and brushing up my algebra and calculus skills, which are way too low. In any case, you got me involved in this, and now I will grind through it till I make it. I'm sure the pain will lessen and the fog will lift. 😊
Amazing video! Took my intuition to the next level.
If vectors are scaled to length 1, dot product = cosine similarity.
Softmax is chosen to deal with negative similarity values: a negative score gets a smaller softmax weight, so that word contributes less to this word's embedding.
Continue from 13:00.
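Both points are easy to check in NumPy (a throwaway sketch with made-up numbers):

```python
import numpy as np

a, b = np.random.randn(4), np.random.randn(4)

# Cosine similarity of a and b.
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# After rescaling both vectors to length 1, the plain dot product is the cosine.
a1, b1 = a / np.linalg.norm(a), b / np.linalg.norm(b)
print(np.isclose(a1 @ b1, cos))  # True

# Softmax turns any scores (negative ones included) into positive weights
# that sum to 1, so a negative similarity simply contributes less.
scores = np.array([2.0, 0.5, -1.0])
weights = np.exp(scores) / np.exp(scores).sum()
print(weights, weights.sum())  # all positive; the -1.0 score gets the smallest weight
```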
I haven't blinked my eyes for a sec. 👏🏼🙏🏼
Thank you @Serrano.Academy, very useful video. The only thing that is a bit misleading is around 24:50, where Q and K are implied to be multiplied with the word embeddings to produce the cosine distance, when in fact the embeddings are already included in Q and K. I guess you are using Wq, Wk interchangeably with Q, K for simplicity.
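In code the distinction would look something like this (a sketch with made-up shapes, writing Wq/Wk for the learned matrices): Q and K already contain the embeddings, so no further multiplication by them is needed.

```python
import numpy as np

X = np.random.randn(5, 8)   # word embeddings: 5 words, 8 dimensions
Wq = np.random.randn(8, 8)  # learned query matrix
Wk = np.random.randn(8, 8)  # learned key matrix

Q = X @ Wq  # the embeddings are already baked into Q...
K = X @ Wk  # ...and into K

scores = Q @ K.T / np.sqrt(Q.shape[-1])  # so no extra multiplication by X here
```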
Great video, I finally understood all the concepts in context.
Thank You very much sir.
I am so pleased by the way you teach. Alhumdulillah. Thank GOD.
However, I was unable to grasp the key, query, values part.
Thank You Very Much
Very well explained. Got a bit closer to understanding attention models.
A godsend of a video. So incredibly well put.
This is powerful yet so simple. Thanks