The math behind Attention: Keys, Queries, and Values matrices

COMMENTS • 367

  • @SerranoAcademy
    @SerranoAcademy  1 year ago +68

    Hello all! In the video I made a comment about how the Key and Query matrices capture low and high level properties of the text. After reading some of your comments, I've realized that this is not true (or at least there's no clear reason for it to be true), and probably something I misunderstood while reading in different places in the literature and threads.
    Apologies for the error, and thank you to all who pointed it out! I've removed that part of the video.

    • @tantzer6113
      @tantzer6113 1 year ago +3

      No worries. It might help to pin this comment to the top. Thanks a lot for the video.

    • @chrisw4562
      @chrisw4562 10 months ago

      Thanks for the note. That comment actually sounds very reasonable to me. If I understand this right, keys and queries help to determine the context.

    • @masatoedamura184
      @masatoedamura184 1 month ago +1

      Another big mistake, in "measure 3: scaled dot product": you wrote "divided by the length of a vector", which is incorrect. At the same time you divide by the number of dimensions of the vector, which is correct. Please fix it to avoid confusion.

    • @itaipeleg8994
      @itaipeleg8994 17 days ago

      @@masatoedamura184 confused me as well
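      To make the point in this thread concrete: in "Attention Is All You Need" the scores are divided by the square root of the key dimension d_k (the number of coordinates in each key vector), not by the Euclidean length of any vector. A minimal numpy sketch of the scaled dot-product step, with invented shapes:

      ```python
      import numpy as np

      d_k = 4                          # dimension of each query/key vector (assumed)
      Q = np.random.randn(3, d_k)      # 3 query vectors, one per word
      K = np.random.randn(3, d_k)      # 3 key vectors

      scores = Q @ K.T / np.sqrt(d_k)  # divide by sqrt of the dimension, not a norm
      weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row softmax
      ```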

  • @JTedam
    @JTedam 1 year ago +84

    I have watched more than 10 videos trying to wrap my head around the paper Attention Is All You Need. This video is by far the best. I have been trying to assess why it is so effective at explaining such a complex concept, and why the concept is hard to understand in the first place. Serrano explains the concepts step by step, without making any assumptions. It helps a great deal. He also uses diagrams, showing animations along the way as he explains. As for the architecture, there are so many layers condensed into it. It has obviously evolved over the years, with multiple concepts interlaced into the attention mechanism, so it is important to break it down into its various parts and take each one at a time - positional encoding, tokenization, embedding, feed forward, normalization, neural networks, the math behind it, vectors, query-key-values, etc. Each of these needs explaining, or perhaps a video of its own, before putting them all together. I am not quite there yet, but this has improved my understanding a great deal. Serrano, keep up your approach. I would like to see you cover other areas such as Transformers with human feedback, the new QStar architecture, etc. You break it down so well.

    • @SerranoAcademy
      @SerranoAcademy  1 year ago +6

      Thank you for such a thorough analysis! I do enjoy making the videos a lot, so I'm glad you find them useful.
      And thank you for the suggestions! RLHF and QStar are definitely topics I'm interested in, so hopefully there will soon be videos on those!

    • @blahblahsaurus2458
      @blahblahsaurus2458 9 months ago +1

      Did you also try reading the original Attention is All you Need paper, and if so, what was your experience? Was there too much jargon and math to understand?

    • @visahonkanen7291
      @visahonkanen7291 8 months ago

      Agree, an excellent video.

    • @JTedam
      @JTedam 8 months ago +3

      @@blahblahsaurus2458 Too much jargon, obviously intended for those already familiar with the concepts. The diagram appears upside down and is not intuitive at all. Nobody has attempted to redraw the architecture diagram in the paper. It follows no particular convention at all.

    • @TomChenyangJI
      @TomChenyangJI 2 months ago

      Absolutely ❤

  • @computersciencelearningina7382
    @computersciencelearningina7382 10 months ago +10

    This is the best description of Keys, Queries, and Values I have ever seen across the internet. Thank you.

  • @Rish__01
    @Rish__01 1 year ago +117

    This might be the best video on attention mechanisms on YouTube right now. I really liked the fact that you explained matrix multiplications with linear transformations. It brings a whole new level of understanding with respect to the embedding space. Thanks a lot!!

    • @SerranoAcademy
      @SerranoAcademy  1 year ago +7

      Thank you so much! I enjoy seeing things pictorially, especially matrices, and I'm glad that you do too!

    • @maethu
      @maethu 1 year ago +1

      This is really great, thanks a lot!

    • @JosueHuaman-oz4fk
      @JosueHuaman-oz4fk 9 months ago

      That is what many disseminators lack: explaining things with the mathematical foundations. I understand that it is difficult to do so. However, you did it, and in an amazing way. The way you explained the linear transformation was epic. Thank you.

  • @olivergrau4660
    @olivergrau4660 3 months ago +2

    I am so grateful that there are people like Luis Serrano who present incredibly complex material in a clear way. It must be an incredible amount of work. Mr. Serrano already stood out to me very positively on Udacity. Just by reading the original papers, it is unlikely for "normal people" to understand such material. Many, many thanks!

  • @Aaron洪希仁
    @Aaron洪希仁 1 year ago +26

    This is unequivocally the best introduction to Transformers and Attention Mechanisms on the entire internet. Luis Serrano has guided me all the way from Machine Learning to Deep Learning and onto Large Language Models, maximizing the entropy of my AI thinking, allowing for limitless possibilities.

    • @JonMasters
      @JonMasters 9 months ago +2

      💯 agree. Everything else is utter BS by comparison. I’ve never tipped someone $10 for a video before this one ❤

  • @fcx1439
    @fcx1439 10 months ago +30

    This is definitely the best-explained video on the attention model. The original paper sucks because there is no intuition at all, just plain words and crazy math equations where I can't tell what they're doing.

    • @nbtble
      @nbtble 3 months ago +2

      Things don't suck just because you are not able to understand them. Without the original paper there would be no necessity for this video, as the content wouldn't "exist".

  • @mushfikurahmaan
    @mushfikurahmaan 3 months ago +3

    Are you kidding me? Seriously? Lol, some YouTubers think that if they use fancy words they are good at teaching, but you're totally different, man. You've cleared up all of my confusion. Thanks man

  • @23232323rdurian
    @23232323rdurian 1 year ago +13

    you explain very well Luis. Thank you. It's HARD to explain complicated topics in a way people can easily understand. You do it very well.

  • @Bramsmelodic
    @Bramsmelodic 1 month ago

    This is one of the best explanations I have seen. Making complex things simple is an art, and Serrano is a master of it. I saw my first Serrano video, on RNNs, a few years back and was really impressed by his way of teaching. Keep it up, Serrano! We need more people like you to help students.

  • @__redacted__
    @__redacted__ 1 year ago +4

    I really like how you're using these concrete examples and combining them with visuals. These really help build an intuition on what's actually happening. It's definitely a lot easier for people to consume than struggling with reading academic papers, constantly looking things up, and feeling frustrated and unsure.
    Please keep creating content like this!

  • @channel8048
    @channel8048 1 year ago +4

    Just the Keys and Queries section is worth the watch! I have been scratching my head on this for an entire month!

  • @dekasthiti
    @dekasthiti 9 months ago

    This really is one of the best videos explaining the purpose of K, Q, V. The illustrations provide a window into the math behind the concepts.

  • @joelegger2570
    @joelegger2570 1 year ago +10

    These are the best videos I have seen so far for understanding how Transformers / LLMs work. Thank you.
    I really like math, but it is good that you keep the math simple so that one doesn't lose the overview.
    You really have a talent for explaining complex things in a simple way.
    Greetings from Switzerland

  • @decryptifi2265
    @decryptifi2265 1 month ago

    I haven't seen a better video explaining Attention. Thanks a ton for your time and effort. God bless.

  • @leilanifrost771
    @leilanifrost771 9 months ago

    Math is not my strong suit, but you made these mathematical concepts so clear with all the visual animations and your concise descriptions. Thank you so much for the hard work and making this content freely accessible to us!

  • @ravindra1607
    @ravindra1607 2 months ago

    Simply the best video on Attention Is All You Need. I tried to understand it from other videos, blogs, and the paper itself, and couldn't get close to what I understood from this video. It clarified almost all the questions I had, except for a few which I think will be clarified in the next video. You have amazing teaching skills, kudos to you man

  • @MrMacaroonable
    @MrMacaroonable 1 year ago +1

    This is absolutely the best video that clearly illustrates and explains why we need V, K, Q in attention. Bravo!

  • @Chill_Magma
    @Chill_Magma 1 year ago +1

    Honestly you are the best content creator for learning Machine learning and Deep learning in a visual and intuitive way

  • @kranthikumar4397
    @kranthikumar4397 9 months ago

    This is one of the best videos on attention and Q, K, V so far. Thank you for the detailed explanation

  • @snehotoshbanerjee1938
    @snehotoshbanerjee1938 1 year ago +2

    One of the best videos on attention. Such a complex subject taught in a simple manner. Thank you!

  • @aravind_selvam
    @aravind_selvam 1 year ago

    This video is, without a doubt, the best video on transformers and attention that I have ever seen.

  • @shuang7877
    @shuang7877 7 months ago

    A professor here - preparing for my course and trying to find an easier way to talk about these ideas. I learned a lot! Thank you!

  • @ChujiOlinze
    @ChujiOlinze 1 year ago +5

    Thanks for sharing your knowledge freely. I have been waiting patiently. You add a different perspective that we appreciate. Looking forward to the 3rd video. Thank you!

  • @cachegrk
    @cachegrk 6 months ago

    The best videos ever on Transformers on the internet. You are the best teacher!

  • @WhatsAI
    @WhatsAI 1 year ago +9

    The best explanation I've seen so far! Really cool to see how much closer the field is getting to understanding those models instead of being so abstract thanks to people like you, Luis! :)

  • @ganapathysubramaniam
    @ganapathysubramaniam 1 year ago +1

    Absolutely the best set of videos explaining the most discussed topic. Thank you!!

  • @puwanatsangkhapreecha7847
    @puwanatsangkhapreecha7847 7 months ago

    Best video explaining what the query, key, and value matrices are! You saved my day.

  • @nikhilbelure
    @nikhilbelure 5 months ago

    This is the best video I have seen on the attention model. Even after reading through so many articles it was not intuitively clear, but now it is!! Thanks

  • @redmond2582
    @redmond2582 1 year ago +1

    Amazing explanation of very difficult concepts. The best explanation I have found on the topic so far.

  • @lengooi6125
    @lengooi6125 11 months ago +1

    Simply the best explanation of this subject. Crystal clear. Thank you

  • @danherman212nyc
    @danherman212nyc 9 months ago

    I study linear algebra during the day on Coursera and watch YouTube videos on state-of-the-art machine learning at night. I'm amazed by how fast you learn with Luis. I've learned everything I was curious about. Thank you!

    • @SerranoAcademy
      @SerranoAcademy  9 months ago +1

      Thank you, it’s an honor to be part of your learning journey! :)

  • @shubha07m
    @shubha07m 3 months ago

    What a flawed YouTube algorithm, that it showed this gem only after so many overcomplicated videos on attention. Every student should learn attention from THIS VIDEO!

  • @_ncduy_
    @_ncduy_ 9 months ago

    This is the best video for people trying to understand the basics of Transformers, thank you so much ^^

  • @brianburton6669
    @brianburton6669 3 months ago

    This is the best video I’ve seen on this topic. Well done sir

  • @MrSikesben
    @MrSikesben 11 months ago

    This is truly the best video explaining each stage of a transformer, thanks man

  • @rohitchan007
    @rohitchan007 1 year ago +5

    Please continue making videos. You're the best teacher on this planet.

  • @bzaruk
    @bzaruk 1 year ago

    MAN! I have no words! Your channel is priceless! thank you for everything!!!

  • @rachadlakis1
    @rachadlakis1 6 months ago

    This is such a detailed and informative explanation of Transformer models! I appreciate the effort put into breaking down complex concepts with visuals and examples. Keep up the great work!

  • @alnouralharin
    @alnouralharin 9 months ago

    One of the best explanations I have ever watched

  • @王禹博-s8i
    @王禹博-s8i 2 months ago

    Really, thanks for this video. I am a student in China, and none of my teachers taught me this clearly.

  • @alexrypun
    @alexrypun 1 year ago

    Finally! This is the best of the tons of videos/articles I've seen/read.
    Thank you for your work!

  • @andresfeliperiostamayo7307
    @andresfeliperiostamayo7307 7 months ago

    The best explanation of Transformers I have seen. Thank you!

  • @gauravruhela007
    @gauravruhela007 7 months ago

    I really liked the way you showed the motivation behind the softmax function. I was blown away. Thanks a lot Serrano!

  • @kennethm.4998
    @kennethm.4998 1 month ago +1

    Best explanation of attention on the internet, hands down. Finally someone who explains the 'why' in the internals of the transformer.
    Thank you good sir.

  • @SeyyedMohammadLoghmanDastgheyb

    This is the best video that I have seen about the concept of attention! (I have seen more than 10 videos but none of them was like this.) Thank you so much! I am waiting for the next videos that you have promised! You are doing a great job!

  • @shannawallace7855
    @shannawallace7855 1 year ago +1

    I had to read this research paper for my Intro to AI class, and it's obviously written for people who already have a lot of background knowledge in this field. So, being a newbie, I was so lost lol. Thanks for breaking it down and making it easy to understand!

  • @johnschut164
    @johnschut164 1 year ago

    Your explanations are truly great! You have even understood that you sometimes have to ‘lie’ first to be able to explain things better. My sincere compliments! 👊

  • @chrisw4562
    @chrisw4562 10 months ago

    Thank you for the great tutorial. This is the clearest explanation I have found so far.

  • @antraprakash2562
    @antraprakash2562 11 months ago

    This is one of the best videos I've come across for understanding embeddings and attention. Looking forward to more such explanations that can simplify complex mechanisms in the AI world. Thanks for your efforts

  • @Vercoquin64
    @Vercoquin64 1 month ago

    Very instructive and mind-opening on a difficult topic. Thanks

  • @deveshnandan323
    @deveshnandan323 9 months ago

    Sir, you are a blessing to new learners like me. Thank you, big respect. ❤

  • @iantanwx
    @iantanwx 6 months ago

    The most intuitive explanation of QKV I've found, as someone with only an elementary understanding of linear algebra.

  • @guitarcrax127
    @guitarcrax127 1 year ago +3

    Amazing video. It pushed my understanding of attention forward by quite a few steps and helped me build an intuition for what's happening under the hood. Eagerly waiting for the next one

  • @RoyBassTube
    @RoyBassTube 7 months ago

    Thanks!
    This is one of the best explanations of Q, K & V I've heard!

  • @awinashjha
    @awinashjha 1 year ago

    This probably is "the best video" on this topic

  • @celilylmaz4426
    @celilylmaz4426 1 year ago

    This video has the best explanations of the QKV matrices and linear layers among the resources I've come across. I don't know why, but people seem uninterested in explaining what's really happening at each step we take, which results in loads of vague points. Still, the video could have been improved further with more concrete examples and numbers. Thank you.

  • @vasanthakumarg4538
    @vasanthakumarg4538 1 year ago

    This is the best video I have seen explaining the attention mechanism. Keep up the good work!

  • @subterraindia5761
    @subterraindia5761 4 months ago +1

    Awesome. You explained everything very well. It made life easy for me.

  • @mostinho7
    @mostinho7 1 year ago +1

    12:30 attention mechanism finding similarity (scaled dot product or cosine similarity) between each word in the sentence and every other word
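    A toy numpy sketch of that similarity step (the embeddings below are invented for illustration, not the video's numbers):

    ```python
    import numpy as np

    # Invented 2-D embeddings for three words
    emb = {
        "an":     np.array([0.1, 0.2]),
        "orange": np.array([1.0, 0.9]),
        "phone":  np.array([0.9, 1.1]),
    }

    for a in emb:
        for b in emb:
            dot = emb[a] @ emb[b]                                          # dot product similarity
            cos = dot / (np.linalg.norm(emb[a]) * np.linalg.norm(emb[b]))  # cosine similarity
            print(f"{a:>6} ~ {b:<6}  dot={dot:5.2f}  cos={cos:4.2f}")
    ```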

  • @BrikeshKumar987
    @BrikeshKumar987 1 year ago +1

    Thanks!

    • @SerranoAcademy
      @SerranoAcademy  1 year ago

      Thank you so much for your kindness @BrikeshKumar987!

  • @BrikeshKumar987
    @BrikeshKumar987 1 year ago

    Thank you so much!! I watched several videos and none could explain the concept so well

    • @SerranoAcademy
      @SerranoAcademy  1 year ago

      Thanks, I'm so glad you enjoyed it! Lemme know if you have suggestions for more topics to cover!

  • @cooperwu38
    @cooperwu38 10 months ago +1

    Super clear! Great video!!

  • @user-um4di5qm8p
    @user-um4di5qm8p 7 months ago

    By far the best explanation. Thanks for sharing!

  • @mayyutyagi
    @mayyutyagi 6 months ago

    Now whenever I watch a Serrano video, I like it first and then start watching, because I know the video is going to be outstanding as always.

  • @devmum2008
    @devmum2008 9 months ago +1

    This is a great video, with real clarity on Keys, Queries, and Values. Thank you

  • @0xSingletOnly
    @0xSingletOnly 11 months ago

    I'm going to try to implement self-attention and multi-head attention myself, thanks so much for doing this guide!

  • @kylelau1329
    @kylelau1329 1 year ago

    I've watched over 10 Transformer architecture tutorial videos, and this one is so far the most intuitive way to understand it! Really good work! Yeah, natural language processing is a hard topic; this tutorial kind of opens up the black box of the large language model.

  • @brandonheaton6197
    @brandonheaton6197 1 year ago

    Amazing explanation. I am a professional pedagogue and this is stellar work

  • @Wise_Man_on_YouTube
    @Wise_Man_on_YouTube 10 months ago

    "This step is called softmax." 😮😮😮
    Today I understood why softmax is used. Such a beautiful function. And such a great way to demonstrate it.

  • @tankado_ndakota
    @tankado_ndakota 7 months ago

    Amazing video. That's what I was looking for. I needed to know the mathematical background to understand what is happening behind the scenes. Thank you sir!

  • @glacierxs6646
    @glacierxs6646 5 months ago

    OMG this is so well explained! Thank you so much for the tutorials!

  • @YahyaMohand-r7f
    @YahyaMohand-r7f 1 year ago

    The best explanation I've ever seen of the attention mechanism, amazing

  • @alieskandarian5258
    @alieskandarian5258 11 months ago

    It was fascinating to me. I searched a lot for the math explained like this and couldn't find it; thanks for this.
    Please do more 😅 with more complex ones

  • @sreelakshminarayanan.m6609
    @sreelakshminarayanan.m6609 8 months ago

    Best video for getting a clear understanding of Transformers

  • @danielmoore4311
    @danielmoore4311 1 year ago +1

    Excellent job! Please continue making videos that break down the math.

  • @lijunzhang2788
    @lijunzhang2788 1 year ago +1

    Great explanation. I was waiting for this after your first video on the attention mechanism! You are so talented at explaining things in easily understandable ways! Thank you for the effort put into this, and keep up the great work!

  • @knobbytrails577
    @knobbytrails577 1 year ago

    Best video on this topic so far!

  • @deniz517
    @deniz517 1 year ago

    The best video I have ever watched about this!

  • @debnath22026
    @debnath22026 5 months ago

    Damn! There's no better video to understand Attention than this!!

  • @TheMotorJokers
    @TheMotorJokers 1 year ago +1

    Thank you, really good job on the visualizations! They make the process really understandable.

  • @rollingstone1784
    @rollingstone1784 8 months ago +1

    @SerranoAcademy
    If you want to arrive at the same notation as in the mentioned paper, Q times K_transpose, then the orange is the query and the phone is the key here. Then you calculate q times Q times K_transpose times k_transpose (as mentioned in the paper).
    Remark: the paper uses "sequences", described as "row vectors". However, usually one uses column vectors. Using row vectors, the linear transformation is a left multiplication a times A, and the dot product is written as a times b_transpose. Using column vectors, the linear transformation is A times a, and the dot product is written as a_transpose times b. This, in my opinion, is the standard notation, e.g. writing Ax = b and not xA = b.
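    The two conventions in this remark, written out side by side (a notational sketch only, using W_Q and W_K for the learned matrices):

    ```latex
    % Row-vector convention (as in the paper): each token is a row of X
    \mathrm{scores} = (X W_Q)(X W_K)^\top = X \, W_Q W_K^\top \, X^\top

    % Column-vector convention (the usual Ax = b style): query q and key k as columns
    \mathrm{score}(q, k) = (W_Q q)^\top (W_K k) = q^\top W_Q^\top W_K \, k
    ```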

  • @Ludwighaffen1
    @Ludwighaffen1 1 year ago +3

    Great video series! Thank you! That helped a ton 🙂
    One small remark: the concept of the "length" of a vector that you use here confused me. Here, I guess you take the point of view of a programmer: len(vector) outputs the number of dimensions of the vector. However, for a mathematician, the length of a vector is its norm, also called its magnitude (the square root of x^2 + y^2).
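    The two meanings of "length" in that remark, spelled out in a small numpy sketch:

    ```python
    import numpy as np

    v = np.array([3.0, 4.0])

    print(len(v))             # 2   -> number of dimensions (the programmer's "length")
    print(np.linalg.norm(v))  # 5.0 -> Euclidean length/magnitude: sqrt(3**2 + 4**2)

    # The "scaled" in scaled dot-product attention divides by the square root of the
    # dimension (sqrt(2) here), not by the norm.
    ```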

  • @rollingstone1784
    @rollingstone1784 8 months ago

    @SerranoAcademy
    At 13:23, you show a matrix-vector multiplication with a column vector (rows of the table times columns of the vector) by right-multiplication. On the right side, in addition to "is sent to", maybe you could use the icon "orange'" (orange prime). This would show the multiplication more clearly.
    Remark: you use a matrix-vector multiplication here (using a row of the matrix and the words as a column on the right of the matrix). If you use row vectors, the word vector should be placed horizontally on the left of the matrix, and a column of the matrix has to be used in the explanation. The result is then a row vector again (maybe a bit hard to sketch).

  • @BABA-oi2cl
    @BABA-oi2cl 1 year ago

    Thanks a lot for this. I always got terrified of the maths that might be there but the way you explained it all made it seem really easy ❤

  • @PeterGodek2
    @PeterGodek2 1 year ago

    Best video so far on this topic

  • @januaymagori4642
    @januaymagori4642 1 year ago +2

    Today I understood the attention mechanism better than ever before

  • @li-pingho1441
    @li-pingho1441 1 year ago

    Your video is the best of all time!!!!!!!!!!! Better than the MIT course

  • @davidking545
    @davidking545 1 year ago

    Thank you so much! the image at 24:29 made this whole concept click immediately.

  • @MSGMSUSA
    @MSGMSUSA 1 year ago

    Wow!!! Now I understand the attention mechanism.
    I did not understand a bit of it when learning about this in an expensive AI course

  • @MarkusEicher70
    @MarkusEicher70 1 year ago +2

    Hi Luis. Thank you for this video. I'm sure this is a very good way to explain this complex topic, but I just can't get it into my brain yet. I'm currently doing the Math for Machine Learning specialization on Coursera, brushing up my algebra and calculus skills, which are way too low. In any case, you got me involved in this, and now I will grind through it till I make it. I'm sure the pain will lessen and the fog will lift. 😊

  • @joshuaohara7704
    @joshuaohara7704 1 year ago

    Amazing video! Took my intuition to the next level.

  • @epistemophilicmetalhead9454
    @epistemophilicmetalhead9454 7 months ago +1

    If vectors are scaled to length 1, dot product = cosine similarity.
    Softmax is chosen to deal with negative values: a negative score means a smaller softmax output, so a lesser contribution of that word to this word's embedding.
    (continue from 13:00)
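    Both observations in this note can be checked numerically; a small sketch with invented vectors:

    ```python
    import numpy as np

    a, b = np.array([1.0, 2.0]), np.array([2.0, 0.5])
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    a_hat, b_hat = a / np.linalg.norm(a), b / np.linalg.norm(b)
    assert np.isclose(a_hat @ b_hat, cos)  # unit vectors: dot product == cosine similarity

    scores = np.array([2.0, -1.0, 0.3])              # similarity scores, possibly negative
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax: all positive, sums to 1
    print(weights)  # the -1.0 score still contributes, just with a small weight
    ```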

  • @o.k.4599
    @o.k.4599 10 months ago

    I haven't blinked my eyes for a sec. 👏🏼🙏🏼

  • @SaeclumSolvet
    @SaeclumSolvet 4 months ago

    Thank you @Serrano.Academy, very useful video. The only thing that is a bit misleading is around 24:50, where Q and K are implied to be multiplied with the word embeddings to produce the cosine distance, when in fact the embeddings are already included in Q and K. I guess you are using Wq, Wk interchangeably with Q, K for simplicity.
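    The distinction drawn here, as a shape-level sketch (all sizes invented for illustration): the learned matrices are Wq and Wk, and the paper's Q and K already have the embeddings folded in:

    ```python
    import numpy as np

    n, d_model, d_k = 3, 8, 4            # 3 tokens; sizes are arbitrary here
    X  = np.random.randn(n, d_model)     # word embeddings, one row per token
    Wq = np.random.randn(d_model, d_k)   # learned query projection
    Wk = np.random.randn(d_model, d_k)   # learned key projection

    Q = X @ Wq                           # queries: embeddings already included
    K = X @ Wk                           # keys
    scores = Q @ K.T / np.sqrt(d_k)      # the similarities that get softmaxed
    ```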

  • @joehannes23
    @joehannes23 1 year ago

    Great video, I finally understood all the concepts in their context

  • @syedmustahsan4888
    @syedmustahsan4888 3 months ago +1

    Thank You very much sir.
    I am so pleased by the way you teach. Alhumdulillah. Thank GOD.
    However, I was unable to grasp the key, query, values part.
    Thank You Very Much

  • @pavangupta6112
    @pavangupta6112 1 year ago

    Very well explained. Got a bit closer to understanding attention models.

  • @Hiyori___
    @Hiyori___ 10 months ago

    God-sent video. So incredibly well put

  • @saintcodded2918
    @saintcodded2918 10 months ago

    This is powerful yet so simple. Thanks