Transformer Neural Networks, ChatGPT's foundation, Clearly Explained!!!

  • Published May 1, 2024
  • Transformer Neural Networks are the heart of pretty much everything exciting in AI right now. ChatGPT, Google Translate, and many other cool things are based on Transformers. This StatQuest cuts through all the hype and shows you how a Transformer works, one step at a time.
    NOTE: If you're interested in learning more about Backpropagation, check out these 'Quests:
    The Chain Rule: • The Chain Rule
    Gradient Descent: • Gradient Descent, Step...
    Backpropagation Main Ideas: • Neural Networks Pt. 2:...
    Backpropagation Details Part 1: • Backpropagation Detail...
    Backpropagation Details Part 2: • Backpropagation Detail...
    If you're interested in learning more about the SoftMax function, check out:
    • Neural Networks Part 5...
    If you're interested in learning more about Word Embedding, check out: • Word Embedding and Wor...
    If you'd like to learn more about calculating similarities in the context of neural networks and the Dot Product, check out:
    Cosine Similarity: • Cosine Similarity, Cle...
    Attention: • Attention for Neural N...
    If you'd like to support StatQuest, please consider...
    Patreon: / statquest
    ...or...
    YouTube Membership: / @statquest
    ...buying my book, a study guide, a t-shirt or hoodie, or a song from the StatQuest store...
    statquest.org/statquest-store/
    ...or just donating to StatQuest!
    paypal: www.paypal.me/statquest
    venmo: @JoshStarmer
    Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on twitter:
    / joshuastarmer
    0:00 Awesome song and introduction
    1:26 Word Embedding
    7:30 Positional Encoding
    12:53 Self-Attention
    23:37 Encoder and Decoder defined
    23:53 Decoder Word Embedding
    25:08 Decoder Positional Encoding
    25:50 Transformers were designed for parallel computing
    27:13 Decoder Self-Attention
    27:59 Encoder-Decoder Attention
    31:19 Decoding numbers into words
    32:23 Decoding the second token
    34:13 Extra stuff you can add to a Transformer
    #StatQuest #Transformer #ChatGPT

COMMENTS • 1.1K

  • @statquest 9 months ago +41

    To learn more about Lightning: lightning.ai/
    Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/

    • @NeoShameMan 9 months ago +2

      Personally, I find it clearer to link embeddings to hidden classes of words. I use character sheets as a metaphor, because what attention does is look not at the word but at the description in its sheet, with each attention head focusing on a different part of the description, which means a word's representation has multiple attentions on different hidden classes. Then, at the end, we look at the sheets transformed at each layer to find the next word. That also allows us to explain multimodality, i.e., making sure image input and text input share the same description sheet.

    • @statquest 9 months ago +2

      @@NeoShameMan Interesting.

    • @MrMehrd 9 months ago +1

      Transformers need more than one video, one for each part (multi-head attention, word embedding (sine & cosine similarity), training, &…).
      I was waiting a long time to reach the state of the art.

    • @statquest 9 months ago +2

      @@MrMehrd I thought about doing it that way - and that was the original plan. But my video on Attention convinced me that most people would rather have a single video that has everything in it all at once. However, I've provided links in this video's description to full length videos on each topic you are interested in.

    • @NeoShameMan 9 months ago +2

      @@statquest Oh, you mentioned that you don't know why this number of heads: that's a hardware optimization, i.e., they can be split across GPUs or memory pools, or reduce bandwidth, such that they can be parallelized or computed sequentially on a resource-starved machine.

  • @jediknight120 9 months ago +727

    As a Computer Science professor who teaches Machine Learning, this is probably my most anticipated video ever. I regularly use your videos to brush up on/review ML concepts myself and recommend them to my students as study aids. You explain these concepts in the clear, straightforward way that I aspire to. Thank you!

    • @statquest 9 months ago +41

      Thank you! BAM! :)

    • @yizhou6877 9 months ago +2

      Me too!

    • @Daigandar 9 months ago +11

      @@statquest Our data analysis professor also uses your videos as references and recommends you almost every session haha. I learned about this amazing channel from him.

    • @statquest 9 months ago +6

      @@Daigandar That's awesome! BAM! :)

    • @cienciadedados 9 months ago +1

      Well said. I do the same!

  • @alefalfa 9 months ago +264

    It's kinda hilarious that StatQuest videos give the impression they were meant for 5-year-olds, yet are exploring legitimately complex topics. No jargon, no overcomplicated diagrams. Josh really tries to explain things and not show off his superior understanding of neural networks. Thanks Josh!

  • @aayushsmarten 9 months ago +185

    This is the complet-est, precious-est, pur-est, brilliant-est video ever. Can't imagine how much work you've put into creating these illustrations. It's just brilliant. Hats off.

    • @statquest 9 months ago +5

      Wow, thank you!

    • @lumiey 8 months ago +5

      Did you just tokenize your comment?

    • @statquest 8 months ago +2

      @@lumiey I'm not sure I understand.

    • @lumiey 8 months ago +6

      @@statquest He just separated words like complet, est, precious, est, pur, est... like a tokenizer does (e.g., following -> follow, ing)

    • @aayushsmarten 8 months ago +2

      @@lumiey Haha

  • @AmitBhor 9 months ago +109

    22:12 8 heads because 8-GPU clusters are common and hence can compute in parallel. The embedding dimension is 512, and that leaves each head with a query size of 64. Great video 👍

    • @statquest 9 months ago +13

      Awesome!

    • @TheTimtimtimtam 9 months ago +3

      Thank you

    • @jakob2946 3 months ago +3

      Does the second part mean that each head only gets a portion of the embeddings?

    • @oliviervangoethem9365 A month ago

      @@jakob2946 Curious as well. I looked it up, and it seems that it's not true; every head is applied to all dimensions of the embedding. This also makes more sense to me, since the word embeddings should be looked at as a whole. Please correct me if I'm wrong.

    • @tekrunner987 A month ago +2

      @@oliviervangoethem9365 I don't know about more recent transformers, but in the initial architecture each attention head is applied to a projection of input embeddings, with reduced dimensionality (in the original "Attention is all you need" paper: embeddings have a dimension of 512, and each of the 8 attention heads has a dimension of 64). The reason for this is spelled out in the original paper: "Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this."
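
      To make those dimensions concrete, here is a minimal NumPy sketch (an illustration, not code from the video) of 8 heads that each project the full 512-dimensional embeddings down to 64 dimensions; the random weights stand in for learned parameters:

      import numpy as np

      d_model, n_heads = 512, 8
      d_head = d_model // n_heads              # 64: each head attends in a smaller subspace

      rng = np.random.default_rng(0)
      x = rng.normal(size=(10, d_model))       # 10 tokens, 512-dimensional embeddings

      heads = []
      for _ in range(n_heads):
          # Each head has its own projections; note that every projection sees
          # the FULL 512-dimensional embedding, then maps it down to 64 values.
          W_q = rng.normal(size=(d_model, d_head))
          W_k = rng.normal(size=(d_model, d_head))
          W_v = rng.normal(size=(d_model, d_head))
          q, k, v = x @ W_q, x @ W_k, x @ W_v
          scores = q @ k.T / np.sqrt(d_head)               # scaled dot-product attention
          scores -= scores.max(axis=1, keepdims=True)      # stabilize the softmax
          weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
          heads.append(weights @ v)                        # (10, 64) per head

      out = np.concatenate(heads, axis=-1)     # concatenating 8 heads restores (10, 512)
      print(out.shape)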

  • @midolion8510 9 months ago +76

    I can't imagine how much effort it took for AI scientists to make this model. I really admire your illustrations 😀

    • @statquest 9 months ago +2

      Thank you so much 😀!

  • @bobbymath2813 4 months ago +10

    How a model like this was created is just beyond me. There are so many different moving parts. You could write a whole book on the fully-connected network alone. Add in all the other stuff? Wow.
    Thank you, Josh, for explaining this so well!

    • @statquest 4 months ago +3

      Thanks! It's a little easier to understand how this model was created in the first place if you follow the whole Neural Networks playlist. You'll see how things changed, one step at a time, to eventually end up with a transformer: ua-cam.com/video/CqOfi41LfDw/v-deo.html

    • @bobbymath2813 4 months ago +1

      @@statquest Thanks Josh! I’ll check out that playlist. What you’re doing is so special to the world, and humanity is so indebted.

  • @linhdinh136 9 months ago +25

    Thanks, Josh, for keeping your promise to make a video about Transformers. I learned a lot and truly appreciate your effort in explaining this concept. I just placed an order to buy your book and made a donation to support the channel. I'm looking forward to more content on Machine Learning and hope to see videos about GPT and BERT models. ♥

    • @statquest 9 months ago +3

      Thank you so much!!! I really appreciate your support (TRIPLE BAM!!!). I hope to do the GPT video soon, but we'll see - the timeline is a little out of my control right now.

  • @nilson_001 8 months ago +28

    Thanks to your engaging visualization and clear explanation, I've grasped the Stanford CS224n course! Your content is neatly condensed but doesn't miss a thing. It's like you've taken all the complex concepts and served them up on a platter. Triple Bam!

    • @statquest 8 months ago +3

      Congratulations! TRIPLE BAM! :)

  • @user-rs9zs8kj7o 6 months ago +5

    You're the only person on social media who can explain such complicated topics in an easy-to-understand manner. Keep it up!

    • @statquest 6 months ago

      Thanks, will do!

  • @gvascons 9 months ago +40

    And so we reach the state of the art!! Congrats Josh :D

  • @fgfgdfgfgf 8 months ago +5

    I've been looking for a tutorial about transformers for a long time. This is the smoothest tutorial. It does not hide any complexities (making me confident that I actually understand the concept instead of a dumbed-down version for mortals who won't ever end up using the knowledge), but it also does not get lost while explaining those complexities, and it clearly calls out what else I can learn to understand the side concepts better. Super!!!

    • @statquest 8 months ago

      Thank you very much! :)

  • @coolsai 9 months ago +10

    BEST EVER VIDEO ABOUT CHATGPT! I watched many videos, but this video is just BAM!

  • @VeloFX 8 months ago +14

    The explanations in your videos are incredibly precise and efficient at the same time. There is nothing better to watch when learning any ML topic! 👍

    • @statquest 8 months ago

      Thank you very much! :)

  • @darshagarwal8307 9 months ago +1

    As always I loved the video! Thank you so much for producing such easy, fun and clear videos explaining these concepts. Always looking forward to more!

    • @statquest 9 months ago +1

      You are so welcome!

  • @MinChitXD 4 months ago +3

    I've only been learning machine learning for a month; I'm a pure business major. I've been working as a Data Analyst for 2 months as an internship, and I believe machine learning will be essential if I want to go further in this industry. Out of all the tutorial videos I've watched, yours brought up the clearest and most concise concepts for me to understand. The videos walked me through the series on neural networks, backpropagation, cross entropy with backpropagation, recurrent, LSTM, and convolutional neural networks, and lastly, this video. I really appreciate the understanding and amazing storytelling in your videos; your content always makes me eager to keep learning machine learning myself. Thanks a lot.

    • @statquest 4 months ago +1

      Thank you very much! I'm glad my videos are helpful.

  • @maximeentsi2205 9 months ago +8

    I tried really hard to understand transformers a few months ago; I can say that this video is a must-have.
    Thank you Josh

    • @statquest 9 months ago +1

      Glad it was helpful!

  • @urazc5917 6 months ago +2

    This video is a treasure in a world where everything is explained in 2 minutes. Thank you Josh!

    • @statquest 6 months ago

      Thank you very much!

  • @CharlesPayne A month ago +2

    Not to be a buzz kill, but I suffered a bad traumatic brain injury in my late 40s after being hit by an SUV while stopped on a motorcycle. I'm blessed I survived. At the time, my job dealt with engineering and architecting IT solutions, and I was looking forward to advancing my career into AI and Machine Learning. I was in a coma for a while, and I lost a lot of what I used to know. I now have learning disabilities and memory issues. I have improved some over the last few years, but if I'm being honest with myself, I wouldn't want me as an engineer, so I'm trying to move into management. I'm glad I ran across these videos. I purchased the PDF books and notebooks today, and I can honestly say they are well worth it. Josh, I'm so glad you created this material. Your books, notebooks, etc. are helping me slowly understand complex topics in hopes that I can stay relevant and continue to advance my career. Thanks again!

    • @statquest A month ago

      TRIPLE BAM!!! Thank you so much for supporting StatQuest and I wish you the best as you continue to learn about ML and Data Science! :)

  • @Joy-dn8yz 9 months ago +9

    Words cannot describe how happy I am to be able to watch this video. You really helped me with my studies. It is you who made me so interested in AI and made me think that I am actually able to understand what is going on. Thank you for your simplified models. They really help when learning more complex stuff on this or that theme. But every time there's a theme I do not know, the first thing I do is go to StatQuest. Thank you, Josh!

    • @statquest 9 months ago

      Hooray!!! Thank you very much!

  • @kurtosis4573 8 months ago +6

    I just finished watching almost all the videos on this channel, and I have to say that this is probably the best place to learn stats and machine learning. I also bought the ML book, and it captures the essence of the style of teaching on this channel really well and is very handy to go back to and quickly look up some details. You are doing great work!

    • @statquest 8 months ago +1

      Wow, thanks!

    • @meirgoldenberg5638 8 months ago

      Which book?

    • @statquest 8 months ago

      @@meirgoldenberg5638 I think he is referring to my book, The StatQuest Illustrated Guide to Machine Learning at statquest.org/statquest-store/

  • @limitlesslife7536 A month ago +2

    You are a blessing for anyone who is a visual learner. You have the gift of being able to explain complex topics in an easy way.

  • @jordanmuniz6167 5 days ago +1

    Your videos have to be the best instance of teaching I have ever seen! Thank you for the amazing work!

  • @kosukenishio9670 9 months ago +17

    For slowpokes like me: the example assumes a total vocabulary size of 4 for each language. Thanks Josh for providing some of the best content on the subject! Finally, K, Q, and V made clear sense.

  • @tdv8686 9 months ago +4

    OMG, I waited for this for so long!! Thank you, Josh!

  • @emanelsheikh6344 8 months ago +2

    I've searched a lot about transformers, but seriously, this is the best explanation I've ever gotten. Amazing! ❤

  • @michaelongmk 9 months ago +1

    Love these Quests! Kudos for explaining these complex data science concepts in layman's terms but also with great depth ❤

  • @user-ls9zb3dy1i 9 months ago +3

    Your neural networks playlist including this video gave me an intuitive understanding of transformers in less than a week which is something that would have taken an entire semester otherwise. I stumbled onto them while searching for a better understanding of Q,K,V, which everyone seems to say is as simple as querying a database…but what does that even mean?? Your explanations are brilliant, and I will be sharing with everyone I know who wants to learn more about this topic. I look forward to future videos. Thank you!

    • @statquest 9 months ago

      Thank you very much!!! I really appreciate it.

  • @harryspeaks 7 months ago +9

    Definitely the clearest walkthrough of the Transformer. It's very good that you put heavy emphasis on the parallelizability of the Transformer since, IMO, it is the most important feature that made the Transformer so useful.

  • @apah 9 months ago +2

    Man oh man, the crazy timing... I just watched your video on attention yesterday!! TRIPLE BAAAAM
    You rock Josh, thanks :D

  • @tangchunxin979 9 months ago +1

    These videos are really fantastic!!! The first time ever that something has helped me understand every single detail!! Thank you!!! Please keep posting!!

    • @statquest 9 months ago

      Thank you! Will do!

  • @TudorTatar-ny8zw 9 months ago +3

    The positional encoding explanation truly was a BAM!

  • @isseym8592 6 months ago +6

    As a computer science student getting into the field of NLP, I really can't thank you enough for making a video that breaks down the Transformer like this. Our uni doesn't go in depth on NLP-related topics, and despite the very brief explanations they do give, the uni expects us to have a full understanding of NLP. I can't thank you enough!

  • @fgh680 9 months ago +1

    The most AWESOME 36 MINUTES - What an explanation of Transformers!

    • @statquest 9 months ago

      Thank you very much!!! BAM! :)

  • @prathameshdinkar2966 4 months ago +1

    So nicely explained! I have searched for "how transformers work", but no one else on YouTube explained it with both the concepts and the math! Keep the good work going 😁👍

    • @statquest 4 months ago

      Glad you liked it!

  • @aanchaldogra 8 months ago +2

    I owe my data science job to so many beautiful people on YouTube; you are one of them.
    Thank you

  • @vinny2688 9 months ago +3

    THIS is what I've been waiting for!

  • @rikki146 9 months ago +5

    That is a lot of stuff in a single video!! For those who are wondering, ChatGPT is a decoder-only neural network, and the main difference between an encoder and a decoder is that a decoder uses masked attention - thus, ChatGPT is essentially an autoregressive model. Notice how ChatGPT generates a response in sequential order, from left to right. Anyway, good stuff!

    • @statquest 9 months ago +10

      Yep - I'd like to make a GPT video just to highlight the explicit use of masking (the self attention in the decoder in this video used masking implicitly).
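
      For anyone curious, here is a minimal NumPy sketch (an illustration, not the video's code) of the causal mask that makes a decoder autoregressive: token i is only allowed to attend to tokens 0 through i.

      import numpy as np

      seq_len = 4
      rng = np.random.default_rng(1)
      scores = rng.normal(size=(seq_len, seq_len))   # stand-in for the Q.K^T similarity scores

      mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)  # True above the diagonal
      scores[mask] = -np.inf                         # -inf turns into 0 after the softmax

      scores -= scores.max(axis=1, keepdims=True)    # stabilize the softmax
      weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
      print(np.round(weights, 2))                    # upper triangle is all zeros: no peeking ahead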

    • @technicalbranch99 9 months ago +4

      @@statquest Please do that video soon :) BAM

  • @shaktisd 4 months ago +1

    One of the best explanations of the encoder/decoder architecture, especially the self-attention part. I really liked the way you colored Q, K, and V to keep track of how things are moving. Looking forward to more such videos.

    • @statquest 4 months ago +1

      Thanks! I've also got a video on Decoder-Only Transformers: ua-cam.com/video/bQ5BoolX9Ag/v-deo.html and I'm working on one that shows the matrix algebra (color coded) of how these things are computed.

    • @shaktisd 4 months ago

      @@statquest Are all these topics covered in your book? I would love to read them in printed format.

    • @statquest 4 months ago +1

      @@shaktisd They'll be in my next book.

    • @shaktisd 4 months ago +1

      @@statquest looking forward to the next edition.

  • @nobiaaaa 6 months ago +3

    Only videos like this can have "clearly explained" in the title.

  • @wd8222 9 months ago +9

    The best explanation I found on the whole Internet! Although I admit I needed 2 full passes. Well done Josh!

    • @statquest 9 months ago +2

      Thanks! - Yes, this video packs in a ton of information, but I couldn't figure out any other way to make it work.

  • @REV_Pika 29 days ago +1

    It's amazing how you turn a 2-hour lecture into just 30 minutes and explain it way better. After finishing this video and realizing what I just grasped, it's mind-blowing how you can make such a complicated subject easy to understand. Thank you very much!

  • @chandraprakash934 9 months ago +1

    This video is amazing, just like your other videos! Thank you for spreading knowledge! Eagerly waiting for upcoming videos.

  • @tupaiadhikari 8 months ago +4

    Prof. Starmer, Thank You very much. You are an inspiration to all the aspiring Machine Learning Enthusiasts. Respect and Gratitude from India. #RESPECT

    • @statquest 8 months ago

      Thank you very much!

  • @williamflinchbaugh6478 8 months ago +3

    Great video! I'd love to see a pytorch + lightning tutorial on transformers similar to the LSTM video!

    • @statquest 8 months ago +1

      That's the plan!

  • @TekeshwarHirwani 5 months ago +1

    Best video on Transformers I have seen on YouTube! Amazing! Huge respect for you.

    • @statquest 5 months ago +1

      Thank you so much 😀!

  • @srikanthganta7626 5 months ago +1

    Thank you for such amazing illustrations! HOW I WISH I HAD THIS DURING MY STUDIES, BUT I'M JUST GLAD I GET TO LEARN THESE AS A WORKING PROFESSIONAL. THANK YOU SO MUCH FOR ALL THE CONTENT YOU MAKE. I'M SURE YOU MAKE THOUSANDS OF LIVES BETTER. YOU'RE TRULY AN INSPIRATION JOSH!

  • @berkk1993 9 months ago +4

    I've spent a good deal of time studying attention, the critical concept behind transformers. Don't anticipate a natural understanding of the Q, K, and V parameters. We aren't entirely certain about their function; we can only hypothesize. They could still function effectively even if we used four parameters instead of three. One crucial point to remember is that our intuitive understanding of neural networks (NNs) is far from complete. The matrices for Q, K, and V aren't static; they're learned via backpropagation over lengthy training periods, thus changing over time. As a result, it's not as certain as mathematical operations like 1+1=2. The same applies to the head count in transformers; we can't definitively state whether eight is a good number or not. We don't fully grasp what each head is precisely doing; we can only speculate.
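
    To underline the point that the Q, K, and V matrices are ordinary trainable weights rather than anything designed by hand, here is a minimal PyTorch sketch (the sizes are illustrative); in a real training loop, every backpropagation step would nudge these values:

    import torch

    torch.manual_seed(0)
    attn = torch.nn.MultiheadAttention(embed_dim=512, num_heads=8, batch_first=True)

    x = torch.randn(1, 10, 512)      # 1 sequence of 10 tokens
    out, _ = attn(x, x, x)           # self-attention: Q, K, and V all come from x

    # in_proj_weight stacks the learned Q, K, and V matrices; calling
    # backward() computes the gradients that training would use to update them.
    out.sum().backward()
    print(attn.in_proj_weight.shape, attn.in_proj_weight.grad.shape)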

    • @GreenCowsGames 9 months ago

      In visual transformers, we do understand what each head does. I guess heads trained on language are more difficult to interpret for us.

    • @nich.1918 9 months ago

      @@GreenCowsGames no, we don’t know that they do.

  • @patriciachang5079 9 months ago +2

    You really explain these concepts in a clear way! Will you do more explanation videos on statistics, like the Cox model for survival? Thanks! :)

    • @statquest 9 months ago

      I'll keep that in mind.

  • @AntiLawyer0 5 months ago

    The best video explaining Transformers I've ever seen. Thanks for your contribution!

    • @statquest 5 months ago

      Thank you very much! :)

  • @andrewdouglas9559 7 months ago +1

    I don't know how I'd learn DataScience/ML without this channel. Thanks so much for doing what you do!

  • @vidbot4037 9 months ago +5

    HE HAS DONE IT YET AGAIN!

  • @matthewhaythornthwaite9910 7 months ago +4

    Thanks Josh, another great video. I've been following your channel for years now, and your videos have massively helped me to change career, so huge thanks.
    On to the transformer network: there's something about the positional encoding that makes me feel a little uneasy. It feels like we've gone to great effort to train a word embedding model that can cluster similar words together in n-dimensional word embedding space (where n can be very large, often 1,000).
    Then applying positional encoding before our self-attention (whilst you very clearly explained with your example how important adding this information to the model is) seems to me to mess up all the effort we put into word embedding to get similar words clustered together. The word "pizza", instead of being positioned in the same place, can now jump around word/positional embedding space. Instead of one representation of "pizza" in space, it can now move around to be in many different positions, and not just move locally around its own "area": because we add the positional encoding to the word embedding, scaled equally, it can jump around a great deal of space. To me, it would seem that adding this much freedom to where the word "pizza" can be represented in space would make it much, much harder to train the model. Is my understanding correct, or is there something I'm missing?

    • @statquest 7 months ago +1

      I have a couple of thoughts on this. Maybe I should make a short video called "some thoughts about positional encoding". Anyway, here they are...
      Thought #1: Remember, the positional encoding is fixed, so the word embedding values have to take it into account when training. For example, since all of the positional encoding values are between -1 and 1, it is possible that the word embedding values will have larger magnitudes and thus not move around a lot when position is added to them.
      Thought #2: Because the periods of the squiggles get larger for larger embedding positions, after about the 20th position, the positional encoding values end up alternating 1 and 0 (in other words, after the 20th position, the positional encoding values are 1010101...), and it is in that space, from the 20th position to the 512th position (usually word embeddings have 512 or more positions), that the word embeddings are really learned; the first 20 positions are mostly just for positional encoding.
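
      A minimal NumPy sketch of the sinusoidal positional encoding from the original paper illustrates both thoughts: the values never leave [-1, 1], and at high embedding positions they barely move from (0, 1):

      import numpy as np

      def positional_encoding(n_tokens, d_model):
          pos = np.arange(n_tokens)[:, None]       # token positions 0, 1, 2, ...
          i = np.arange(d_model // 2)[None, :]     # index of each sin/cos pair
          angles = pos / (10000 ** (2 * i / d_model))
          pe = np.zeros((n_tokens, d_model))
          pe[:, 0::2] = np.sin(angles)             # even embedding positions
          pe[:, 1::2] = np.cos(angles)             # odd embedding positions
          return pe

      pe = positional_encoding(n_tokens=6, d_model=512)
      print(pe.min(), pe.max())       # always within [-1, 1]
      print(np.round(pe[:, -2:], 3))  # the last sin/cos pair: stuck near 0 and 1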

    • @matthewhaythornthwaite9910 7 months ago +1

      @@statquest Ah ok yeh that makes a lot of sense, thanks so much for taking the time to reply!

    • @matthewhaythornthwaite9910 6 months ago

      I've been having some additional thoughts on this and think I may have another reason (or rather an example) why adding positional encoding to the word embedding vectors makes sense. Josh, if you read this, feel free to shoot it down! Take the following sentence: "The weather is bad, but my mood is good". In this sentence, the first "is" refers to the weather, whereas the second "is" refers to my mood. Without positional encoding and only word embedding, the vector for "is" being passed into the attention unit will be the same for the two instances of the word in the sentence. If we don't use masked self-attention and compare the word "is" to every word, including itself, in the sentence, then the output for the word "is" from the self-attention unit should, I believe, be the same for both instances. Therefore, the unit will struggle to successfully differentiate the relative meanings of the two words. By adding in positional encoding prior to the self-attention unit, we're suddenly adding context to the word. The second "is" comes straight after the word "mood", so the position vector we're adding to each of the two words should be similar. However, because the word "weather" comes 6 words before the second "is", the positional vector we add will be quite different. Presumably this difference helps a self-attention unit to differentiate the relative meanings of the two instances of the word "is".
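
      A tiny sketch of that point, with made-up numbers (roughly what the sinusoidal formula gives at positions 2 and 8 for a 4-dimensional encoding): the two instances of "is" share one embedding, but adding the position-dependent values makes their inputs to self-attention different.

      import numpy as np

      is_embedding = np.array([0.5, -1.2, 0.8, 0.3])           # identical for both instances of "is"
      pe_position_2 = np.array([0.909, -0.416, 0.020, 1.000])
      pe_position_8 = np.array([0.989, -0.146, 0.080, 0.997])

      print(is_embedding + pe_position_2)   # the first "is"
      print(is_embedding + pe_position_8)   # the second "is": a different vector now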

    • @statquest 6 months ago

      @@matthewhaythornthwaite9910 That all sounds reasonable to me! BAM! :)

    • @luckusters8568 A month ago

      @@matthewhaythornthwaite9910 Another reason why you would want to add positional encoding, instead of doing something else, is that it preserves the dimensionality of the encoding. Imagine a theoretical encoding which is not added (like a one-hot encoding for each sequence location) and some linear (or non-linear, for that matter) transform to combine the word embedding and positional encoding. This is great in the sense that we do not pollute the embedding space with "arbitrary" offsets, but now our input sequence has to be of a fixed shape. Addition of orthogonal sinusoids guarantees a non-parametric, dimensionality-preserving encoding which does not fix the number of inputs we can give to the network.
      By the way, I think there is an analogy between adding positional encoding to embeddings and adding residual/skip connections to network outputs. Imagine that we have a network that is represented by the function f(x), and we have some target function F(x) which we want the network to learn. Imagine now that we modify our network to compute the function f(x) = h(x) + x (where "h(x)" is the network in front of the skip connection "h(x) + x"). Here too we pollute the output space of h(x) with the values of x. However, the network f can still learn F, so long as the network h learns the function h(x) = F(x) - x (such that f(x) = h(x) + x = F(x) - x + x = F(x)).
      I suppose for positional encoding something similar holds (although it probably has to learn a much more difficult internal pattern), where the network f(E(x)+q) learns to associate word embedding values E(x) which are "convolved" with some known offsets q, and probably learns to deconvolve E(x) and q (into some abstract representation). Given that E(x) + q may in theory be (nearly) non-unique (i.e., E(x_1) + q_1 ≈ E(x_2) + q_2), it might still be possible for the network to deconvolve the values into the correct inputs based on the context vector C, which is calculable from the rest of the input sequence. I suppose one can't exclude that the network may sometimes get this wrong, but in practical terms, it seems to work well enough.
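
      A minimal PyTorch sketch of the skip-connection idea described above (the sublayer h here is an arbitrary stand-in): because the input is added straight back to the output, h only has to learn the difference F(x) - x.

      import torch

      class ResidualBlock(torch.nn.Module):
          def __init__(self, d_model=512):
              super().__init__()
              self.h = torch.nn.Sequential(          # a stand-in sublayer h(x)
                  torch.nn.Linear(d_model, d_model),
                  torch.nn.ReLU(),
                  torch.nn.Linear(d_model, d_model),
              )

          def forward(self, x):
              return self.h(x) + x                   # f(x) = h(x) + x

      x = torch.randn(10, 512)
      print(ResidualBlock()(x).shape)                # (10, 512): dimensionality preserved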

  • @vohiepthanh9692 5 months ago +1

    Penta BAM!!! All of your videos are extremely easy to understand in a peculiar way, they have helped me a lot, thank you very much.

    • @statquest 5 months ago +1

      Glad you like them!

  • @user-et8es9vg5z 18 days ago +1

    I finally decided to buy your book, thinking there'd be transformers in the "Neural Network" section. But even though they're not there, I'm glad it supports you. Your content is the best popularisation I've seen. It's helping me a lot to refresh and understand things better than before as I start my internship in AI after a gap year.

    • @statquest 17 days ago +1

      I'm starting a book on neural networks very soon.

  • @vladimirmihajlovic1504 9 months ago +17

    Hey @statquest - here is a quick suggestion. Another convenient way to explain positional encoding might be by drawing a clock with a minute and an hour hand. Then, instead of sin() and cos() functions, you could simply track the x and y coordinates of the tips of the minute and hour hands. It gives a much more convenient intuition for the mechanics of the encoding.
    (a) it shows its repetitive nature
    (b) it ties the encoding position to a sense of time (which is intuitive, since speech is tied to time as well, and speech is the most common way we use language)
    (c) it explains why we use both sin() and cos() functions (to track the circular motion of the clock hand)
    (d) it provides intuition on why having two pairs of sin() and cos() functions is better than just one
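
    A minimal sketch of this clock analogy (the hand speeds are arbitrary): tracking the (x, y) coordinates of a fast hand and a slow hand reproduces the alternating sin/cos pairs of positional encoding at two different frequencies.

    import numpy as np

    for position in range(5):
        minute = 2 * np.pi * position / 10    # fast hand: short period
        hour = 2 * np.pi * position / 100     # slow hand: long period
        encoding = [np.sin(minute), np.cos(minute),   # (y, x) of the minute hand's tip
                    np.sin(hour), np.cos(hour)]       # (y, x) of the hour hand's tip
        print(position, np.round(encoding, 3))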

    • @statquest 9 months ago +2

      That's a great idea!

    • @Ali-Aslam 8 months ago

      So kind of like a unit circle?

  • @abdoualgerian5396 9 months ago +3

    We want more NLP material please, tiny bam!

  • @debayantalapatra2066 6 months ago +1

    This is the best of all that is available right now on Transformers. Thank you!!

  • @ruicai9084 9 months ago +1

    I feel so lucky that I just started learning about Transformers and found out StatQuest had made a video on them one day ago!

  • @jessiondiwangan2591 9 months ago +4

    (Verse 1)
    Here we are with another quest,
    A journey through the world of stats, no less,
    Data sets in rows and columns rest,
    StatQuest, yeah, it's simply the best.
    (Chorus)
    We're diving deep, we're reaching wide,
    In the land of statistics, we confide,
    StatQuest, on a learning ride,
    With your wisdom, we abide.
    (Verse 2)
    From t-tests to regression trees,
    You make understanding these a breeze.
    Explaining variance and degrees,
    StatQuest, you got the keys.
    (Chorus)
    We're scaling heights, we're breaking ground,
    In your lessons, profound wisdom's found,
    StatQuest, with your sound,
    We'll solve the mysteries that surround.
    (Bridge)
    With bar charts, line plots, and bell curves,
    Through distributions, we observe,
    With every lesson, we absorb and serve,
    StatQuest, it's knowledge we preserve.
    (Chorus)
    We're traversing realms, we're touching sky,
    In the field of data, your guidance, we rely,
    StatQuest, with your learning tie,
    You're the statistical ally.
    (Outro)
    So here's to Josh Starmer, our guide,
    To the realm of stats, you provide,
    With StatQuest, on a high tide,
    In the world of statistics, we stride.
    (End)
    So get ready, set, quest on,
    In the realm of stats, dawn upon,
    StatQuest, till the fear's gone,
    Keep learning, till the break of dawn.

    • @statquest 9 months ago +1

      THAT IS AWESOME!!! (what are the chords?)

    • @technicalbranch99 9 months ago +2

      @@statquest I V vi IV

  • @pratyushrao7979 3 months ago +1

    I had never struggled so much with understanding a concept before. But you cleared up all my doubts. Thank you!

    • @statquest 3 months ago

      Glad it helped!

    • @pratyushrao7979 3 months ago +1

      @@statquest I actually had a doubt as I was going through the decoder part. In the masked multi-head attention part of the typical transformer, what inputs do we provide? And is this part only used during training?

    • @statquest 3 months ago

      @@pratyushrao7979 I actually talk about masking in my video on decoder-only transformers here: ua-cam.com/video/bQ5BoolX9Ag/v-deo.html

  • @adithyakumar1111 6 months ago +1

    Thank you Josh for this fantastic video. One of the best videos for explaining the math behind the Query, Key, and Values.

  • @rishabhjain1468 9 months ago +1

    Much awaited and anticipated video!! Tysm

  • @NethaneelEdwards 9 months ago +1

    Been waiting daily for this. Here we go! Thanks!

  • @hamidrezahosseinkhani5980 5 months ago

    It was incredible: step-by-step, clear and concise, detailed enough. Great, great. Thank you for such an amazing video!

    • @statquest 5 months ago

      Glad you enjoyed it!

  • @sdsa007 8 months ago +2

    Transformers! More than meets the eye!? I think there is a lot of value in knowing this technology well! Thank you for your humor and learning support, I can't wait to return the favor!

  • @johnas3 9 months ago +1

    Thank you!! Still need some time to digest such a big concept… but worth the wait! Hooray 🎉

    • @statquest 9 months ago

      Yep - this is a big one! :)

  • @iwokeupdead1093 6 months ago +1

    I'm currently studying for job interviews, and I don't know what I would do without you. Thank you! When I get paid from my first job, I will donate to you :)

  • @TheSuperFlyo 7 months ago +1

    We have been waiting for this!! Awesome

    • @statquest 7 months ago

      Thank you very much!

  • @okay730 9 months ago +1

    I HAVE BEEN WAITING SO LONG FOR THIS VIDEO TYSM

  • @kidley17 9 months ago +1

    Although it is way beyond my area of knowledge, I love to watch your videos; they bring me a warm, nostalgic feeling from college and remind me how awesome statistics is.

    • @statquest 9 months ago +1

      That's awesome!

    • @kidley17 9 months ago +1

      @@statquest BAM 🔥

  • @starlord3286 3 months ago

    I like how he says "In this example we kept things super simple".
    Great video, thank you!

    • @statquest 3 months ago

      Glad you liked it!

  • @brianprzezdziecki 9 months ago +1

    Holy crap I’ve been waiting for this for months!!! Finally!

  • @silver_soul98 9 months ago +1

    Was waiting for this one. Thanks so much, man.

  • @yashsvidixit7169 3 months ago +1

    A lot of hard work must have gone into these videos, and the results are brilliant and super helpful. Thanks a lot for them.

    • @statquest 3 months ago

      Glad you like them!

  • @adarshvemali2966 3 months ago +1

    What a legend, there is no better channel than this!

  • @jyotsnachoudhary8999 9 months ago +1

    Thanks a lot @Josh for this comprehensive video on Transformers. It was really helpful!

    • @statquest 9 months ago

      bam! :)

    • @jyotsnachoudhary8999 9 months ago

      @@statquest Hey Josh, I have a doubt that I'd like your help with. I noticed that the decoded token for <EOS> is "vamos," but I expected it to be <EOS>, since the self-attention and encoder-decoder attention for <EOS> should be the highest. Could you please explain this?

    • @statquest 9 months ago

      @@jyotsnachoudhary8999 <EOS> is just what we use to initialize the decoding, and the network is trained to use the encoder-decoder attention to convert that to "vamos" (this transformer can also correctly translate "to go" to "ir").

    • @jyotsnachoudhary8999 9 months ago +1

      @@statquest Ah, okay. Got it. Thanks a lot :))

  • @erikleichtenberg3950 2 months ago +1

    1 million subscribers and still taking the time to answer questions from his viewers. Absolute legend

    • @statquest 2 months ago

      BAM! :)

    • @luisfernando5998 2 months ago

      Bet it’s an AI bot answering 🤖

    • @statquest 2 months ago +1

      @@luisfernando5998 Nope - it's me. I really read all the comments and respond to as many as I can.

    • @luisfernando5998 2 months ago +1

      @@statquest Do u have a team? 🤔 How do u manage the time? 🤯

    • @statquest 2 months ago

      @@luisfernando5998 It only takes about 30 minutes a day. It's not that big of a deal.

  • @Isakilll 8 months ago +1

    Just wanted to say that I understood everything about LMs (thanks to your videos) except the part on transformers, cuz the video wasn't out yet ahah. Well, now that my dear 'Squatch teacher has explained it, everything's clear. So really, THANK YOU for your hard work and dedication; it made all the difference in my understanding of Neural Networks in general.

  • @thomasdeneux 5 months ago +1

    Thank you very much for this impressive work! It is so important that we can all have a grasp of how this works.

  • @gyuio100 9 months ago +1

    Very clear, and it builds up the concepts in a step-by-step manner rather than starting with the overall architecture.

  • @dkkkkkkk 9 months ago +1

    This is a masterpiece! Appreciated it, Squatch!

  • @ItIsJan 9 months ago +1

    we have been waiting for so long! thanks

  • @akarimsiddiqui7572 29 days ago +1

    I finally found you! Thank you for this detailed yet super simple breakdown.

  • @spartan9729 7 months ago

    This is your only video that I had to watch twice to get a complete idea of the topic. Transformers really are a decently tough topic.

    • @statquest 7 months ago +1

      This is a lot of material for one video. But people wanted a single video, rather than a series of videos making incremental steps in learning, for transformers. Personally, I would have preferred a sequence of shorter videos, each focused on just one part. That said, there is something about seeing it all at once and getting that big picture. My book on neural networks (that I'm working on right now) will try to do both - take things one step at a time and give a big picture.

    • @spartan9729 7 months ago +1

      @@statquest Nice. Waiting for the book in that case.

  • @manuelapacheco9129 7 months ago +1

    Man, I love you for this video, thank you so much; there's absolutely no way I'd have understood all of this without your help.

    • @statquest 7 months ago

      Glad I could help!

  • @carleanoravelzawongso 8 months ago +2

    Please create more vids!! Your explanations are truly beautiful, such works of art. I couldn't agree more that you are one of the most brilliant teachers of statistics and ML! Actually, I wanna hug you right now haha

  • @JavierSanchez-yc8qo 29 days ago +1

    @statquest You are a true professional and a master of your craft. The field of ML is getting a little stronger each day because of content like this!

  • @pranav7471 4 days ago +1

    A great explanation of the Transformer. The one thing I found missing was that the decoder has masked self-attention, to prevent future embeddings from "leaking" into the current output.

    • @statquest 4 days ago +1

      For an encoder-decoder transformer, masked self-attention is only used during training, which this video doesn't cover. However, I cover it in my video on Decoder-Only Transformers here: ua-cam.com/video/bQ5BoolX9Ag/v-deo.html

  • @user-xp2gc7tm8h A month ago +1

    The best and simplest video for learning about Transformers ever!

  • @BooleanDisorder 2 months ago +1

    This is so mindblowingly complex and impressive. Great video! ❤
    The transformer architecture is also complex and impressive, ofc. 😊

  • @theunconventionalenglishman A month ago +1

    I've recently discovered your channel and I love it - the songs rule.
    Cheers mate

  • @abeeRidge 9 months ago +1

    What a clean, easy to follow video!

    • @statquest 9 months ago

      Thank you very much! :)

  • @tudor6210 A month ago +1

    Thank you!! One of the best explanations of transformers out there.

  • @bin4ry_d3struct0r 9 months ago

    The amount of detail that went into this must've taken A LOT of work. Kudos!!
    On a side note: the GPT variants are decoder-only (i.e., they do not employ an encoder component).

    • @statquest 9 months ago +2

      Yep. I'd like to create a video on decoder only transformers soon.

    • @bin4ry_d3struct0r 9 months ago +1

      @@statquest Looking forward to it!

  • @heike_p 3 months ago +1

    I'm doing an advanced master's in Artificial Intelligence. This whole NN playlist has saved me while studying for my exams! Thanks a bunch!

  • @jiayuemao4985 4 months ago +1

    Nice video! Thank you for explaining it so clearly! As a beginner, this video helps a lot.

  • @michaelcharlesthearchangel A month ago +1

    Bravo! Excellent teaching skills! Teaching weights and biases is not easy but, by God, man, you've done it!

    • @statquest A month ago

      Thank you very much!

  • @pypypy4228 9 months ago +1

    A long anticipated video! ❤

  • @howardhao-chunchuang6742 5 months ago +1

    Thank you for your wonderful work and crystal-clear explanations. Finally, K, Q, & V make sense.