Transformers for beginners | What are they and how do they work

  • Published 2 Dec 2024

COMMENTS • 160

  • @ashermai2962
    @ashermai2962 2 years ago +49

    This channel deserves more views and likes

  • @andybrice2711
    @andybrice2711 8 months ago +5

    Positional encodings are not that weird when you think of them as being similar to the hands on a clock: It's a way of representing arbitrarily long periods of time, within a confined space, with smooth continuous movement and no sudden jumps.
    Picture the tips of clock hands. Their vertical position follows a sine wave, their horizontal position follows a cosine wave. And we add precision with more hands moving at different speeds.
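
A minimal NumPy sketch of the sinusoidal encoding described above (the sin/cos formulation from the "Attention Is All You Need" paper; the sequence length and model size below are only illustrative):

```python
import numpy as np

def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Each pair of dimensions acts like a clock hand rotating at its own speed."""
    positions = np.arange(seq_len)[:, None]               # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]              # even dimension indices 2i
    angle_rates = 1.0 / np.power(10000.0, dims / d_model)
    angles = positions * angle_rates                       # (seq_len, d_model // 2)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # "vertical" position of the hand tip
    pe[:, 1::2] = np.cos(angles)   # "horizontal" position of the hand tip
    return pe

print(positional_encoding(10, 512).shape)  # (10, 512): one encoding per token position
```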

  • @pierluigiurru962
    @pierluigiurru962 1 year ago +26

    This is the clearest explanation of transformers I’ve found so far, and I have personally seen many while trying to wrap my head around them. No skimming over details. Very well done!

  • @Yaddu143
    @Yaddu143 1 year ago +4

    I really want you to talk about attention. Thank you, you're shining in this video.

  • @stevemassicotte4068
    @stevemassicotte4068 1 year ago +16

    @16:14, the binary table is wrong, there are two sevens.
    The second column should start with 8 and not a second 7.
    Attention is all you need ;)
    Thanks for the video!
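
For the record, the correct continuation of that counting table (a quick check, not a frame from the video):

```python
for n in range(16):
    print(n, format(n, '04b'))   # 7 -> 0111, and the next entry is 8 -> 1000
```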

    • @BCSEbadulIslam
      @BCSEbadulIslam 9 months ago

      Came here to comment the same 👍

  • @testing3562
    @testing3562 1 year ago +17

    I am a programmer and I have created many tools that were actually very useful. I even claim to have 10+ years of experience. But I feel very bad realizing that I am so dumb that I did not understand anything after the first 10 minutes of the video.

    • @s14vc
      @s14vc 8 months ago

      They explain it with apples and pears, but it is actually a very mathematical and elaborate process. If you're not the kind of person who easily remembers how the sine and cosine functions work and does matrix multiplication for fun, it is just a little bit harder to get.

    • @moonlight-td8ed
      @moonlight-td8ed 7 months ago +1

      BRUH JUST REWATCH IT AGAIN... THE VIDEO IS A 10/10

    • @rodneykingston6420
      @rodneykingston6420 3 months ago

      I too am a programmer. I don't know why everyone is saying this is such a great video. To understand all the information that is so rapidly and breezily dispensed by the speaker would clearly require a vast pre-existing knowledge of the subject. It is NOT for beginners.

    • @testing3562
      @testing3562 3 months ago

      @@rodneykingston6420 I am neither a beginner nor did I understand any intricate details of these concepts. But I realised that we need not go that deep into any concept in order to create an innovative product. Based on my own experience, I was depressed at first; then, despite my ignorance of these topics, I continued to create really useful working tools and the learning happened automatically. Life is short and we need not spend the best part of it learning every new budding technology that might fail. It is more than sufficient to know only the part that is required for the specific situation. The lesson learned the hard way is that too much AI is just a distraction and a waste of time.

    • @ajithsdevadiga1603
      @ajithsdevadiga1603 2 months ago

      @@rodneykingston6420 You got it right. You need to check Jay Alammar's blog on transformers, where he has clearly explained it in detail; after reading that you will get a clear picture, and at a later point you might want to check the research paper, which I am sure is not easy to understand at once. A couple of tutorials and blogs on this subject will make sense only if you have prior knowledge of neural networks and the math behind it.

  • @reshamgaire4188
    @reshamgaire4188 1 year ago +4

    Finally found a perfect video that cleared all my confusion. Thank you so much ma'am, may god bless you 🙏

  • @PeterKoman
    @PeterKoman 2 years ago +3

    Finally a transformer video that actually explains the theory in an understandable way. Many thanks.

  • @Zulu369
    @Zulu369 1 year ago +8

    This video is the best technical explanation I have seen in years. Although Transformers are a breakthrough in the field of NLP, I am convinced that they do not describe completely and satisfactorily the way humans process language.
    For all civilizations, spoken language predates written language in communications. Those who do not read and write still communicate clearly with others. This means humans do not represent natural language in their brains in terms of words, syntax and position of tokens but rather in terms of symbols, images and multimedia shows that make up stories we relate to.
    Written language comes only later as an extra layer of communication to express transparently these internal representations that we carry within ourselves. If AI is able to access and decode these internal representations, then written language, the extra layer, becomes a lot easier to understand, organize, and put on paper with simple techniques rather than using these intricate Transformers, which I consider temporary and unnatural ways of describing natural languages.

    • @rokljhui864
      @rokljhui864 1 year ago

      Your idea is represented above, in words, existing separately from your mind. Surely most intelligence is contained within written language, mathematical expression and images.

    • @Zulu369
      @Zulu369 1 year ago

      @@rokljhui864 As I explained above, written words make up THE extra layer that is actually not necessary once you learn more persuasive communications techniques.

    • @evetsnilrac9689
      @evetsnilrac9689 1 year ago +1

      ​@@rokljhui864 "Surely" is not how you start an intelligent hypothesis.
      You must explain the rationale for your belief since it is not at all readily apparent that the intelligence to process written language was not already in our brains so that we could conceive of and learn written language.

    • @evetsnilrac9689
      @evetsnilrac9689 1 year ago +1

      This is a crucial point to understand for all of us interested in fully harnessing what we perceive to be the true potential of this technology.
      I would start with the Adamic symbol-based language.

  • @akintilotimileyin6202
    @akintilotimileyin6202 2 months ago

    This is the best explanation of transformer architecture I have seen so far. Thanks.

  • @moeal5110
    @moeal5110 1 year ago +1

    This is the clearest and most resourceful video I've seen. Thank you for your hard work and for sharing these resources.

  • @mohamadhasanzeinali3674
    @mohamadhasanzeinali3674 2 years ago +4

    I have seen numerous videos about the Transformer architecture. In my opinion, your video is the best among them. Appreciate that.

    • @AssemblyAI
      @AssemblyAI  2 years ago +1

      Thank you, that is great to hear. :)

  • @ajithsdevadiga1603
    @ajithsdevadiga1603 2 months ago

    The first time I saw this video, I wondered what this lady was talking about; then reading Jay Alammar's blog on transformers gave me a clear picture of the underlying math. Rewatching this video after doing a bit of reading will actually help you connect the dots.

  • @dooseobkim2100
    @dooseobkim2100 1 year ago +3

    You are my savior; now I can actually get ready to read all of those AI-related papers which I was completely unaware of. I was stuck at the part of my thesis where I have to provide the theoretical background of ChatGPT. As a business student I’m super grateful to learn this computer science knowledge through your short lecture👍👍

  • @sivad2895
    @sivad2895 1 year ago +1

    The best video on transformer architecture with great explanations and charming presentation.

  • @abooshehrian
    @abooshehrian 2 months ago

    great summary. One of the densest videos I've watched yet it was explained with so many examples. Thank you!

  • @geekyprogrammer4831
    @geekyprogrammer4831 2 years ago +1

    This high quality video deserves a lot more views!

  • @nikhilshrestha4711
    @nikhilshrestha4711 1 year ago +4

    really love how you described the model. easier to understand 🙌

  • @vivekpetrolhead
    @vivekpetrolhead 11 months ago

    Best explanation for beginners I've seen besides statquest

  • @StoriesFable
    @StoriesFable 7 months ago

    I'm watching a lot of videos on Transformers, but this is exactly what I want. Thank you so much, ma'am. And also AssemblyAI.

  • @moonlight-td8ed
    @moonlight-td8ed 7 months ago

    cleanest and most informative video ever.. covered the whole Attention Is All You Need paper in 19 mins.. damn.. thank you MISRA TURP and assembly ai

  • @yourshanky
    @yourshanky 1 year ago +1

    Excellent explanation!! Sharp and clear. Thanks for sharing this.

  • @imagnihton2
    @imagnihton2 2 years ago +12

    This made the concept sound incredibly simple compared to some other sources... Amazing!

    • @AssemblyAI
      @AssemblyAI  2 years ago +1

      Great to hear, thank you!

  • @abinav92
    @abinav92 1 year ago

    Best video on intro to transformers!!!

  • @shubham-pp4cw
    @shubham-pp4cw 2 years ago +1

    Clear explanation of a quite complex topic, explained easily in a short period of time.

    • @AssemblyAI
      @AssemblyAI  2 years ago

      Glad to hear you liked it!

  • @lexflow2319
    @lexflow2319 2 years ago +3

    I don't understand why there are 6 decoders and encoders. The diagram shows 1 of each. Also, what is the output that goes as input to the decoder? Is that the last output from the final softmax?

  • @MrTheyosyos
    @MrTheyosyos 1 year ago +1

    "attentions for beginners" will be great :)

  • @salamander7715
    @salamander7715 1 year ago

    Seeing all the comments of people saying that this video made things simple just makes me feel stupid ahah! This video is amazing and the explanations are great, but I can't say I've understood more than 35% of the concepts. I'll have to watch this several times for sure.

  • @bdoriandasilva
    @bdoriandasilva 2 years ago +1

    Great video with a clear explanation. thank you!

  • @nikhil182
    @nikhil182 1 year ago

    Thank you so much!💓this has to be the best introduction video to Transformers. We are planning to use Transformers for our Video Processing project.

  • @kellenswain2049
    @kellenswain2049 1 year ago

    11:06 From reading the paper, 64 is not the square root of the length of the QKV vectors; it looks like it is d_model/h, where h is the number of heads used in multi-head attention. And so I assume d_model is the length of the QKV vectors?
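
For reference, the dimensions in the original paper work out the way this comment suggests; a worked restatement (the 512/8 split is from the paper, not from the video):

$$
d_{\text{model}} = 512,\qquad h = 8,\qquad d_k = d_v = \frac{d_{\text{model}}}{h} = \frac{512}{8} = 64,
$$
$$
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V,\qquad \sqrt{d_k} = 8.
$$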

  • @talktovipin1
    @talktovipin1 10 months ago

    Very nice explanation. Incorporating animations into the images while explaining would enhance comprehension and make it even more beneficial.

  • @donevo1
    @donevo1 1 year ago +1

    Very nice presentation! At 12:18 you say that attention is on 8 words. From reading the paper, I think that attention is on ALL the words, and 8 is the number of heads: each word vector (D=512) is split across the 8 heads, i.e. the vector dimension in each head is 64.
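
A small NumPy sketch of the multi-head split described above, using the dimensions from the paper (the random matrices stand in for learned projection weights, and the final output projection is omitted):

```python
import numpy as np

d_model, n_heads = 512, 8
d_k = d_model // n_heads                     # 64 dimensions per head
seq_len = 10                                 # e.g. a 10-token sentence
rng = np.random.default_rng(0)

x = rng.normal(size=(seq_len, d_model))      # token embeddings (+ positional encodings)
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) / np.sqrt(d_model) for _ in range(3))

def split_heads(m):
    # (seq_len, d_model) -> (n_heads, seq_len, d_k)
    return m.reshape(seq_len, n_heads, d_k).transpose(1, 0, 2)

Q, K, V = split_heads(x @ W_q), split_heads(x @ W_k), split_heads(x @ W_v)

# Every head attends to ALL tokens, just within its own 64-dimensional subspace.
scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)                   # (n_heads, seq_len, seq_len)
scores -= scores.max(-1, keepdims=True)                            # numerical stability
weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)   # softmax over all tokens
heads = weights @ V                                                 # (n_heads, seq_len, d_k)
out = heads.transpose(1, 0, 2).reshape(seq_len, d_model)            # concatenate heads back to 512
print(out.shape)                                                    # (10, 512)
```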

  • @amitsingh7684
    @amitsingh7684 7 months ago

    very nicely explained with clear details

  • @pyaephyo3633
    @pyaephyo3633 1 year ago

    i love it.
    Your explanation is easy to understand.

  • @GeorgeZoto
    @GeorgeZoto 1 year ago

    Great both low- and high-level description of transformers, thank you for creating this useful resource :)

  • @carlosroquesuarezgurruchag8681

    Thx for the time. Very clear explanation.

  • @kalyandey5195
    @kalyandey5195 10 months ago

    Awesome!! crystal clear explanation!!!

  • @otsogileonalepelo9610
    @otsogileonalepelo9610 1 year ago +3

    Just WOW! You broke down these concepts nicely. Thank you. Live long and prosper 🖖🖖

  • @krishnakumarik208
    @krishnakumarik208 1 year ago

    VERY GOOD EXPLANATION.

  • @SAM-t9r9b
    @SAM-t9r9b 1 year ago

    Overall I liked the video a lot. I just do not think it is enough to understand the whole concept. In particular, the masked multi-head attention layer was missing, and how the actual output of the model is created (translation etc.).

  • @anandanv2361
    @anandanv2361 1 year ago

    The way you explained the concept was awesome. It is very easy to follow.👍

  • @helgefredriksen
    @helgefredriksen 5 months ago

    Hi, could anyone explain how the Feed Forward part of the transformer learns? How does the loss function work? By masking out some of the input from the self-attention part during training and then comparing the real value with the predicted value?

  • @jayanthAILab
    @jayanthAILab 1 year ago

    Great work ma'am. You made it simple to understand.

  • @VaibhavPatil-rx7pc
    @VaibhavPatil-rx7pc 1 year ago

    Smile and learn, and a clean explanation!!!

  • @mbrochh82
    @mbrochh82 1 year ago

    I wish someone would explain how exactly the backpropagation works and what values exactly get nudged and tweaked during learning (and by which means)

  • @maryammoradbeigi6690
    @maryammoradbeigi6690 1 year ago

    Incredible explanation on the transformer... Amazing video. Thanks a lot

  • @DAVIDBYANSI-g1o
    @DAVIDBYANSI-g1o 1 year ago

    Thank you for the presentation, it has been so insightful. I wish you made a video about the word embeddings of the transformers. Thanks

  • @_joshwalter_
    @_joshwalter_ 1 year ago

    This is phenomenal!

  • @dannown
    @dannown 1 year ago

    This is a really lovely video -- very specific and detailed, but also followable. Thanks!

  • @wasifrock687
    @wasifrock687 1 year ago

    very well explained. thank you!

  • @hosseinsafari7514
    @hosseinsafari7514 1 month ago

    Thank you for this good explanation. please talk about attention.

  • @sanketdeshmukh491
    @sanketdeshmukh491 2 years ago

    Thank You for in depth explanation. Kudos!!!

  • @AbhinandanTete
    @AbhinandanTete 4 months ago

    thank you great explanation❤‍🔥

  • @ilkeasal7622
    @ilkeasal7622 4 months ago

    amazing explanation!

  • @bysedova
    @bysedova 1 year ago

    Please make a detailed video about self-attention! Thank you for your explanation! I like that you haven't used difficult math terms and have tried to explain things understandably with simple supporting material.

  • @goelnikhils
    @goelnikhils 2 years ago

    Amazing Explanation. Wow. Thanks a lot

  • @rokljhui864
    @rokljhui864 1 year ago

    Interesting. Sounds like a Fourier transform: obtaining a frequency distribution from a time series reveals the underlying frequency components and amplitudes. Are you essentially distilling the 'word cycles' from the sentences to obtain meaning from the word patterns across different word-combination lengths (from a single word to many thousands)? And does optimising the predictability of the next word automatically optimise for the appropriate word-combination lengths that align with actual meaning, i.e. are understanding 'peaks' optimised similarly to the fundamental frequencies in a Fourier transform?

  • @hussainsalih3520
    @hussainsalih3520 1 year ago

    Amazing, keep doing these amazing tutorials :)

  • @near_.
    @near_. 1 year ago

    What's the purpose of the output embedding?? What are we feeding into it???

  • @keithwins
    @keithwins 11 months ago

    Thank you that was excellent

  • @guimaraesalysson
    @guimaraesalysson 1 year ago +1

    Is there any video about the attention mechanism?

    • @AssemblyAI
      @AssemblyAI  1 year ago +2

      Not yet but it's a good idea!

  • @AddisuSeteye
    @AddisuSeteye 1 year ago +1

    Amazing explanation. I can't wait to watch your explanation on another AI related topic.

  • @6001navi
    @6001navi 1 year ago

    awesome explanation

  • @devraj241
    @devraj241 1 year ago

    great video, well explained!

    • @near_.
      @near_. 1 year ago

      What's the purpose of the output embedding?? What are we feeding into it???

  • @juliennoel3061
    @juliennoel3061 9 months ago

    Hi! Oh yeah, please do a specific video on 'attention' 🙂 - And also: 'Great job you are doing! Congrats! Thumbs up!!'

  • @thebiggerpicture__
    @thebiggerpicture__ 2 years ago

    Great video. Thanks!

  • @EmanueleOlivetti
    @EmanueleOlivetti 11 months ago

    Around 16:00 the binary representation repeats 7 twice, so the right part of the binary-encoded numbers is incorrect.

  • @rufus9322
    @rufus9322 1 year ago

    Thank you for your video 🤗
    How can I understand more details about the word embedding method in the Transformer model?

  • @niyatisrivastava4-yearb.te820
    @niyatisrivastava4-yearb.te820 11 months ago

    best explanation

  • @amigospot
    @amigospot 2 years ago

    Nice video for a fairly complex architecture!

  • @kartikgadad9285
    @kartikgadad9285 1 year ago

    Thanks for explaining Transformers. Can we have a video on Embeddings? It seems super interesting. The Positional Encoding part was difficult to understand, as it was treated only at an abstract level. Can we find a better video on positional encoding?

  • @rodi4850
    @rodi4850 2 years ago

    best explanation!

  • @RewanSallam-z3c
    @RewanSallam-z3c 1 year ago

    Great work, may Allah bless you and guide you 🥰🥰😍😍

  • @Techie-time
    @Techie-time 3 months ago

    Complete clarity only comes when you already know 70% of the subject.

  • @JayTheMachine
    @JayTheMachine 1 year ago

    Thank you so much, damn, love your explanations

  • @amparoconsuelo9451
    @amparoconsuelo9451 1 year ago

    I have read books and watched videos on Transformers. I still don't understand Transformers. I want to order an assembly Transformer kit from Amazon, work on it, and have a Transformer I understand the way I understand how Lotus 123 and WordStar were created.

  • @0Tyr
    @0Tyr 2 years ago

    Very informative channel, and well presented..

  • @archowdhury007
    @archowdhury007 1 year ago +1

    Beautifully explained. Loved it. First time I understood the transformer model so easily. Great work. Please keep creating more such content. Thanks.

  • @abrahamowos
    @abrahamowos 2 years ago

    A question @ 11:30: if, for instance, the values v are really large and you multiply them by the results from the softmax layer, won't the resulting weighted sum be too high after adding them together?

    • @JackoMcW
      @JackoMcW 1 year ago +1

      I'm not sure I understand your question or what you mean by "too high," but consider that all of those softmax values will be between 0 and 1 and sum to 1, so the result is a weighted average of the value vectors rather than something that blows up.
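
A tiny numerical illustration of that point (an added sketch, not part of the original reply): softmax weights are non-negative and sum to 1, so the weighted sum of value vectors stays within the range of the original values, even when those values are large.

```python
import numpy as np

scores = np.array([3.0, 50.0, -2.0])                 # one raw score is very large
weights = np.exp(scores - scores.max())              # numerically stable softmax
weights /= weights.sum()                             # non-negative, sums to 1

v = np.array([[1000.0, 2.0],                         # large value vectors
              [ 900.0, 1.0],
              [  10.0, 0.5]])

print(weights.sum())    # 1.0
print(weights @ v)      # a weighted average of the rows of v; each component stays
                        # between the smallest and largest entry of its column
```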

  • @wp1300
    @wp1300 1 year ago

    13:35 Positional encoding

    • @near_.
      @near_. 1 year ago

      What's the purpose of the output embedding?? What are we feeding into it???

  • @wenshufan
    @wenshufan 1 year ago

    Thank you for explaining the transformer in detail. However, I still don't get how you train the Q, K, V matrices. The attention mechanism is calculated from them. What type of feedback/truth can one use to train those matrix values then?

  • @andersonsystem2
    @andersonsystem2 3 years ago +3

    Good video

  • @actorjohanmatsfredkarlsson2293

    Great video. I’m missing how the attention layers (queries, keys and values) and the output weights are trained. Also, what was the values matrix for?

    • @MrAmgadHasan
      @MrAmgadHasan 1 year ago

      They are trained just like any neural network: we have a loss function that compares the model's output with the desired output, and then this loss is propagated backwards to the weights and biases and we use gradient descent to update the weights.
      Look up "back propagation" for more info, or just look up "how neural networks are trained".
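
A toy sketch of that idea (assuming PyTorch; the MSE loss and random target are stand-ins for the real objective, which for the original transformer is cross-entropy over translated tokens). The point is only that the Q/K/V projection matrices receive gradients and are updated like any other weights:

```python
import torch
import torch.nn as nn

d_model = 16
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=2, batch_first=True)
head = nn.Linear(d_model, d_model)                    # stand-in for the rest of the model
opt = torch.optim.SGD(list(attn.parameters()) + list(head.parameters()), lr=0.1)

x = torch.randn(1, 5, d_model)                        # a 5-token "sentence"
target = torch.randn(1, 5, d_model)                   # stand-in for the desired output

for step in range(3):
    out, _ = attn(x, x, x)                            # self-attention over the sequence
    loss = nn.functional.mse_loss(head(out), target)  # compare output with desired output
    opt.zero_grad()
    loss.backward()                                   # gradients flow into W_q, W_k, W_v
    opt.step()                                        # gradient descent nudges the weights
    print(step, loss.item())
```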

  • @nogur9
    @nogur9 1 year ago

    Thanks :)

  • @manjz7hm
    @manjz7hm 11 months ago

    You explained it well, but my brain is not digesting it 😂

  • @ankit9401
    @ankit9401 2 years ago +2

    You are awesome and I appreciate your efforts. After watching your video, I can say now I understand the transformer architecture.
    I have a query. According to the original BERT paper, two objectives are used during training: Masked Language Model and Next Sentence Prediction. Are these training objectives present in the original or all transformer models, or are they specifically used for BERT?
    I hope you make a video to explain attention and the BERT model in the future 😊

    • @AssemblyAI
      @AssemblyAI  2 years ago +1

      Great to hear the video was helpful Ankit! These are not the tasks that were in the original transformer model. But I think they are not specific to BERT. Other architectures also use same/similar tasks to train their models. We have a BERT video in the channel by the way. Here it is: ua-cam.com/video/6ahxPTLZxU8/v-deo.html
      - Mısra

    • @strongsyedaa7378
      @strongsyedaa7378 2 years ago

      @@AssemblyAI
      So instead of using RNN & LSTM we directly use Transformers?

  • @titusfx
    @titusfx 1 year ago

    I'm still concerned that all these papers don't have any mathematical rigour: there isn't one theorem, there is nothing. And it works....🤯 I can't imagine, once rigour starts coming in, what the results will be. I'm starting to believe that deep learning is physics for knowledge 😅

  • @DivyanshuBhoyar-j6e
    @DivyanshuBhoyar-j6e 1 year ago

    easiest explanation.

  • @robl39
    @robl39 1 year ago +3

    What is disappointing about this video is that you have to know about or understand 50 other concepts first

  • @juanpimentel4567
    @juanpimentel4567 5 months ago

    Why are there 6 encoders and 6 decoders? Someone please explain.

  • @snedunuri2946
    @snedunuri2946 2 months ago

    Too much detail at the beginning. "6 encoders" or "query/value vectors" mean very little. I recommend using a running example which introduces those concepts as needed and with the proper context.

  • @RAZZKIRAN
    @RAZZKIRAN 1 year ago

    thank u

  • @roshanverma1123
    @roshanverma1123 1 year ago

    Great simplified content! Thanks! Btw, you look beautiful!

  • @nikbl4k
    @nikbl4k 6 months ago

    great video, very interesting

  • @nirmesh44
    @nirmesh44 10 months ago

    make attention video

  • @denwo1982
    @denwo1982 9 months ago

    Chatgpt “explain this video to me as if I was an 8 year old”

  • @frizzsupertramp6434
    @frizzsupertramp6434 1 year ago +1

    At 16:44 the binary representations on the right side are wrong (number 7 comes twice, should start with 8 on the right side).
    (Sorry for being anal 😀)

    • @AssemblyAI
      @AssemblyAI  1 year ago

      Thanks for the heads up! Video editing gets tedious sometimes :)

  • @NielsSwimberghe
    @NielsSwimberghe 9 months ago

    "You might need to watch this multiple times".
    You don't say. 😅

  • @M7mdal7aj
    @M7mdal7aj 1 year ago

    Thanks, but the explanation is not detailed enough. Nice explanation of the positional embedding, though. Thanks.