Transformers for beginners | What are they and how do they work

  • Published 29 Jun 2023
  • Over the past five years, Transformers, a neural network architecture, have completely transformed state-of-the-art natural language processing.
    *************************************************************************
    For queries: you can comment in the comment section or mail me at aarohisingla1987@gmail.com
    *************************************************************************
    The encoder takes the input sentence and converts it into a series of numbers called vectors, which represent the meaning of the words. These vectors are then passed to the decoder, which generates the translated sentence.
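    A minimal sketch of that word-to-vector lookup in Python (toy vocabulary and sizes; the values here are random, whereas in a real transformer they are learned during training):
    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    vocab = {"i": 0, "love": 1, "cats": 2}  # toy vocabulary
    d_model = 8                             # embedding size (a hyperparameter)
    embedding_table = rng.normal(size=(len(vocab), d_model))  # learned in practice

    sentence = ["i", "love", "cats"]
    vectors = np.stack([embedding_table[vocab[w]] for w in sentence])
    print(vectors.shape)  # (3, 8): one d_model-dimensional vector per word
    ```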
    Now, the magic of the transformer network lies in how it handles attention. Instead of looking at each word one by one, it considers the entire sentence at once. It calculates a similarity score between each word in the input sentence and every other word, giving higher scores to the words that are more important for translation.
    To do this, the transformer network uses a mechanism called self-attention. Self-attention allows the model to weigh the importance of each word in the sentence based on its relevance to other words. By doing this, the model can focus more on the important parts of the sentence and less on the irrelevant ones.
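    A small sketch of the scaled dot-product self-attention described above, with toy shapes and random weights (following the formulation in "Attention Is All You Need"):
    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        # Project each word vector into a query, a key, and a value
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)     # similarity of every word with every other word
        weights = softmax(scores, axis=-1)  # higher weight = more relevant to this word
        return weights @ V                  # weighted mix of the value vectors

    rng = np.random.default_rng(0)
    X = rng.normal(size=(3, 8))                              # 3 words, d_model = 8
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)               # (3, 8)
    ```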
    In addition to self-attention, transformer networks also use something called positional encoding. Since the model treats words as individual entities, it doesn't have any inherent understanding of word order. Positional encoding helps the model to understand the sequence of words in a sentence by adding information about their position.
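    One common scheme, the sinusoidal encoding from the original transformer paper, computes these position numbers from a fixed formula rather than learning them; a sketch:
    ```python
    import numpy as np

    def positional_encoding(seq_len, d_model):
        pos = np.arange(seq_len)[:, None]   # word positions 0 .. seq_len-1
        i = np.arange(d_model)[None, :]     # embedding dimensions
        angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
        pe = np.zeros((seq_len, d_model))
        pe[:, 0::2] = np.sin(angles[:, 0::2])  # even dimensions use sine
        pe[:, 1::2] = np.cos(angles[:, 1::2])  # odd dimensions use cosine
        return pe

    # The encoding is simply added to the word embeddings:
    # X = word_embeddings + positional_encoding(seq_len=3, d_model=8)
    ```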
    Once the encoder has calculated the attention scores and combined them with positional encoding, the resulting vectors are passed to the decoder. The decoder uses a similar attention mechanism to generate the translated sentence, one word at a time.
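    As a rough illustration of "one word at a time", here is a greedy decoding loop; `decoder_step` is a hypothetical stand-in for a trained decoder that returns next-word probabilities:
    ```python
    def greedy_decode(decoder_step, encoder_vectors, max_len=20):
        output = ["<start>"]
        for _ in range(max_len):
            # The decoder attends to the encoder output and the words emitted so far
            probs = decoder_step(encoder_vectors, output)  # dict: word -> probability
            next_word = max(probs, key=probs.get)          # pick the most likely word
            if next_word == "<end>":
                break
            output.append(next_word)
        return output[1:]  # drop the <start> token
    ```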
    Transformers are the architecture behind GPT, BERT, and T5.
    #transformers #naturallanguageprocessing #nlp
  • Science & Technology

COMMENTS • 92

  • @lyeln · 3 months ago · +7

    This is the only video around that REALLY EXPLAINS the transformer! I immensely appreciate your step-by-step approach and the use of the example. Thank you so much 🙏🙏🙏

  • @MrPioneer7 · 4 days ago

    I had watched 3 or 4 videos about transformers before this tutorial. Finally, this tutorial made me understand the concept of transformers. Thanks for your complete and clear explanations and your illustrative example. Especially, your description of query, key, and value was really helpful.

  • @mdfarhadhussain · 4 months ago · +2

    Very nice high-level description of the Transformer

  • @exoticcoder5365 · 10 months ago

    Very well explained! I could instantly grasp the concept! Thank you, Miss!

  • @user-mv5bo4vf2v · 4 months ago

    Hello and thank you so much. One question: I don't understand where the numbers in the word embeddings and positional encodings come from.

  • @PallaviPadav · 1 month ago · +1

    I came across this video by accident; it is very well explained. You are doing an excellent job.

  • @aditichawla3253 · 5 months ago

    Great explanation! Keep uploading such nice informative content.

  • @user-kx1nm3vw5s · 6 days ago

    It's great. I have only one query: what is the input of the masked multi-head attention? It's not clear to me; kindly guide me on it.

  • @MAHI-kj5tg · 6 months ago

    Just amazing explanation 👌

  • @BharatK-mm2uy · 1 month ago

    Great Explanation, Thanks

  • @harshilldaggupati · 9 months ago · +1

    Very well explained, even with such a niche viewer base. Keep making more of these, please.

  • @servatechtips · 10 months ago

    This is a fantastic, very good explanation. Thank you so much for the good explanation.

  • @imranzahoor387 · 3 months ago

    Best explanation. I saw multiple videos, but this one provides the clearest concept. Keep it up.

  • @VishalSingh-wt9yj · 4 months ago

    Well explained. Before watching this video I was very confused about how transformers work, but your video helped me a lot.

  • @satishbabu5510 · 10 days ago

    Thank you very much for explaining and breaking it down 😀 So far, your explanation is the easiest to understand compared to other channels. Thank you very much for making this video and sharing it with everyone ❤

  • @user-dl4jq2yn1c · 9 days ago

    Best video ever, explaining the concepts in a really lucid way, ma'am. Thanks a lot, please keep posting. I subscribed 😊🎉

  • @pandusivaprasad4277 · 4 months ago

    Excellent explanation, madam... thank you so much.

  • @soravsingla6574 · 6 months ago · +1

    Hello Ma’am
    Your AI and Data Science content is consistently impressive! Thanks for making complex concepts so accessible. Keep up the great work! 🚀 #ArtificialIntelligence #DataScience #ImpressiveContent 👏👍

  • @debarpitosinha1162 · 1 month ago

    Great explanation, ma'am.

  • @bijayalaxmikar6982 · 4 months ago

    Excellent explanation.

  • @vimalshrivastava6586 · 10 months ago

    Thanks for making such an informative video. Could you please make a video on transformers for image classification or image segmentation applications?

  • @soravsingla6574 · 7 months ago

    Very well explained

  • @TheMayankDixit · 7 months ago

    Nice explanation Ma'am.

  • @thangarajerode7971 · 10 months ago

    Thanks. The concept is explained very well. Could you please add one custom example (e.g., finding question similarity) using Transformers?

  • @vasoyarutvik2897 · 6 months ago

    Very good video, Ma'am. Love from Gujarat, keep it up.

  • @manishnayak9759 · 6 months ago

    Thanks Aarohi 😇

  • @AbdulHaseeb091 · 1 month ago

    Ma'am, we are eagerly hoping for a comprehensive Machine Learning and Computer Vision playlist. Your teaching style is unmatched, and I truly wish your channel reaches 100 million subscribers! 🌟

    • @CodeWithAarohi · 1 month ago · +1

      Thank you so much for your incredibly kind words and support!🙂 Creating a comprehensive Machine Learning and Computer Vision playlist is an excellent idea, and I'll definitely consider it for future content.

  • @sahaj2805 · 2 months ago

    The best explanation of transformers that I have found on the internet. Can you please make a detailed, long video on transformers with theory, mathematics, and more examples? I am not clear about the linear and softmax layers and what is done after that, how training happens, and how transformers work on test data. Can you please make a detailed video on this?

    • @CodeWithAarohi · 2 months ago · +1

      I will try to make it after finishing the work in the pipeline.

    • @sahaj2805 · 2 months ago

      @CodeWithAarohi Thanks, will wait for the detailed transformer video :)

  • @akshayanair6074 · 10 months ago

    Thank you. The concept has been explained very well. Could you please also explain how these query, key and value vectors are calculated?

    • @CodeWithAarohi · 10 months ago

      Sure, will cover that in a separate video.

  • @burerabiya7866 · 3 months ago

    Can you please upload the presentation?

  • @minalmahala5260 · 1 month ago

    Really very nice explanation ma'am!

  • @user-wh8vy9ol8w · 12 days ago

    Can you please let us know the input for masked multi-head attention? You just said "decoder". Can you please explain? Thanks.

  • @mahmudulhassan6857
    @mahmudulhassan6857 9 місяців тому

    Ma'am, can you please make one video on classification using multi-head attention with a custom dataset?

  • @_seeker423 · 3 months ago

    Can you also talk about the purpose of the 'feed-forward' layer? It looks like it's only there to add non-linearity. Is that right?

    • @abirahmedsohan3554 · 1 month ago

      Yes, you can say that... but maybe also to make key, query, and value trainable.

  • @user-gf7kx8yk9v · 7 months ago

    How can I get the PDFs, ma'am?

  • @_seeker423 · 3 months ago

    Question about query/key/value dimensionality: given that a query is a word that is looking for other words to pay attention to, and a key is a word that is being looked at by other words, shouldn't the query and key be vectors whose size equals the number of input tokens, so that when the dot product between query and key is taken, the querying word lines up (positionally) with the key and yields the self-attention value for the word?

    • @CodeWithAarohi · 3 months ago · +1

      The dimensionality of query, key, and value vectors in transformers is a hyperparameter, not directly tied to the number of input tokens. The dot product operation between query and key vectors allows the model to capture relationships and dependencies between tokens, while positional information is often handled separately through positional embeddings.
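      A toy illustration of the shapes involved (hypothetical sizes): the (n_tokens × n_tokens) score matrix comes out of the dot product itself, so d_k can be chosen freely:
      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n_tokens, d_model, d_k = 5, 16, 8   # d_k is a hyperparameter, not tied to n_tokens
      X = rng.normal(size=(n_tokens, d_model))
      Wq = rng.normal(size=(d_model, d_k))
      Wk = rng.normal(size=(d_model, d_k))

      Q, K = X @ Wq, X @ Wk   # both (n_tokens, d_k)
      scores = Q @ K.T        # (n_tokens, n_tokens): one score per pair of tokens
      print(scores.shape)     # (5, 5)
      ```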

  • @sahaj2805 · 2 months ago

    Can you please make a detailed video explaining the "Attention Is All You Need" research paper line by line? Thanks in advance :)

  • @niluthonte45 · 7 months ago

    Thank you, ma'am.

  • @akramsyed3628 · 6 months ago

    Can you please explain 22:07 onward?

  • @palurikrishnaveni8344 · 10 months ago

    Could you make a video on image classification with a vision transformer, madam?

  • @sukritgarg3175 · 2 months ago

    Great video, ma'am. Could you please clarify what you said at 22:20 once again... I think there was a bit of confusion there.

  • @tss1508 · 6 months ago

    I didn't understand what the input to the masked multi-head self-attention layer in the decoder is. Can you please explain it to me?

    • @CodeWithAarohi · 6 months ago · +1

      In the Transformer decoder, the masked multi-head self-attention layer takes three inputs: Queries (Q), Keys (K), and Values (V).
      Queries (Q): These are vectors representing the current positions in the sequence. They are used to determine how much attention each position should give to other positions.
      Keys (K): These are vectors representing all positions in the sequence. They are used to calculate the attention scores between the current position (represented by the query) and all other positions.
      Values (V): These are vectors containing information from all positions in the sequence. The values are combined based on the attention scores to produce the output for the current position.
      The masking in the self-attention mechanism ensures that during training, a position cannot attend to future positions, preventing information leakage from the future.
      In short, the masked multi-head self-attention layer helps the decoder focus on relevant parts of the input sequence while generating the output sequence, and the masking ensures it doesn't cheat by looking at future information during training.
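      A small sketch of the causal mask described above (toy sizes): positions above the diagonal, i.e. future tokens, are set to -inf so that softmax assigns them zero weight:
      ```python
      import numpy as np

      seq_len = 4
      scores = np.random.default_rng(0).normal(size=(seq_len, seq_len))
      mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)  # True above the diagonal
      scores[mask] = -np.inf  # a position can no longer attend to future positions
      ```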

  • @techgirl6451 · 6 months ago

    Hello ma'am, is this transform concept the same as for transformers in NLP?

    • @CodeWithAarohi · 6 months ago

      The concept of "transform" in computer vision and "transformers" in natural language processing (NLP) are related but not quite the same.

  • @KavyaDabuli-ei1dr · 3 months ago

    Can you please make a video on BERT?

  • @kadapallavineshnithinkumar2473 · 10 months ago

    Could you explain it with Python code? That would be more practical. Thanks for sharing your knowledge.

  • @saeed577 · 3 months ago

    I thought this was about transformers in CV, but all the explanations were in NLP.

    • @CodeWithAarohi · 3 months ago

      I recommend that you understand this video first and then watch this one: ua-cam.com/video/tkZMj1VKD9s/v-deo.html After watching these two videos, you will properly understand the concept of transformers used in computer vision. Transformers in CV are based on the idea of transformers in NLP, so it is easier to understand them if you learn in the order I suggested.

  • @Red_Black_splay · 1 month ago

    Gonna tell my kids this was Optimus Prime.

    • @CodeWithAarohi · 1 month ago

      Haha, I love it! Optimus Prime has some serious competition now :)

  • @jagatdada2.021 · 6 months ago

    Use a mic; the background noise is irritating.