What are Transformers (Machine Learning Model)?

  • Published 10 Mar 2022
  • Learn more about Transformers → ibm.biz/ML-Transformers
    Learn more about AI → ibm.biz/more-about-ai
    Check out IBM Watson → ibm.biz/more-about-watson
    Transformers? In this case, we're talking about a machine learning model, and in this video Martin Keen explains what transformers are, what they're good for, and maybe ... what they're not so good at.
    Download a free AI ebook → ibm.biz/ai-ebook-free
    Read about the Journey to AI → ibm.biz/ai-journey-blog
    Get started for free on IBM Cloud → ibm.biz/Bdf7QA
    Subscribe to see more videos like this in the future → ibm.biz/subscribe-now
    #AI #Software #ITModernization

COMMENTS • 136

  • @command.terminal
    @command.terminal 5 months ago +10

    In our graduation years we used to learn about something called a codec, as in coder-decoder (something like modem for modulator-demodulator, or balun for balanced-unbalanced, in the domain of communication technology). So as I understand from the video, transformers are just a fancy and advanced name for a codec, one that functions at a much bigger, capitalistic scale.

  • @ChatGPt2001
    @ChatGPt2001 23 days ago +8

    Transformers are a type of machine learning model used primarily for natural language processing (NLP) tasks. They have revolutionized the field of NLP due to their ability to handle long-range dependencies and capture complex linguistic patterns. Here are key points about transformers:
    1. **Attention Mechanism**: Transformers use an attention mechanism that allows them to weigh the importance of different words or tokens in a sequence when processing input data. This mechanism enables the model to focus on relevant information while ignoring irrelevant or redundant parts.
    2. **Self-Attention**: In a transformer model, self-attention refers to the process of computing attention scores between all pairs of words or tokens in an input sequence. This mechanism allows the model to capture dependencies between words regardless of their positions in the sequence.
    3. **Multi-Head Attention**: Transformers often employ multi-head attention, where multiple attention heads operate in parallel. Each attention head learns different aspects of the input data, enhancing the model's ability to extract meaningful information.
    4. **Encoder-Decoder Architecture**: Transformers typically consist of an encoder-decoder architecture. The encoder processes the input sequence, while the decoder generates the output sequence. This architecture is commonly used in tasks like machine translation and text generation.
    5. **Positional Encoding**: Since transformers do not inherently understand the order of tokens in a sequence like recurrent neural networks (RNNs), they use positional encoding to provide information about token positions. This allows the model to consider sequence order during processing.
    6. **Transformer Blocks**: A transformer model is composed of multiple transformer blocks stacked together. Each block contains layers such as self-attention layers, feedforward layers, and normalization layers. The repetition of these blocks enables the model to learn hierarchical representations of the input data.
    7. **BERT and GPT**: Two popular transformer-based models are BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pretrained Transformer). BERT is designed for tasks like sentiment analysis and question answering, while GPT focuses on generating human-like text.
    Transformers have significantly advanced the capabilities of NLP models, leading to breakthroughs in areas such as language translation, text summarization, sentiment analysis, and dialogue systems.
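    As a rough illustration of points 1 and 2 above, here is a minimal NumPy sketch of scaled dot-product self-attention; the names (self_attention, Wq, Wk, Wv) are purely illustrative, not from the video or the comment.

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        """X: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_head)."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])   # attention scores between every pair of tokens
        weights = softmax(scores, axis=-1)        # each row sums to 1
        return weights @ V                        # weighted mix of the value vectors

    rng = np.random.default_rng(0)
    seq_len, d_model, d_head = 5, 16, 8
    X = rng.normal(size=(seq_len, d_model))       # embeddings for 5 tokens
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8)
    ```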

  • @claudiamariariveraguevara7376
    @claudiamariariveraguevara7376 7 months ago +1

    Thank you for your enthusiasm and explanation, by far the best

  • @ArchieLuxtonGB
    @ArchieLuxtonGB 2 years ago +6

    Hi Martin from the Homebrew Challenge! ML and beer clearly go hand in hand!

  • @ms.barrio4402
    @ms.barrio4402 1 year ago +17

    I really love your videos as they are really easy to understand. Really grateful for the high quality of the synthesis of key messages on AI/ML/DL. I am a medical doctor and biomedical researcher. I can see the great potential of using the different techniques to further develop a bunch of areas, for example: economic evaluations based on modeling (using a combination of approaches in the sensitivity analysis to find out the internal consistency of the predictions… to gain internal validity as a cornerstone to have external validity). So, looking forward to learning more through your channel.
    Thank you again for sharing good quality knowledge.
    L.

    • @ms.barrio4402
      @ms.barrio4402 1 year ago

      Congratulations to the whole team! I will keep learning more. Thank you all, Leslie.

  • @GregHint
    @GregHint 11 months ago +4

    What a great way to introduce the topic. First 4 seconds made me laugh out loud. Well done (and the rest of the video as well)

  • @amarnamarpan
    @amarnamarpan 10 months ago +27

    Dr. Ashish Vaswani is a pioneer and nobody is talking about him. He is a scientist from Google Brain and the first author of the paper that introduced TRANSFORMERS, which are the backbone of all other recent models.

    • @user-uv2sy5je4z
      @user-uv2sy5je4z 8 months ago

      Agreed

    • @AK-ex5md
      @AK-ex5md 2 months ago

      He should be documenting his work like our guy, and making interesting vids.
      Hope it happens.

  • @hassanjaved906
    @hassanjaved906 1 year ago

    I like to see the energy which you put into it. Thanks for this.

  • @goldencinder7650
    @goldencinder7650 1 year ago +1

    I have been more than blown away by the unfathomable exponential growth from just increasing transformers and a few weights lol

  • @jaimeeduardo159
    @jaimeeduardo159 1 year ago +45

    Banana joke GPT-4:
    Sure, here's a banana joke for you:
    Why did the banana go to the doctor?
    Because it wasn't peeling very well!

    • @evaar440
      @evaar440 1 year ago +2

      Good transformer 🤣

  • @garfocarro
    @garfocarro 1 year ago +67

    Is the fact that he is able to write mirrored text incredible, or is there a simple trick here?

    • @IBMTechnology
      @IBMTechnology  1 year ago +96

      There is a trick. Hint: he's not left handed.

    • @vaibhavthalanki6317
      @vaibhavthalanki6317 1 year ago +27

      It's flipped and rotated, done through editing

    • @leihejun844
      @leihejun844 1 year ago +3

      @@IBMTechnology Yeah, I thought he couldn't be left-handed.

    • @leihejun844
      @leihejun844 1 year ago

      @@vaibhavthalanki6317 it's not a glass, it's a mirror I think.

    • @somehhakarima5408
      @somehhakarima5408 1 year ago

      @@IBMTechnology thought he was left handed

  • @user-kc8qb8qf7r
    @user-kc8qb8qf7r 4 months ago

    Thank you for your video, it is really easy to understand

  • @nikhilranka9660
    @nikhilranka9660 11 months ago +4

    Thanks for this video - a simple and concise introduction to transformers.
    Do large language models really possess reasoning capabilities? Or does the way they operate just make it seem so?

  • @ilhamije
    @ilhamije 1 year ago +1

    Thank you!

  • @noahwilliams8996
    @noahwilliams8996 1 year ago +4

    How does the transformer take something of variable length (like a sentence) and shove it into a neural network (which requires a fixed number of inputs)?

    • @anushka.narsima
      @anushka.narsima 11 months ago +1

      Generic NNs take only fixed-size inputs, but handling variable length is one of the specialities of these types of models! RNNs (the older models used for NLP) were created back in the 80s mainly to address this issue, along with memory being important for sequences. LSTMs and now transformers came along to solve the issues with RNNs.
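      To make that concrete, here is a small self-contained sketch (illustrative only; the attend function and weights are made up): the learned weight matrices act on the embedding dimension, never on the sequence length, so the same attention layer accepts sentences of any length.

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      d_model = 16
      Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

      def attend(X):
          """Self-attention over a (seq_len, d_model) matrix; seq_len can be anything."""
          Q, K, V = X @ Wq, X @ Wk, X @ Wv
          scores = Q @ K.T / np.sqrt(d_model)
          weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
          weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
          return weights @ V

      for seq_len in (3, 7, 50):                           # same weights, different lengths
          out = attend(rng.normal(size=(seq_len, d_model)))
          print(seq_len, out.shape)                        # (3, 16), (7, 16), (50, 16)
      ```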

  • @steriowang
    @steriowang 1 month ago

    Actually, I'm interested in the handwriting presentation style. How is it made?

  • @zackmertz3214
    @zackmertz3214 1 year ago +6

    Great video! I'm stumped on how you made this. Did you really write backwards? Can you reveal your magic trick?

    • @JoshWalshMusic
      @JoshWalshMusic 1 year ago +7

      You write it naturally and then flip the video when editing.

    • @AK-ex5md
      @AK-ex5md 2 months ago

      Exactly what's going on in my mind lmao

  • @yasmincohen-sason3325
    @yasmincohen-sason3325 1 year ago

    This was great!!!

  • @AbdulRahman-tj3wc
    @AbdulRahman-tj3wc 8 months ago

    Are encoders and decoders both RNNs? Please clear my doubt.

  • @albertkwan4261
    @albertkwan4261 11 months ago

    This is the pinnacle performance of training.

  • @didyouknowamazingfacts2790
    @didyouknowamazingfacts2790 1 year ago +38

    The Transformer technology is the reason why you see AI everywhere.

  • @thirtydays1982
    @thirtydays1982 1 year ago +1

    How do I use transformers on a new language pair?

  • @user-il9vr9oe7b
    @user-il9vr9oe7b 16 days ago

    How do you get loads of loss on a neural network in given ways for analytics?

  • @tahmeed702
    @tahmeed702 8 months ago

    Need an explanation for GRU, BERT, LSTM

  • @ibrahemahmed6399
    @ibrahemahmed6399 2 months ago

    I think he writes on the glass normally and the camera captures it backwards, so they flip it in editing so the written words can be shown normally.

  • @udayvadecha2973
    @udayvadecha2973 3 months ago

    You are mirror writing. Great skill 🤩

  • @sabahshams1582
    @sabahshams1582 1 month ago

    Hi, what does an autoregressive language model mean?

  • @EarningsApps
    @EarningsApps 1 year ago +1

    Can we use transformers over spaCy for NER?

  • @markadyash
    @markadyash 2 years ago +2

    How can a text algorithm (the transformer) work in the image domain, like a vision transformer compared to a CNN?

    • @ChocolateMilkCultLeader
      @ChocolateMilkCultLeader 2 years ago +1

      Transformers are being used in many ways. For example, you could take a bunch of vectors (representing image features extracted by convolutions) and feed them into a transformer to decode them as text. This gives you a lot of power, combining the NLP and computer vision domains.
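      A rough PyTorch sketch of that idea (purely illustrative, not a specific published model): the CNN feature vectors act as the "memory" that a transformer decoder cross-attends to while producing caption tokens.

      ```python
      import torch
      import torch.nn as nn

      d_model, vocab_size = 256, 1000
      layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8, batch_first=True)
      decoder = nn.TransformerDecoder(layer, num_layers=2)
      to_vocab = nn.Linear(d_model, vocab_size)

      image_feats = torch.randn(1, 49, d_model)   # e.g. a 7x7 CNN feature map, flattened and projected
      caption_emb = torch.randn(1, 12, d_model)   # embeddings of the caption tokens generated so far
      hidden = decoder(tgt=caption_emb, memory=image_feats)  # decoder cross-attends to image features
      print(to_vocab(hidden).shape)               # torch.Size([1, 12, 1000]) -> next-token scores
      ```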

    • @strongsyedaa7378
      @strongsyedaa7378 1 year ago

      @@ChocolateMilkCultLeader
      Generic features or specific?

    • @ChocolateMilkCultLeader
      @ChocolateMilkCultLeader 1 year ago

      @@strongsyedaa7378 what do you mean?

  • @raghavendrasooda5368
    @raghavendrasooda5368 7 months ago

    Sir, will you give me a research topic on transformers?

  • @daniel_tenner
    @daniel_tenner 1 month ago

    “Before too long, they might even be able to come up with jokes that are actually funny.”
    2 years later, here’s the banana joke ChatGPT 4 (already 1y old) came up with for me.
    > Why did the banana go to the doctor?
    > Because it wasn't peeling well!
    I think we can call that a win.

  • @SciFiFactory
    @SciFiFactory 27 days ago

    So is it like ... a layered, parallelized autoencoder?

  • @hobonickel840
    @hobonickel840 1 year ago +2

    Does this mean they can fix my adhd?
    I don't quite know why but all this transformer tech helps me understand my own glitched mind better

  • @anatolydyatlov963
    @anatolydyatlov963 3 months ago

    How are you able to write a mirror image of the words so effortlessly? :O

  • @1HARVEN1
    @1HARVEN1 1 year ago +2

    Hey, it's the guy from the beer channel...

  • @ramielkady938
    @ramielkady938 1 month ago

    Things are judged by their appearance. And this video looks way way better than it actually is. That explains the views.

  • @sudarshinirasa6913
    @sudarshinirasa6913 2 years ago +2

    Can we use this method to detect outliers in time series data?

    • @TheShawMustGoOn
      @TheShawMustGoOn 2 years ago +1

      While you can use Transformers for time series, I'm not sure why you'd want some network architecture to look for outliers instead of regularizing it and letting the network learn to ignore those during optimization.

    • @coffle1
      @coffle1 1 year ago +2

      Transformers are a bit overkill for anomaly detection. A lot of the time, more traditional methods might perform better and faster (especially if the resources for training the models are constrained, like not having dedicated chips or having an insufficient amount of training data).

  • @robb1324
    @robb1324 1 year ago +71

    Perhaps the AI made the banana joke as a subtle way to tell us humans that we are a cruel species that mash anything we come across. The AI finds it funny because the banana would rather cross the road and take on the high likelihood of being mashed violently by a vehicle to avoid the certain mashing by humans. Perhaps the AI identified with the banana 🤔

    • @st0a
      @st0a 1 year ago +15

      Next level empathy: thinking about a banana's perception of reality 🧠

    • @drewsteinman1898
      @drewsteinman1898 1 year ago

      Q

    • @zainkhalid5393
      @zainkhalid5393 1 year ago +3

      You guys are overthinking it. 😁

    • @gohardorgohome6693
      @gohardorgohome6693 1 year ago

      that's how I interpreted it too - like yeah, the AI knows the banana doesn't want to be mashed by a car, neither do I

    • @l4l01234
      @l4l01234 1 year ago +1

      No, you’re definitely overthinking it. The AI doesn’t think anything because it is incapable of such context like “we are a cruel species that mash anything we come across”. Unless you specifically input that in the prompt, it has no mechanism to even conceive of the phrase.

  • @jonasgk86
    @jonasgk86 1 year ago +4

    Lol, I find the banana joke funny :)

  • @punk3900
    @punk3900 1 month ago

    This was prophetic. I wonder whether at that time you realized that Transformer would revolutionize the world.

  • @Damodharanjay
    @Damodharanjay 11 months ago +1

    Aged like a fine wine!

  • @BigAsciiHappyStar
    @BigAsciiHappyStar 3 months ago

    Why did the attention mechanism NOT cross the road? Because it was paralyzed!😜😁
    BTW did I hear that part correctly near the end of the video?

  • @zzador
    @zzador 1 year ago +1

    Transformers: More than meets the eye...

  • @ZelForShort
    @ZelForShort 1 year ago +4

    In reference to the article summary example, how does that work? How does the program know to summarize the article and not continue it?
    Also, how do you go from language processing to playing chess or other games or functions?

    • @damianliew5243
      @damianliew5243 1 year ago +3

      I'm not a machine learning expert so I can't verify the validity of this answer, but from my POV these questions about "how does the program do X instead of Y" generally depend on
      1. The actual architecture of the model (in this case, a transformer)
      2. The input data it's based upon (text vs maybe piece type and board position labels for a chessboard)
      3. The output data it's trying to predict (predict a summary text vs predict the next words in an article)
      Because such supervised/semi-supervised learning models learn off labelled data, (to a certain extent, for semi-supervised learning), all the model is really doing is mapping an input to an output. Think of it like a maths graph (which is actually exactly what it is); given a dataset with many points, you'd want to find a "best fit" line that models the rough trend accurately without over or underfitting. Machine models do this but on many axes (due to the use of vectors, some with just an insane number of dimensions).
      Of course there are many other things like hyperparameters, activation functions, loss functions, and nuanced variables to each model architectures, but hopefully this gives you a good understanding of ML in general.
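      A toy example of that "best fit" idea (purely illustrative): fitting a line to noisy points is the one-dimensional version of what these models do across many dimensions.

      ```python
      import numpy as np

      rng = np.random.default_rng(2)
      x = np.linspace(0, 10, 50)
      y = 3.0 * x + 1.0 + rng.normal(scale=2.0, size=x.shape)  # noisy points around y = 3x + 1

      slope, intercept = np.polyfit(x, y, deg=1)   # least-squares "best fit" line
      print(round(slope, 2), round(intercept, 2))  # roughly 3 and 1
      ```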

    • @xerxel69
      @xerxel69 1 year ago +2

      A summary is a continuation of the text in that case. Consider a webpage on the internet which has an article and then at the bottom of the page it says, "here is a summary of the key points we learned above" and it goes on to summarise. This is an example of the kind of content the AI is trained on. So as long as you do some Prompt Engineering then you can ask your question in such a way that the answer comes from completing the text! It's like magic! 🙂

    • @andrewnorris5415
      @andrewnorris5415 1 year ago

      @@xerxel69 Yeah, articles do often contain a summary section at the end. Or parts of an essay say, "To summarise so far". Not sure if it can learn this totally unsupervised. My guess is summaries are a popular feature - so they will train it specifically to look for them and learn from them in a focused way. Not sure though.

  • @norbertfeurle7905
    @norbertfeurle7905 1 year ago

    Do I get this right, that a transformer is a special case of a state machine, which is designed to learn on, or update, its weights on demand, and is still general enough to cover most data? Wouldn't an FPGA be optimal to implement such a state machine in flip-flops, so that you can generate at 100 MHz?

    • @nestorlopez7071
      @nestorlopez7071 1 year ago +1

      It really all boils down to performing matrix multiplications. GPUs are best at that. An FPGA can be a GPU if it wants to (:

  • @tuapuikia
    @tuapuikia 1 year ago

    Where can I summon autobot?

  • @rongarza9488
    @rongarza9488 5 months ago

    Correct me if I'm wrong but it seems that translating a document would require a human doing Quality Control right before publishing. Transformers are impressive in how close they come to mimicking humans but they seem to be The Great Pretenders. Now, how does that QC step get implemented in real time?

  • @tartariazo5237
    @tartariazo5237 11 months ago

    IBM: Next-Level Tech explained.
    Chat: How does he write backwards on that invisible board?

  • @Optimus_Prime_The_Legend_alive
    @Optimus_Prime_The_Legend_alive 2 months ago

    I just have to say it:
    TRANSFORMERS, MORE THAN MEETS THE EYE!

  • @zvxcxczv
    @zvxcxczv 1 year ago

    This dude can write in reverse. So awesome

    • @andrewnorris5415
      @andrewnorris5415 1 year ago

      ha. it looks the right way around to him. The final image is inverted in the video we see. Fun trick.

  • @festusbojangles7027
    @festusbojangles7027 1 year ago +5

    the joke was just too deep for your puny mind to get

  • @Bond-zj2ku
    @Bond-zj2ku 2 months ago

    I searched for "transformer" in machine learning, and in my mind I pictured those same Transformers, and the video starts with the same.

  • @emirsahin7167
    @emirsahin7167 3 months ago

    Is he writing in reverse so we can see it correctly?

  • @saatvikmangal7994
    @saatvikmangal7994 4 months ago

    Latest update on the banana humor of AI:
    Why did the banana go to the doctor?
    Because it wasn't peeling well! - GPT 3.5 11th January 2024, 23:06 IST

  • @sang-suangam9772
    @sang-suangam9772 2 years ago +3

    the banana … skidded …

  • @michaelcharlesthearchangel
    @michaelcharlesthearchangel 1 year ago

    I don't like people ripping Me off, whether IBM or Google.

  • @ChocolateMilkCultLeader
    @ChocolateMilkCultLeader 2 years ago +1

    Are you guys open to guest speakers?

  • @MrofficialC
    @MrofficialC 6 months ago

    You do realize the joke about the chicken crossing the road is a suicide joke right? He wanted to get to the other side?

  • @sohambhattacharjee951
    @sohambhattacharjee951 9 months ago +1

    Now it can indeed write funny banana jokes!!

  • @samahirrao
    @samahirrao 2 months ago

    Indian SMEs might be able to create this and become a unicorn. Easily.

  • @animalfrendo
    @animalfrendo 1 year ago

    But how does the human write backwards?

  • @dagreatcow
    @dagreatcow 1 year ago +3

    Optimus Prime

  • @MikeHowles
    @MikeHowles 1 year ago

    I came here to understand how on earth he writes backwards or what camera trickery I am obviously missing, LOL.

    • @IBMTechnology
      @IBMTechnology  11 months ago +1

      See ibm.biz/write-backwards

    • @MikeHowles
      @MikeHowles 11 months ago

      @@IBMTechnology LOL thanks!!! I suppose it shouldn't surprise me there is a video about that. Very cool and elegant technique.

  • @exploradorexplorador7404
    @exploradorexplorador7404 1 year ago

    The banana joke is an instance of an “anti-joke”… just like the chicken joke.

  • @watherby29
    @watherby29 11 months ago

    And with this simple idea, civilization ends. No, kidding, the AI will be so smart it will leave us alone, as we will be like bugs to it.

  • @danhetherington1335
    @danhetherington1335 2 months ago

    I don't think the joke was that bad. Picture Meatwad from Aqua Teen Hunger Force, but very pale beige.

  • @sohailpatel7549
    @sohailpatel7549 8 months ago

    Instead of the content, I started thinking about how this guy is writing in the opposite direction 😭😂😂 Is this some AI trick or fr?!

  • @calvink.4511
    @calvink.4511 10 months ago +1

    They've got better jokes now. 😂

  • @randomcheese1719
    @randomcheese1719 1 month ago

    It doesn't "come up" with anything, it regurgitates what it's learned. It's nothing but a copy machine and is being made out to be much more than it really is by all the AI hype machine artists.

  • @amudhanbakthavathsalu5308
    @amudhanbakthavathsalu5308 3 months ago

    Not very descriptive... it is for those who are already studying sequencing, encoder-decoders, etc. in depth.

  • @user-jl5gj4mv1z
    @user-jl5gj4mv1z 18 days ago

    I didn't get it

  • @talhaeneskoksal4893
    @talhaeneskoksal4893 1 year ago +1

    Why do they always translate an English sentence into French in every video that explains transformers? :D

  • @vincent_hall
    @vincent_hall 1 year ago +1

    Well, jokes are hard.
    Kids take several years to learn how to be funny.

  • @valentingorrin4541
    @valentingorrin4541 5 months ago

    I can't concentrate; I can't understand how he manages to write backwards

  • @curtisnewton895
    @curtisnewton895 1 year ago +2

    OK, but how about a more detailed explanation?

  • @roodrigato
    @roodrigato 7 months ago

    wait, does this guy write backwards?

  • @jayseph9121
    @jayseph9121 7 months ago

    are you writing backwards in real time? because if so..... 🤯

    • @IBMTechnology
      @IBMTechnology  7 months ago

      See ibm.biz/write-backwards

    • @jayseph9121
      @jayseph9121 7 months ago

      @@IBMTechnology one of the few times in my life I wish to be lied to 😂

  • @davejones542
    @davejones542 4 months ago

    Ask it why the potato crossed the road

  • @robertweekes5783
    @robertweekes5783 1 year ago

    The joke would’ve worked if it was a potato. Pretty close though.

  • @dabrowsa
    @dabrowsa 3 months ago

    Did I miss something? This didn't seem to give any clue as to how transformers actually work.

  • @quantarank
    @quantarank 10 months ago

    Your skills in writing backwards were really distracting.

    • @IBMTechnology
      @IBMTechnology  10 months ago

      See ibm.biz/write-backwards for how it's done

  • @carlowood9834
    @carlowood9834 11 months ago

    You didn't really explain anything.

  • @blkscreen15
    @blkscreen15 4 months ago

    Didn't find it helpful for conceptually understanding transformers

  • @zbeast
    @zbeast 2 months ago

    To reach the other bunch. - ChatGPT 3.5