Stanford CS25: V4 I Overview of Transformers

  • Published April 22, 2024
  • April 4, 2024
    Steven Feng, Stanford University [styfeng.github.io/]
    Div Garg, Stanford University [divyanshgarg.com/]
    Emily Bunnapradist, Stanford University [ / ebunnapradist ]
    Seonghee Lee, Stanford University [shljessie.github.io/]
    Brief intro and overview of the history of NLP, Transformers and how they work, and their impact. Discussion about recent trends, breakthroughs, applications, and remaining challenges/weaknesses. Also discussion about AI agents. Slides here: docs.google.com/presentation/...
    More about the course can be found here: web.stanford.edu/class/cs25/
    View the entire CS25 Transformers United playlist: • Stanford CS25 - Transf...

COMMENTS • 37

  • @fatemehmousavi402 1 month ago +7

    Awesome, thank you Stanford Online for sharing this amazing video series

  • @Drazcmd 1 month ago +5

    Very cool! Thanks for posting this publicly, it's really awesome to be able to audit the course :)

  • @3ilm_yanfa3 1 month ago +11

    Can't believe it... Just today we started the part about LSTMs and transformers in my ML course, and here it comes.
    Thank you guys!

  • @benjaminy. 1 month ago +2

    Hello Everyone! Thank you very much for uploading these materials. Cheers

  • @mjavadrajabi7401 1 month ago +5

    Great!! Finally it's time for CS25 V4 🔥

  • @marcinkrupinski 1 month ago +3

    Amazing stuff! Thank you for publishing this valuable material!

  • @lebesguegilmar1 1 month ago +2

    Thanks for sharing this course and the lectures, Stanford. Congratulations. Greetings from Brazil!

  • @JJGhostHunters 1 month ago +1

    I recently started to explore using transformers for timeseries classification as opposed to NLP. Very excited about this content!

  • @styfeng 1 month ago +17

    it's finally released! hope y'all enjoy(ed) the lecture 😁

    • @laalbujhakkar 1 month ago

      Don't hold the mic so close bro. The lecture was really good though :)

    • @gemini22581 1 month ago

      What is a good course to learn NLP?

    • @siiilversurfffeeer 1 month ago

      hi feng! will there be more cs25 v4 lectures uploaded to this channel?

    • @styfeng 1 month ago +1

      @siiilversurfffeeer yes! should be a new video out every week, approx. 2-3 weeks after each lecture :)

  • @liangqunlu1553 1 month ago +2

    Very interesting summarization

  • @RishiKaura 1 month ago +1

    Sincere and smart students

  • @GeorgeMonsour 1 month ago

    I want to know more about 'filters.' Are they human or computer processes, or mathematical models? The filters are a reflection I'd like to understand more about. I hope they are not an inflection; that would be an unconscious pathway.
    This is a really sweet dip into the currency of knowledge, and these students are to be commended. However, in the common world there is a tendency developing towards a 'tower of Babel'.
    Greed may have an influence that we must be wary of. I heard some warnings in the presentation that consider this tendency.
    I'm impressed by these students. I hope they aren't influenced by the silo system of capitalism and that they remain at the front of the generalization and commonality needed to keep bad actors off the playing field.

  • @IamPotato_007 1 month ago

    Where are the professors?

  • @Anbu_Sampath 1 month ago

    It would be great if CS25 V4 got its own playlist on YouTube.

  • @GerardSans 1 month ago +27

    Be careful using anthropomorphic language when talking about LLMs. Eg: thoughts, ideas, reasoning. Transformers don’t “reason” or have “thoughts” or even “knowledge”. They extract existing patterns in the training data and use stochastic distributions to generate outputs.

    • @ehza 1 month ago +2

      That's a pretty important observation imo

    • @junyuzheng5282 1 month ago +3

      Then what is “reason” “thoughts” “knowledge”?

    • @DrakenRS78 1 month ago +1

      Do individual neurons have thoughts, reason, or knowledge, or is it once again the collective which we should be assessing?

    • @TheNewton 29 days ago

      This mis-anthropomorphism problem will only grow because each end of the field/industry is being sloppy with it, so calls for sanity will just get derided as time goes on.
      On the starting side we have academics title-baiting, as they did with "attention", so papers get attention, instead of just coining a new word or phrase like 'correlation network', 'word window', 'hyper hyper-networks', etc., or they overload existing terms like 'backtracking' and 'backpropagation'.
      And on the other end of the collective full-court press, corporations keep passing assistants (tools) off as human-like with names such as Cortana and Siri for the sake of branding and marketing.

    • @TheNewton 29 days ago

      @junyuzheng5282 `Then what is “reason” “thoughts” “knowledge”?`
      Reason, thoughts, knowledge, etc. are more than what gets hallucinated by your linear algebra formulas.
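
A side note on the thread above: the "stochastic distributions" being described are, concretely, softmax distributions over a model's output logits, from which the next token is sampled. Below is a minimal sketch in Python (numpy only); the toy vocabulary and logit values are invented for illustration and do not come from any real model.

```python
# Toy example: turn a vector of logits into a probability distribution
# and sample the next token from it. The logits here are made-up numbers.
import numpy as np

rng = np.random.default_rng(42)
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 1.2, 0.3, -0.5, 0.8])  # one score per vocabulary entry

def sample_next(logits, temperature=1.0):
    # Softmax with a temperature knob: low T is nearly greedy, high T is more random.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return vocab[rng.choice(len(vocab), p=probs)]

print([sample_next(logits, temperature=0.7) for _ in range(5)])
```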

  • @TV19933 1 month ago

    future artificial intelligence
    i was into talk this
    probability challenge
    Gemini ai talking ability rapid talk i suppose so
    it's splendid

  • @hussienalsafi1149 1 month ago +1

    ☺️☺️☺️🥰🥰🥰

  • @riju1956 1 month ago +6

    so they stand for 1 hour

    • @rockokechukwu3343 1 month ago

      Is it okay to cheat in an exam if you have the opportunity to do so?

  • @ramsever5087 1 month ago

    What is said at 13:47 is incorrect.
    Large language models like ChatGPT or other state-of-the-art language models do not only have a decoder in their architecture. They employ the standard transformer encoder-decoder architecture. The transformer architecture used in these large language models consists of two main components:
    The Encoder:
    This encodes the input sequence (prompt, instructions, etc.) into vector representations.
    It uses self-attention mechanisms to capture contextual information within the input sequence.
    The Decoder:
    This takes in the encoded representations from the encoder.
    It generates the output sequence (text) in an autoregressive manner, one token at a time.
    It uses self-attention over the already generated output, as well as cross-attention over the encoder's output, to predict the next token.
    So both the encoder and decoder are critical components. The encoder allows understanding and representing the input, while the decoder enables powerful sequence generation capabilities by predictively modeling one token at a time while attending to the encoder representations and past output.
    Having only a decoder without an encoder would mean the model can generate text but not condition on or understand any input instructions/prompts. This would severely limit its capabilities.
    The transformer's encoder-decoder design, with each component's self-attention and cross-attention, is what allows large language models to understand inputs flexibly and then generate relevant, coherent, and contextual outputs. Both components are indispensable for their impressive language abilities.

    • @gleelantern 10 days ago

      ChatGPT, Gemini, etc. are decoder-only models. Read their tech reports.
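
To make the decoder-only point in this thread concrete: in a GPT-style decoder-only model there is no separate encoder and no cross-attention; the prompt is simply the prefix of one token sequence processed by causally masked self-attention, and the model keeps appending tokens to it. An encoder-decoder Transformer (the original architecture, or T5) would add a second, unmasked stack for the input plus cross-attention from the decoder to it. The sketch below is a toy single-layer version with random weights (Python/numpy); names like `causal_logits` and `generate` are invented for illustration and are not any real model's API.

```python
# Minimal decoder-only sketch: one causally masked self-attention layer with
# random weights, generating tokens greedily. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, D = 50, 16

embed = rng.normal(size=(VOCAB, D))                 # toy "learned" parameters
W_q, W_k, W_v = (rng.normal(size=(D, D)) for _ in range(3))
W_out = rng.normal(size=(D, VOCAB))

def causal_logits(token_ids):
    """Logits at every position; each position attends only to itself and the past."""
    x = embed[token_ids]                            # (T, D) token embeddings
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = (q @ k.T) / np.sqrt(D)                 # (T, T) attention scores
    T = len(token_ids)
    scores[np.triu(np.ones((T, T), dtype=bool), 1)] = -np.inf  # causal mask: no peeking ahead
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return (weights @ v) @ W_out                    # (T, VOCAB) next-token logits

def generate(prompt_ids, n_new=5):
    ids = list(prompt_ids)   # the prompt is just a prefix; no encoder, no cross-attention
    for _ in range(n_new):
        next_id = int(np.argmax(causal_logits(np.array(ids))[-1]))  # greedy next token
        ids.append(next_id)
    return ids

print(generate([3, 14, 15]))
```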

  • @laalbujhakkar 1 month ago +2

    Stanford's struggles with microphones continue.

    • @jeesantony5308 1 month ago +1

      it is cool to see some negative comments in between lots of pos... ✌🏼✌🏼

    • @laalbujhakkar 1 month ago

      @jeesantony5308 I love the content, which makes me h8 the lack of thought and preparation that went into the delivery of all that knowledge even more. Just trying to reduce the loss, as it were.