Why Recurrent Neural Networks are cursed | LM2

  • Published 21 May 2024
  • Neural language models, and an explanation of recurrent neural networks
    Support me on Patreon! / vcubingx
    Language Modeling Playlist: • Language Modeling
    3blue1brown series on Transformers: • But what is a GPT? Vi...
    Training Neural Networks: distill.pub/2020/grand-tour/
    Chris Olah's LSTM Blog: colah.github.io/posts/2015-08...
    The source code for the animations can be found here:
    github.com/vivek3141/dl-visua...
    The animations in this video were made using 3blue1brown's library, manim:
    github.com/3b1b/manim
    Sources (includes the entire series): docs.google.com/document/d/1e...
    Chapters
    0:00 Introduction
    1:54 Neural N-Gram Models
    6:03 Recurrent Neural Networks
    11:47 LSTM Cells
    12:22 Outro
    Music (In Order):
    Philanthrope, mommy - embrace chll.to/7e941f72
    Helynt - Hearthome City
    Helynt - Route 10
    GameChops - National Park
    Helynt - Bo-Omb Battlefield
    Helynt - Verdanturf Town
    Follow me!
    Website: vcubingx.com
    Twitter: / vcubingx
    Github: github.com/vivek3141
    Instagram: / vcubingx
    Patreon: / vcubingx

COMMENTS • 22

  • @vcubingx
    @vcubingx  1 month ago +10

    If you enjoyed the video, please consider subscribing!
    Part 3! ua-cam.com/video/lOrTlKrdmkQ/v-deo.html
    A small mistake I _just_ realized is that I say trigram/3-gram for the neural language model when I have 3 words as input, but it's a 4-gram model, not a 3-gram, since I'm considering 4 words at a time (including the output word). Hopefully that didn't confuse anyone!
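
    Spelled out: an n-gram model considers n words at a time, the n-1 context words plus the word being predicted, so three input words make it a 4-gram model. A toy illustration (the example words are arbitrary):

    ```python
    context = ["the", "cat", "sat"]   # 3 words fed into the model
    prediction = "down"               # the word the model predicts
    n = len(context) + 1              # 4 words considered at a time, hence a 4-gram model
    ```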

  • @l16h7code
    @l16h7code 1 month ago +9

    Please keep making these machine learning videos. Animations are all we need. They make the concepts 10x easier for me to understand.

    • @vcubingx
      @vcubingx  1 month ago +3

      Thanks! I’ll try my best to :)

    • @ShashankBhatta
      @ShashankBhatta 22 days ago

      Isn't"attention all we need"

    • @aero-mk9ld
      @aero-mk9ld 12 days ago

      @@ShashankBhatta

  • @ZalexMusic
    @ZalexMusic 1 month ago +2

    Outstanding work, this series is required LM viewing now, like 3b1b. Also, are you from Singapore? That's the only way I can reconcile good weather meaning high temperature and high humidity 😂

  • @drdca8263
    @drdca8263 1 month ago +2

    I sometimes wonder how well it would work to take something that was mostly an n-gram model, but which added something that was meant to be like, a poor man’s approximation of the copying heads that have been found in transformers.
    So, like, in addition to looking at “when the previous (n-1) tokens were like this, how often was each possible word the next token?” as in an n-gram model, it would also look at “previously in this document, did the previous token appear, and if so, what followed it?”, and “in the training data set, for the previous few tokens, how often did this kind of copying strategy do well, and how often did the plain n-gram strategy do well?”, to weight between those.
    (Oh, and also maybe throw in some “what tokens are correlated just considering being in the same document” to the mix.)
    I imagine that this still wouldn’t even come *close* to GPT2 , but I do wonder how much better it could be than plain n-grams.
    I’m pretty sure it would be *very* fast at inference time, and “training” it would consist of just doing a bunch of counting, which would be highly parallelizable (or possibly counting and then taking a low-rank decomposition of a matrix, for the “correlations between what tokens appear in the same document” part)

    • @vcubingx
      @vcubingx  1 month ago +1

      I think you've gained a key insight, that the approximation does indeed work. I mean heck, if I was only generating two words, a bigram model would be pretty good too.
      I remember seeing a paper that shows that GPT-2 itself has learnt a bi-gram model inside itself. Given this, it might be fair to say that what you're describing could potentially even be what the LLMs today learn under the hood. I think your description is great though, as it's an interpretable way to see how models make predictions. Maybe a future line of research!
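
    A minimal sketch of the mixture @drdca8263 describes above, assuming a word-level token list and a fixed mixing weight (the comment proposes estimating that weight from counts on the training set; the function and variable names here are only illustrative):

    ```python
    import collections

    def train_ngram_counts(corpus_tokens, n=4):
        """Count how often each token follows each (n-1)-token context."""
        counts = collections.defaultdict(collections.Counter)
        for i in range(len(corpus_tokens) - n + 1):
            context = tuple(corpus_tokens[i:i + n - 1])
            counts[context][corpus_tokens[i + n - 1]] += 1
        return counts

    def next_token_distribution(counts, document, n=4, copy_weight=0.3):
        """Interpolate the n-gram prediction with a crude copy strategy:
        'what followed the previous token earlier in this same document?'"""
        ngram_counts = counts.get(tuple(document[-(n - 1):]), collections.Counter())

        # Copy strategy: find earlier occurrences of the previous token in
        # this document and count what followed them.
        prev = document[-1]
        copy_counts = collections.Counter(
            document[i + 1]
            for i in range(len(document) - 1)
            if document[i] == prev
        )

        def normalize(counter):
            total = sum(counter.values())
            return {tok: c / total for tok, c in counter.items()} if total else {}

        ngram_p, copy_p = normalize(ngram_counts), normalize(copy_counts)
        return {tok: (1 - copy_weight) * ngram_p.get(tok, 0.0)
                     + copy_weight * copy_p.get(tok, 0.0)
                for tok in set(ngram_p) | set(copy_p)}
    ```

    Training is just counting, so it parallelizes easily, and inference is a couple of dictionary lookups, which matches the comment's expectation that it would be very fast at inference time.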

  • @varunmohanraj5031
    @varunmohanraj5031 1 month ago

    So insightful ‼️

  • @calix-tang
    @calix-tang 1 month ago +4

    Incredible job mfv I look forward to seeing more videos

  • @1XxDoubleshotxX1
    @1XxDoubleshotxX1 1 month ago +1

    Oh yes Vivek Vivek omg yes

  • @usama57926
    @usama57926 1 month ago

    Nice video

  • @adithyashanker2852
    @adithyashanker2852 1 month ago +4

    Music is fire

  • @ml-ok3xq
    @ml-ok3xq 1 month ago

    Maybe you can loop around to Mamba and explain why it's popular again, and what has changed to uncurse the model.

    • @vcubingx
      @vcubingx  1 month ago +1

      Sure! I wanted to make two follow-ups: transformers beyond language, and language beyond transformers. In the second one I’d talk about Mamba and the future of language modeling.

  • @VisibilityO2
    @VisibilityO2 1 month ago +5

    I'm not criticizing all your hard work, but at some points it gets muddled, like at 7:54, where you compute `Ht` as a sum of weights without explaining them, and you could have mentioned `Backpropagation Through Time` in the video.
    Also, you could introduce "gated cells" in LSTMs: Long Short-Term Memory networks most often rely on a gated cell to track information across many time steps.
    And an activation function like 'sigmoid' could be replaced by 'ReLU'; packages like TensorFlow also prefer it in their documentation.
    But honestly, you've created a good intermediate class for learning recurrence.

    • @vcubingx
      @vcubingx  1 month ago +2

      Hey, thanks for the feedback.
      I personally found little value in mentioning BPTT, as I felt like it would confuse the viewer more in case they weren't familiar with backpropagation. The algorithm itself is pretty straightforward, and I personally felt like it didn't need an entire section explaining it.
      Regarding LSTMs, the video wasn't meant to cover them at all; I added that section at the last minute for curious viewers. I appreciate you bringing them up though! I plan on making a short 5-7 minute video on them in the future.
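
    For curious readers, here is a minimal sketch of the gated cell the comment above refers to: one step of a standard LSTM written in plain NumPy (the parameter stacking and the dimensions are illustrative, not taken from the video):

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_cell(x_t, h_prev, c_prev, W, U, b):
        """One LSTM time step. W, U, b stack the parameters of the four
        parts: forget gate, input gate, output gate, candidate update."""
        h = h_prev.shape[0]
        z = W @ x_t + U @ h_prev + b
        f = sigmoid(z[0:h])          # forget gate: what to erase from the cell state
        i = sigmoid(z[h:2 * h])      # input gate: what to write
        o = sigmoid(z[2 * h:3 * h])  # output gate: what to expose
        g = np.tanh(z[3 * h:4 * h])  # candidate values to write
        c_t = f * c_prev + i * g     # cell state carries information across time steps
        h_t = o * np.tanh(c_t)       # hidden state passed to the next step
        return h_t, c_t

    # Toy usage with random parameters (sizes are arbitrary)
    d_in, d_hidden = 8, 16
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4 * d_hidden, d_in))
    U = rng.normal(size=(4 * d_hidden, d_hidden))
    b = np.zeros(4 * d_hidden)
    h, c = np.zeros(d_hidden), np.zeros(d_hidden)
    for x_t in rng.normal(size=(5, d_in)):   # a short sequence of 5 input vectors
        h, c = lstm_cell(x_t, h, c, W, U, b)
    ```

    One note on the ReLU suggestion: the gates are sigmoids by design, since they act as soft 0-to-1 switches over the cell state; ReLU substitutions usually target other activations in a network rather than the gates themselves.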

  • @BooleanDisorder
    @BooleanDisorder 1 month ago +1

    RNN = Remember Nothing Now

    • @vcubingx
      @vcubingx  1 month ago +2

      Hahaha, RNNs did indeed have "memory loss" issues :)