Mixture-of-Depths: Dynamically allocating compute in transformer-based language models

  • Published 14 Nov 2024

COMMENTS • 20

  • @raraismature · 7 months ago +2

    Awesome content, seriously

  • @toatoa10 · 7 months ago +1

    Great video! This is much easier to understand than just reading the paper. What app are you using to annotate the paper and make notes?

    • @gabrielmongaras · 6 months ago +1

      Thanks! Glad you found my video helpful! I'm using the default Samsung Notes app to make all the annotations and notes.

  • @gauravsrivastava9428 · 7 months ago +1

    Thanks for the video tutorial, it is really helpful! At 24:08, when you mention the softmax, do you mean a softmax is done to compute the routing scalars? If so, as per my understanding they don't compute the routing scalars using a softmax. The scalars are computed just by taking an inner product of the token with the routing weight vector.

    • @gabrielmongaras · 7 months ago

      Oh yes, I see what you're talking about. On page 6, right above equation (1), they mention the r-th weight is computed as the inner product between the routing weight vector and the token, which is different from normal MoE. I suppose this fixes the gradient problem I was talking about. Thanks for the clarification!
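
To make that routing step concrete, here is a minimal sketch of one reading of equation (1): the router weight is a plain inner product (no softmax), and the block output is scaled by it so gradients reach the router. All names and shapes are illustrative, not from the paper's code, and the sketch runs the block on every token and masks afterwards, whereas the paper only processes the selected tokens.

```python
import torch

def mod_block(x, w_router, block, capacity):
    # x: (batch, seq, dim); w_router: (dim,) -- hypothetical names.
    r = x @ w_router                                   # (batch, seq) router weights
    topk = torch.topk(r, k=capacity, dim=-1).indices   # non-causal top-k selection
    mask = torch.zeros_like(r, dtype=torch.bool).scatter_(-1, topk, True)
    # Selected tokens get x_i + r_i * block(x_i); the rest pass through unchanged.
    return x + mask.unsqueeze(-1) * (r.unsqueeze(-1) * block(x))
```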

  • @ml-ok3xq · 7 months ago

    I thought people theorise that transformers still use the 'slack' tokens for other purposes, so the compute is not wasted. I guess this shows that maybe those theories needed to be rigorously tested. Although, since they only sandwich the layers, maybe the compute is fully used; this method effectively gives some tokens up to double the mixing time.

  • @ckpioo · 7 months ago

    Awesome, btw maybe try using excalibur

  • @DiogoNeves · 7 months ago

    I'm not sure I understand: even though the sigmoids are independent, why would this allow for causal sampling if it was trained to mimic a distribution that isn't causal? It carries information from the future, albeit indirectly, no?
    For example, if we were training on a distribution of a biased lottery, we would still be predicting the future from just some of the tokens?

    • @DiogoNeves · 7 months ago

      Ah, I think you mention exactly that afterwards 😅 thanks
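
For reference, the mechanism being discussed is the paper's small auxiliary predictor: it is trained behind a stop-gradient to imitate the non-causal top-k decisions, so that at sampling time each routing decision depends only on tokens seen so far. A rough sketch, with hypothetical names:

```python
import torch.nn.functional as F

def aux_router_loss(x, topk_mask, predictor):
    # x: (batch, seq, dim); topk_mask: (batch, seq) bool from the
    # training-time (non-causal) top-k selection.
    # detach() stops gradients so this loss cannot alter the main model.
    logits = predictor(x.detach()).squeeze(-1)   # (batch, seq)
    return F.binary_cross_entropy_with_logits(logits, topk_mask.float())
```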

    • @DiogoNeves · 7 months ago

      One more question: can these be added to existing models and trained separately? From the description it sounds like it's possible.

    • @gabrielmongaras · 7 months ago +1

      I don't think they talked about doing that in the paper. My intuition says it may be hard and probably wouldn't work as well as we might hope. The activations going into attention are whatever the model needs for the attention mechanism; in this paper, however, those same activations are also used for ranking. My first thought is that these two activation distributions are quite different, making the model start from a poor state. I wonder if Google tried something like this, found it didn't work that well, and decided not to add it to the paper? Would be totally worth trying if you have the compute though! Maybe you could start off by routing all tokens and slowly decrease this during fine-tuning, as in the sketch below.
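
That last idea is pure speculation, not something from the paper, but it could look like a simple capacity schedule during fine-tuning:

```python
def capacity_schedule(step, total_steps, seq_len, final_capacity):
    # Speculative: start by routing every token through the block,
    # then linearly anneal down to the target capacity.
    frac = min(step / total_steps, 1.0)
    return round(seq_len - frac * (seq_len - final_capacity))
```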

  • @Stan-san · 7 months ago +2

    Why use lot words when few words do trick?

    • @gabrielmongaras · 7 months ago

      Yeah, definitely a problem I have 😅
      Been trying to get better at it, and realized after uploading that I could've explained the extra loss part in much fewer words. In general, it's sometimes hard to know whether an explanation is satisfying when trying to balance conciseness and length.

    • @rykim4626 · 3 months ago

      @gabrielmongaras They might be referring to Mixture-of-Depths using only a few of the words. Personally, I thought your explanations were great.

  • @theatheistpaladin · 7 months ago

    What field of math do you need to understand this?

    • @gabrielmongaras · 7 months ago

      Just an understanding of machine learning models at a high level and how transformers work. The experts themselves are just a linear or feed-forward layer in MoE, and the single expert in this paper is a transformer layer.
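
In other words, the routed unit in MoD is an ordinary transformer layer, something like this standard pre-norm block (illustrative, not the paper's code). In the routing sketch earlier in the thread, `block` would be an instance of this:

```python
import torch.nn as nn

class TransformerLayer(nn.Module):
    # In MoE the routed units are several FFN experts; in MoD this whole
    # layer is the single "expert" that tokens either enter or skip.
    def __init__(self, dim, n_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))
```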

    • @MrNathanShow · 7 months ago +2

      I'd add that a basic understanding of statistics can help, along with some introductory calculus. But for the most part there is more trial and error behind these discoveries than you might believe. The understanding sometimes comes after ;)

    • @jaredtweed7826 · 7 months ago +1

      @gabrielmongaras What do you think helped you best understand neural networks? I have a shallow understanding of how transformers work: I know how the encoder works, but I don't really understand the decoder fully. I also know PyTorch only well enough to build simple convolutional neural networks, and I have a really strong understanding of calculus and linear algebra.

    • @tgugdevil · 7 months ago +1

      Calculus and Linear Algebra.

    • @jaredtweed7826 · 7 months ago

      @tgugdevil Thank you; sorry, I forgot to mention that I already have a strong understanding of those concepts.