Here is how Transformers ended the tradition of Inductive Bias in Neural Nets

COMMENTS • 16

  • @TP-ct7qm
    @TP-ct7qm 8 months ago +4

    Awesome video! This (together with the last two videos) is one of the best explanations of Transformers I've seen. Thanks and keep it up!

  • @hieunguyentranchi947
    @hieunguyentranchi947 7 months ago +3

    This is gold I hope it gets the ATTENTION it deserves

    • @avb_fj
      @avb_fj  7 months ago +1

      Thanks!! More attention will surely TRANSFORM this channel! 😂

  • @AI_ML_DL_LLM
    @AI_ML_DL_LLM 8 months ago +2

    The gist of this video is at 4:29. Great job, thanks!

  • @JasFox420
    @JasFox420 8 months ago +3

    Dude, you are a treasure, keep it up!

  • @IdPreferNot1
    @IdPreferNot1 3 months ago +1

    Came here after 3Blue1Brown sparked my interest. It's clear you've got a great explanation style... plus you were earlier ;). Hope your channel's following grows to match your outstanding quality.

    • @avb_fj
      @avb_fj  3 months ago

      Welcome! Thanks a lot for the shoutout!

  • @amoghjain
    @amoghjain 8 months ago

    Wowww! What a great explanation! It helps knit so many individual concepts together into one cohesive knowledge base!! Thanks a lot for making this video and all the animations!

    • @avb_fj
      @avb_fj  8 months ago

      Thanks!!

  • @matiasalonso6430
    @matiasalonso6430 8 months ago +1

    Congrats !! Awesome channel !

    • @avb_fj
      @avb_fj  8 months ago

      Thanks!

  • @GabrielAnguitaVeas
    @GabrielAnguitaVeas 4 months ago +1

    Thank you!

  • @user-wm8xr4bz3b
    @user-wm8xr4bz3b 2 months ago

    At 2:33, you mentioned that self-attention is more biased, but at 2:54 you also mentioned that self-attention reduces inductive bias?? Sorry, but I'm a bit confused.

    • @avb_fj
      @avb_fj  2 months ago

      Self-Attention indeed reduces inductive bias and adopts a more general learning framework. At 2:33, I am asking a question: "IS Self-Attention more general or more biased?" And then I continue with "I'll argue that Self-Attention is not only more general than CNNs and RNNs but even more general than MLP layers".
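
To make the distinction in this thread concrete, here is a minimal, illustrative sketch of single-head scaled dot-product self-attention in NumPy (the standard formulation from "Attention Is All You Need"; the function and variable names here are my own, not taken from the video). The key point is that the token-mixing weights are computed from the input itself at run time, with no locality (CNN) or ordering (RNN) assumption baked in.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the chosen axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (illustrative).

    X:          (seq_len, d_model) input token embeddings
    Wq, Wk, Wv: (d_model, d_head) learned projection matrices
    """
    Q = X @ Wq   # queries
    K = X @ Wk   # keys
    V = X @ Wv   # values
    d_head = Q.shape[-1]
    # The (seq_len x seq_len) mixing matrix A is a function of the
    # input itself: every token can attend to every other token.
    A = softmax(Q @ K.T / np.sqrt(d_head))
    return A @ V  # (seq_len, d_head)

# Toy usage with made-up sizes: 5 tokens, 8-dim embeddings, 4-dim head.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 4)
```

By contrast, an MLP applied across positions would mix tokens with one fixed learned matrix regardless of the input, and a CNN would restrict mixing to a local window; that fixed structure is the extra inductive bias the video argues self-attention drops.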

  • @sahhaf1234
    @sahhaf1234 7 months ago +2

    I really don't want to leave you just a like. Instead, I want to leave you one hundred likes... Unfortunately Google limits me to one...

    • @avb_fj
      @avb_fj  7 months ago

      Thanks!! Super appreciated!