Transformer Encoder vs LSTM Comparison for Simple Sequence (Protein) Classification Problem

  • Published 30 Sep 2024
  • The purpose of this video is to highlight results comparing a single Transformer Encoder layer to a single LSTM layer on a very simple problem. Several texts on Natural Language Processing describe the power of the LSTM as well as the advanced sequence-processing capabilities of Self-Attention and the Transformer; this video offers very simple results in support of those notions. A minimal sketch of the two model setups follows the description below.
    Previous Video:
    • A Very Simple Transfor...
    Code:
    github.com/Bra...
    Interesting Post:
    ai.stackexchan...
    Music Credits:
    Breakfast in Paris by Alex-Productions | onsound.eu/
    Music promoted by www.free-stock...
    Creative Commons / Attribution 3.0 Unported License (CC BY 3.0)
    creativecommon...
    Small Town Girl by | e s c p | www.escp.space
    escp-music.ban...
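
    The video's code is linked above; as a rough illustration of the kind of comparison described, here is a minimal PyTorch sketch of a single-LSTM-layer classifier next to a single-Transformer-encoder-layer classifier for token sequences. The vocabulary size, embedding size, pooling choice, and classification head are assumptions for illustration, not the video's exact configuration.

    ```python
    # Minimal sketch (assumed setup, not the video's exact code): one LSTM layer
    # vs. one Transformer encoder layer on a toy protein-sequence classification task.
    import torch
    import torch.nn as nn

    VOCAB_SIZE = 25   # ~20 amino acids plus padding/special tokens (assumed)
    EMBED_DIM = 64    # illustrative embedding size
    NUM_CLASSES = 2   # a simple binary classification target

    class LSTMClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
            self.lstm = nn.LSTM(EMBED_DIM, EMBED_DIM, batch_first=True)  # single LSTM layer
            self.head = nn.Linear(EMBED_DIM, NUM_CLASSES)

        def forward(self, tokens):                # tokens: (batch, seq_len)
            x = self.embed(tokens)
            _, (h_n, _) = self.lstm(x)            # h_n: (1, batch, EMBED_DIM)
            return self.head(h_n[-1])             # classify from the final hidden state

    class TransformerClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
            # Single encoder layer; positional encoding omitted for brevity.
            self.encoder = nn.TransformerEncoderLayer(
                d_model=EMBED_DIM, nhead=4, batch_first=True)
            self.head = nn.Linear(EMBED_DIM, NUM_CLASSES)

        def forward(self, tokens):
            x = self.encoder(self.embed(tokens))  # (batch, seq_len, EMBED_DIM)
            return self.head(x.mean(dim=1))       # mean-pool over positions, then classify

    # Quick smoke test on random token IDs
    batch = torch.randint(0, VOCAB_SIZE, (8, 100))  # 8 sequences of length 100
    print(LSTMClassifier()(batch).shape)            # torch.Size([8, 2])
    print(TransformerClassifier()(batch).shape)     # torch.Size([8, 2])
    ```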

COMMENTS • 3

  • @LeoDaLionEdits
    @LeoDaLionEdits 3 months ago

    I never knew that transformers were that much more time efficient at large embedding sizes

    • @lets_learn_transformers
      @lets_learn_transformers  3 months ago +1

      Hey @LeoDaLionEdits - I'm very interested in ideas like these. I unfortunately lost my link to the paper - but there was an interesting arXiv article on why XGBoost still dominates Kaggle competitions in comparison to Deep Neural Networks. Based on the problem, I think RNN / LSTM may often be more competitive in the same way: the simpler, tried-and-true model winning out. From a performance perspective, this book notes the advantage in parallel processing of transformers in sections 10.1 (intro) and 10.1.4 (parallelizing self-attention): web.stanford.edu/~jurafsky/slp3/ed3book.pdf
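
      A rough way to see the timing effect discussed in this thread is to compare a forward pass of a single-layer LSTM against a single Transformer encoder layer as the embedding size grows. This is only a hedged sketch with assumed sequence length, batch size, and dimensions (not the video's benchmark), and the absolute numbers will depend on hardware; the LSTM steps through the sequence serially, while self-attention processes all positions in parallel.

      ```python
      # Rough timing sketch (assumed setup): single-layer LSTM vs. a single
      # Transformer encoder layer at increasing embedding sizes.
      import time
      import torch
      import torch.nn as nn

      SEQ_LEN, BATCH = 256, 32

      for dim in (64, 256, 1024):
          x = torch.randn(BATCH, SEQ_LEN, dim)
          lstm = nn.LSTM(dim, dim, batch_first=True)
          enc = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)

          with torch.no_grad():
              t0 = time.perf_counter(); lstm(x); t_lstm = time.perf_counter() - t0
              t0 = time.perf_counter(); enc(x);  t_enc  = time.perf_counter() - t0

          print(f"dim={dim:5d}  LSTM {t_lstm*1e3:7.1f} ms  encoder {t_enc*1e3:7.1f} ms")
      ```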

  • @Pancake-lj6wm
    @Pancake-lj6wm 3 months ago

    Zamm!