Transformer Attention for Time Series - Follow-Up with Real World Data

  • Published 22 Oct 2024

COMMENTS •

  • @Stacker22
    @Stacker22 4 months ago +1

    Love the videos and your presentation style!

  • @alisaghi051
    @alisaghi051 1 month ago

    Hi. Thanks for the videos; they are very helpful. Why don't we use dynamic tokenization instead of a fixed one? I think here you have chosen that your tokens are always 200-datapoint vectors. But in real-world data, for example in spectroscopy analysis, the signal consists of certain peaks in different spectral ranges. I think if we used dynamic tokenization like the one they use in text analysis, the results would be more interesting.
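    (Illustrative sketch: one way such peak-based dynamic tokenization could look in Python. The fixed 200-sample window and the SciPy-based peak segmentation are assumptions for illustration, not anything shown in the video.)

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def fixed_tokens(signal: np.ndarray, token_len: int = 200) -> list[np.ndarray]:
        """Fixed-length tokenization: chop the signal into equal windows."""
        n = len(signal) // token_len
        return [signal[i * token_len:(i + 1) * token_len] for i in range(n)]

    def dynamic_tokens(signal: np.ndarray, prominence: float = 0.5) -> list[np.ndarray]:
        """Peak-based tokenization: split the signal at midpoints between detected
        peaks, so each token covers one spectral feature regardless of its width."""
        peaks, _ = find_peaks(signal, prominence=prominence)
        if len(peaks) == 0:
            return [signal]  # no peaks found; fall back to a single token
        # Token boundaries halfway between consecutive peaks.
        bounds = [0] + [(a + b) // 2 for a, b in zip(peaks[:-1], peaks[1:])] + [len(signal)]
        return [signal[s:e] for s, e in zip(bounds[:-1], bounds[1:])]

    # Example: a synthetic spectrum with three peaks of different widths.
    x = np.linspace(0, 10, 2000)
    spectrum = (np.exp(-((x - 2) ** 2) / 0.01)
                + np.exp(-((x - 5) ** 2) / 0.2)
                + np.exp(-((x - 8) ** 2) / 0.05))
    print([len(t) for t in fixed_tokens(spectrum)])    # equal-length tokens
    print([len(t) for t in dynamic_tokens(spectrum)])  # variable-length, one per peak
    ```

    The variable-length tokens would still need padding or a learned projection to a common embedding size before they can be fed to the attention layer.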

  • @elmo.juanara
    @elmo.juanara 4 months ago +1

    Thank you for your knowledge sharing. Can the code run in a Jupyter notebook as well?

    • @lets_learn_transformers
      @lets_learn_transformers  4 months ago +1

      Thanks @elmojuanara5628! The code should run just fine in a notebook - some additional work may be required depending on the notebook's GPU availability, but I believe services such as Colab handle CUDA very well.
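      (A minimal sketch of the device handling mentioned above, assuming a PyTorch setup; the model and tensor shapes are placeholders, not the actual code from the video.)

      ```python
      import torch
      import torch.nn as nn

      # Pick the GPU if the notebook runtime (e.g. Colab) exposes one, else fall back to CPU.
      device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
      print(f"Running on: {device}")

      # Hypothetical stand-in for the attention model used in the video.
      model = nn.MultiheadAttention(embed_dim=200, num_heads=4, batch_first=True).to(device)

      # Dummy batch: 8 sequences of 50 tokens, each token a 200-dim vector.
      tokens = torch.randn(8, 50, 200, device=device)
      out, attn_weights = model(tokens, tokens, tokens)
      print(out.shape)  # torch.Size([8, 50, 200])
      ```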

  • @rariwa
    @rariwa 3 months ago

    Do you have a solution for a long seq-to-seq problem? I need to predict a sequence of vectors (15k size) for, let's say, 100 steps ahead. nn.Linear always gives me a memory error.

  • @AtousaKalantari-y4w
    @AtousaKalantari-y4w 3 months ago +1

    Why didn’t you use positional encoding?

    • @lets_learn_transformers
      @lets_learn_transformers  3 months ago +1

      Hi @AtousaKalantari-y4w, in later videos I began using positional encoding as implemented in PyTorch. However, in this video and the one prior, I used only vanilla attention. Positional encoding is a general improvement, and I believe it should be used in almost all cases!
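
      (For reference, a minimal sketch of standard sinusoidal positional encoding in PyTorch, following the "Attention Is All You Need" formulation; this is an illustrative sketch, not necessarily the exact module used in the later videos.)

      ```python
      import math
      import torch
      import torch.nn as nn

      class PositionalEncoding(nn.Module):
          """Adds fixed sinusoidal position information to token embeddings."""

          def __init__(self, d_model: int, max_len: int = 5000):
              super().__init__()
              position = torch.arange(max_len).unsqueeze(1)  # (max_len, 1)
              div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
              pe = torch.zeros(max_len, d_model)
              pe[:, 0::2] = torch.sin(position * div_term)   # even dimensions
              pe[:, 1::2] = torch.cos(position * div_term)   # odd dimensions
              self.register_buffer("pe", pe)                 # fixed, not a trainable parameter

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              # x: (batch, seq_len, d_model) -> add the encoding for the first seq_len positions
              return x + self.pe[: x.size(1)]

      # Usage: 200-dim tokens, 50 time steps, batch of 8.
      pos_enc = PositionalEncoding(d_model=200)
      tokens = torch.randn(8, 50, 200)
      tokens_with_pos = pos_enc(tokens)
      print(tokens_with_pos.shape)  # torch.Size([8, 50, 200])
      ```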