Love the videos and your presentation style!
Thank you!
Hi. Thanks for the videos, they are very helpful. Why don't we use dynamic tokenization instead of a fixed one? Here you have chosen tokens that are always 200-datapoint vectors, but in real-world data, for example in spectroscopy analysis, the signal consists of peaks in different spectral ranges. I think that if we came up with a dynamic tokenization like the one used in text analysis, the results would be more interesting.
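For illustration, here is a rough sketch of the kind of thing I mean - segmenting the signal into variable-length tokens around detected peaks (the use of scipy.signal.find_peaks and the window sizes are just my own assumptions, not anything from the video):

```python
import numpy as np
from scipy.signal import find_peaks

def tokenize_by_peaks(signal, min_width=50, max_width=400):
    """Split a 1-D signal into variable-length tokens centred on detected peaks."""
    peaks, _ = find_peaks(signal, prominence=np.std(signal))
    tokens, prev_end = [], 0
    for p in peaks:
        start = max(prev_end, p - max_width // 2)
        end = min(len(signal), p + max_width // 2)
        if end - start >= min_width:
            tokens.append(signal[start:end])
            prev_end = end
    return tokens  # list of arrays with varying lengths

# Toy spectrum: three Gaussian peaks in different spectral ranges
x = np.linspace(0, 1, 2000)
spectrum = sum(np.exp(-((x - c) ** 2) / 2e-4) for c in (0.05, 0.5, 0.56))
print([len(t) for t in tokenize_by_peaks(spectrum)])  # e.g. [300, 400, 120]
```

The variable-length tokens would of course then need padding or a length embedding before going into the transformer.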
Thank you for sharing your knowledge. Can the code run in a Jupyter notebook as well?
Thanks @elmojuanara5628! The code should run just fine in a notebook - some additional work may be required depending on the notebook's GPU availability, but I believe services such as Colab handle CUDA very well.
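For example, a device check like this at the top of the notebook is usually all that's needed:

```python
import torch

# Use the GPU when the notebook runtime provides one, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

# Then move the model once, and each batch inside the training loop:
# model = model.to(device)
# batch = batch.to(device)
```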
Do you have a solution for a long seq-to-seq problem? I need to predict a sequence of vectors (each of size 15k) for, let's say, 100 steps ahead. nn.Linear always gives me a memory error.
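One workaround I've been considering (the sizes below are made up for illustration, not from my actual setup) is replacing the single huge output layer with a low-rank head, and predicting one step at a time instead of all 100 at once:

```python
import torch
import torch.nn as nn

hidden, out_dim, rank = 512, 15_000, 64  # assumed sizes for illustration

# Low-rank head: hidden -> rank -> out_dim instead of one hidden -> out_dim layer.
# Parameter count drops from hidden*out_dim (~7.7M) to hidden*rank + rank*out_dim (~1.0M).
head = nn.Sequential(
    nn.Linear(hidden, rank),
    nn.Linear(rank, out_dim),
)

x = torch.randn(8, hidden)  # a batch of 8 decoder hidden states
print(head(x).shape)        # torch.Size([8, 15000])
```

Is something like this reasonable, or is there a better way?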
Why didn’t you use positional encoding?
Hi @AtousaKalantari-y4w, in later videos I began using positional encoding as implemented in PyTorch. However, in this video and the one prior, I used only vanilla attention. Positional encoding is a general improvement and I believe that it should be used in almost all cases!
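For reference, here is a minimal sketch of the standard sinusoidal positional encoding, in the style of the PyTorch transformer tutorial (details may differ from what I actually use in the later videos):

```python
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    def __init__(self, d_model: int, max_len: int = 5000):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)  # even dims: sine
        pe[:, 1::2] = torch.cos(position * div_term)  # odd dims: cosine
        self.register_buffer("pe", pe)  # fixed, not a learnable parameter

    def forward(self, x):
        # x: (batch, seq_len, d_model); add the encoding for each position
        return x + self.pe[: x.size(1)]

x = torch.zeros(2, 10, 16)
print(PositionalEncoding(16)(x).shape)  # torch.Size([2, 10, 16])
```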