PyTorch Time Sequence Prediction With LSTM - Forecasting Tutorial

  • Published 3 Aug 2024
  • In this Python tutorial we do time sequence prediction in PyTorch using LSTMCells.
    ⭐ Check out Tabnine, the FREE AI-powered code completion tool I used in this Tutorial: www.tabnine.com/?... *
    ✅ Write cleaner code with Sourcery, instant refactoring suggestions in VS Code & PyCharm: sourcery.ai/?... *
    The neural network learns sine wave signals and tries to predict the signal values in the future.
    Get my Free NumPy Handbook:
    www.python-engineer.com/numpy...
    If you enjoyed this video, please subscribe to the channel: / @patloeber
    Code was taken and adapted from the official examples repo:
    github.com/pytorch/examples
    Timeline:
    00:00 - Intro
    01:30 - Sine Wave Creation
    06:30 - LSTM Model
    15:50 - Training Loop
    27:55 - Final Testing
    ~~~~~~~~~~~~~~~ CONNECT ~~~~~~~~~~~~~~~
    🖥️ Website: www.python-engineer.com
    🐦 Twitter - / patloeber
    📸 Instagram - / patloeber
    🦾 Discord: / discord
    💻 GitHub: github.com/patrickloeber
    ~~~~~~~~~~~~~~ SUPPORT ME ~~~~~~~~~~~~~~
    🅿 Patreon - / patrickloeber
    Music: www.bensound.com/
    #Python #PyTorch
    ----------------------------------------------------------------------------------------------------------
    * This is a sponsored link or an affiliate link. By clicking on it I may receive a commission (depending on the link). You will not have any additional costs; instead you will support me and my project. Thank you so much for the support! 🙏
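
    The code shown in the video is adapted from the pytorch/examples repo linked above; for orientation, the sine-wave training data there is generated roughly like this (a sketch only; the exact constants used in the video may differ):

        import numpy as np
        import torch

        N = 100   # number of sample waves
        L = 1000  # length of each wave
        T = 20    # period scaling factor

        # each row is the same sine wave with a random phase shift
        x = np.empty((N, L), dtype=np.float32)
        x[:] = np.arange(L) + np.random.randint(-4 * T, 4 * T, (N, 1))
        data = torch.from_numpy(np.sin(x / T))  # shape (N, L)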

COMMENTS • 59

  • @patloeber
    @patloeber  3 years ago +6

    Finally a new PyTorch tutorial. I hope you enjoy it :)
    Also, check out Tabnine, the FREE AI-powered code completion tool I used in this video: www.tabnine.com/?.com&PythonEngineer *
    ----------------------------------------------------------------------------------------------------------
    * This is a sponsored link. You will not have any additional costs; instead you will support me and my project. Thank you so much for the support! 🙏

  • @iEdp526_01
    @iEdp526_01 2 years ago +5

    Thank you for making this, I've been struggling with this stuff on and off for months. These videos on PyTorch made things click, I really appreciate you taking the time to make them. They've helped me immensely.

  • @yunlongsong7618
    @yunlongsong7618 2 years ago +1

    This is an amazing tutorial. Thanks a lot for putting in the effort. Great job.

  • @LanTranLe-sk9cn
    @LanTranLe-sk9cn 2 years ago

    Thank you so much. I found it very helpful!

  • @anaximeno
    @anaximeno 2 years ago +1

    Thank you, this video helped me to understand how to use an LSTM in PyTorch.

  • @CodeWithTomi
    @CodeWithTomi 3 years ago +1

    Great!... Another PyTorch tutorial.

  • @saurrav3801
    @saurrav3801 3 years ago

    Good to see you again bro 🥺🔥

  • @patrickningi4259
    @patrickningi4259 3 years ago

    great content as always

  • @amiralioghli8622
    @amiralioghli8622 8 months ago

    Hi, thank you for sharing your valuable information through this channel. I am new to time series. If possible, could you create a series on how to implement Transformers on time series data, covering both univariate and multivariate approaches? Focusing on tasks like forecasting, classification, or anomaly detection, just one of these would be greatly appreciated. There are no videos available on YouTube that have implemented this before. It would be extremely helpful for students and new researchers in the field of time series.

  • @sciencei7saan459
    @sciencei7saan459 1 year ago

    Thanks... great job.

  • @fadoobaba
    @fadoobaba 3 months ago

    many thanks!

  • @cwumin2105
    @cwumin2105 2 years ago +2

    Hi Python Engineer, may I know what to do if we want to predict multiple steps ahead instead of one step? Hopefully you can show an example. Thanks.
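
    On the multi-step question above: the model in the video (following pytorch/examples) handles this with a `future` argument, feeding each prediction back in as the next input. A minimal sketch of that idea, assuming the two-LSTMCell structure from the video:

        # inside forward(), after the real input sequence has been consumed:
        for _ in range(future):
            h_t, c_t = self.lstm1(output, (h_t, c_t))   # previous prediction as input
            h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))
            output = self.linear(h_t2)
            outputs.append(output)                      # one extra future step per iteration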

  • @saadouch
    @saadouch 2 years ago

    thanks boss!

  • @scottk5083
    @scottk5083 3 years ago +2

    Amazing content! Quick question, though: I noticed you called 'self.hidden' at 29:48, but I didn't see a corresponding parameter for self.hidden, i.e. self.n_hidden has the n_hidden parameter, while I can't see the number of parameters for self.hidden.

  • @ansumandas5749
    @ansumandas5749 3 years ago +2

    Please make a video on the batch size, sequence length and input size and how they are actually fed to the model (see the shape sketch below).

    • @patloeber
      @patloeber  3 years ago +1

      thanks for the idea!
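
    On the shape question above, a quick sketch using nn.LSTM with batch_first=True (an illustration only; the video itself uses nn.LSTMCell, which consumes one time step at a time):

        import torch
        import torch.nn as nn

        batch_size, seq_len, input_size, hidden_size = 8, 50, 1, 51

        lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        x = torch.randn(batch_size, seq_len, input_size)  # (batch, seq, feature)
        out, (h_n, c_n) = lstm(x)
        print(out.shape)  # torch.Size([8, 50, 51]) - hidden state at every step
        print(h_n.shape)  # torch.Size([1, 8, 51])  - final hidden state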

  • @maxmohamed9878
    @maxmohamed9878 3 years ago

    Well explained. Please keep doing the great work that you are doing

  • @PWK95
    @PWK95 3 years ago +2

    This is amazing! How do you always know exactly what I need and make a tutorial about it?
    Any chance you could make a tutorial about how to make an estimator that can give out the width of the given sine function and the x-shift of the 3 sine functions relative to each other? That would quite literally save my life. I know it should be possible with a method similar to the one employed in the video, but I just can't do it...

  • @grdev3066
    @grdev3066 3 years ago +2

    Hi, great video! Just want to ask: why do we have 2 LSTM cells and not a single one? And I'm not sure if I get it... in the forward() function we split samples by dim=1 to feed a sequence of elements, right? So if target_inputs has, say, 1000 elements (columns in this case), does it mean that our LSTM knows what happened 1000 points back and "uses" all of them to make the very next prediction? Thanks!

    • @rpcruz
      @rpcruz 1 year ago +1

      It could be a single LSTM cell. He just wanted to make the network deeper.
      He splits the tensor along the sequence axis to process one time step per loop iteration (see the sketch below).
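
    Concretely, a small sketch of that split:

        import torch

        x = torch.randn(3, 10)             # batch of 3 sequences, 10 steps each
        steps = x.split(1, dim=1)          # tuple of 10 tensors, each of shape (3, 1)
        print(len(steps), steps[0].shape)  # 10 torch.Size([3, 1])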

  • @pietheijn-vo1gt
    @pietheijn-vo1gt 2 years ago

    Hi. What method are you using to predict future samples? As I understand it, there are multiple methods.

  • @JonasBalandraux
    @JonasBalandraux 1 month ago

    Is it better for prediction performance to pass the output of one LSTM to the next or to pass the previous hidden state (as done in the video)? I've seen both methods used and don't know which is better. Do you have any advice on when to use each approach?
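
    Worth noting for the question above: with nn.LSTMCell the two options coincide, because the cell's output is its hidden state. A sketch:

        h_t, c_t = self.lstm1(input_t, (h_t, c_t))    # h_t is also lstm1's output
        h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))    # so passing h_t passes the output

    The distinction only matters with the full nn.LSTM module, where the per-step outputs and the final (h_n, c_n) state are separate return values.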

  • @greatsaid5271
    @greatsaid5271 3 years ago

    your videos are amazing, thanks a lot 🙌

  • @SP-db6sh
    @SP-db6sh 3 years ago

    Please post a quick-start guide video on torchmeta!

  • @regularviewer1682
    @regularviewer1682 1 year ago

    Hey! I was wondering why there are multiple colors at the end when at the start there was only 1 sine wave? I'm confused about where all the additional red and green lines came from.
    Thanks :)

  • @anilkumar-yd1rd
    @anilkumar-yd1rd 3 years ago

    Can you please guide me at a high level on how to implement the same thing for an MLP using PyTorch Lightning?

  • @user-wg8rh7oh4b
    @user-wg8rh7oh4b 3 months ago

    Would be great if you could just import torch first 👀

  • @aimenmalik8929
    @aimenmalik8929 2 years ago

    Hey!! May I know why we give x as input (x.split), and not y.split? Because our sine wave is basically in the variable y.

  • @DanielWeikert
    @DanielWeikert 2 years ago

    Can you do a video using Transformers for time series? I have not found anything useful on YouTube so far. Thanks and best regards.

  • @teetanrobotics5363
    @teetanrobotics5363 3 years ago

    Please put this in a playlist.

  • @wollmonsterchen
    @wollmonsterchen 1 year ago

    Thanks for the helpful video. Is the code on GitHub? I didn't find it, and it would be very helpful to play around with it a little bit.

  • @DanielWeikert
    @DanielWeikert 3 years ago

    Can you dive deeper into the various PyTorch package functions in a future video?
    E.g. detach vs. item, .Tensor vs. .tensor, when to use the LongTensor datatype, ...?
    Thanks and best

    • @patloeber
      @patloeber  3 years ago

      Thanks for the suggestions! Will think about this :)

  • @messedmushroom
    @messedmushroom 1 year ago

    Would we not want to initialize the hidden state and cell state outside of the forward, so they capture long-term features? Since they are in forward, aren't we removing all notion of long-term connectivity, as they get cleared on every forward call?

    • @rpcruz
      @rpcruz 1 year ago

      Usually, you only want the LSTM to keep the memory during the sequence. For example, if I have an LSTM that recognizes activity in videos, then I want it to keep the memory while processing the frames in one video, but then I want it to forget it for the next video.
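
    In code, that means fresh zero states at the top of every forward() call, so memory spans exactly one sequence. A sketch of the pattern (names follow the video's style; this is not the exact code):

        # inside forward(self, x):
        h_t = torch.zeros(x.size(0), self.n_hidden, dtype=x.dtype, device=x.device)
        c_t = torch.zeros_like(h_t)
        # to carry memory across calls instead, you would keep the states on self
        # and detach() them between calls - a different design choice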

  • @rohanmenon9160
    @rohanmenon9160 1 year ago

    I don't know, I am a beginner and I use Jupyter notebooks. I copied the code exactly (it runs with no errors), but I did not get any predictions or loss output. Any idea what the cause might be?

  • @hannahw115
    @hannahw115 2 years ago

    Don't you "destroy" some of the knowledge learned during training by initialising the hidden and cell state as zeros in each forward pass? Or is this a better approach than initialising the states once in the beginning? Maybe you could elaborate on that? :)

  • @MrEmbrance
    @MrEmbrance 3 years ago +13

    you explained nothing

  • @doctormaddix2143
    @doctormaddix2143 1 year ago

    I am trying to run this code on my GPU. It should work, but it doesn't. device = torch.device("cuda" if torch.cuda.is_available() else "cpu") returns 'cuda', so my GPU is being detected. I also copied the training and test inputs and targets to the GPU with .to(device), as well as the model (model = LSTMPredictor(hiddenstates).to(device)). But I still get the error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat2 in method wrapper_CUDA_mm). It occurs in the optimizer step (optimizer.step(closure)). What do you think?
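
    A likely cause (an assumption, since the full code isn't shown here): the hidden and cell states are created inside forward() with torch.zeros(...) and no device argument, so they stay on the CPU even though the model and inputs are on the GPU. Creating them on the input's device should fix it; a sketch:

        # inside forward(self, x):
        h_t = torch.zeros(x.size(0), self.n_hidden,
                          dtype=x.dtype, device=x.device)  # follow the input's device
        c_t = torch.zeros_like(h_t)
        # likewise for h_t2 / c_t2 of the second LSTM cell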

  • @oscar_lares
    @oscar_lares 2 years ago

    Thanks for this video. Such a great help, and it cleared up some confusion. One question I had: for the training, why are you only using the values from y and not the x?

  • @yabindong1754
    @yabindong1754 3 years ago

    Why predict the sequence one by one? Can you treat each sequence as a feature and predict them at the same time?

    • @Johncowk
      @Johncowk 3 years ago +2

      LSTMs are recurrent networks: you need the result of the previous iteration to get the next.
      That's the way they work, and also one of their main weaknesses.

  • @amirsoltanpoor7421
    @amirsoltanpoor7421 2 years ago

    When I run this I get this error:
    'Tensor' object has no attribute 'append'
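
    That error usually means outputs was created as a tensor rather than a Python list (an assumption, since the commenter's code isn't shown). Tensors have no append(); the pattern is to collect per-step outputs in a list and concatenate at the end. A sketch:

        import torch

        x = torch.randn(3, 10)
        outputs = []                         # a list - torch.tensor([]) has no .append()
        for input_t in x.split(1, dim=1):
            output = input_t * 2.0           # stand-in for the real LSTM step
            outputs.append(output)
        outputs = torch.cat(outputs, dim=1)  # back to shape (3, 10)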

  • @ahmadalghooneh2105
    @ahmadalghooneh2105 2 years ago

    Shouldn't it be h_t2 and c_t2 for self.lstm2?

  • @Omkar-ey3ls
    @Omkar-ey3ls 3 years ago

    Why did we call super() inside the LSTMPredictor class? Is there any reason for this?

    • @patloeber
      @patloeber  3 years ago +1

      Yes, we have to do this to initialize the superclass correctly (this is a basic thing to do in object-oriented programming in Python)
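
    A minimal illustration of why (a generic sketch, not the exact class from the video): nn.Module.__init__ sets up the machinery that registers parameters and submodules, so it must run before any layers are assigned:

        import torch.nn as nn

        class LSTMPredictor(nn.Module):
            def __init__(self, n_hidden=51):  # 51 follows the pytorch/examples repo
                super().__init__()   # without this, assigning self.lstm1 below raises an error
                self.n_hidden = n_hidden
                self.lstm1 = nn.LSTMCell(1, n_hidden)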

  • @ludmiladeute6353
    @ludmiladeute6353 3 years ago

    I tried to run this code and it's not working. Where can I find the file?

    • @patloeber
      @patloeber  3 years ago

      You need one of the latest PyTorch versions for this. The link to the repo is in the video description.

  • @gopikrishnan5206
    @gopikrishnan5206 3 years ago

    Can you do a tutorial on Python data analysis and visualization covering the NumPy, pandas and Matplotlib libraries?

  • @GoForwardPs34
    @GoForwardPs34 1 year ago

    Where is the code?

  • @conscofd3534
    @conscofd3534 2 years ago +1

    I understand that your videos are "code along" style ones, BUT for the implementation there is too much of the HOW and, sadly, nothing of the WHY.

  • @sreesankar07
    @sreesankar07 3 years ago

    Hello, I am a beginner Python programmer. Can you please make a video on DSA?