PyTorch Time Sequence Prediction With LSTM - Forecasting Tutorial
- Published Aug 3, 2024
- In this Python Tutorial we do time sequence prediction in PyTorch using LSTMCells.
⭐ Check out Tabnine, the FREE AI-powered code completion tool I used in this Tutorial: www.tabnine.com/?... *
✅ Write cleaner code with Sourcery, instant refactoring suggestions in VS Code & PyCharm: sourcery.ai/?... *
The neural network learns sine wave signals and tries to predict the signal values in the future.
Get my Free NumPy Handbook:
www.python-engineer.com/numpy...
If you enjoyed this video, please subscribe to the channel: / @patloeber
Code was taken and adapted from official examples repo:
github.com/pytorch/examples
Timeline:
00:00 - Intro
01:30 - Sine Wave Creation
06:30 - LSTM Model
15:50 - Training Loop
27:55 - Final Testing
~~~~~~~~~~~~~~~ CONNECT ~~~~~~~~~~~~~~~
🖥️ Website: www.python-engineer.com
🐦 Twitter - / patloeber
📸 Instagram - / patloeber
🦾 Discord: / discord
💻 GitHub: github.com/patrickloeber
~~~~~~~~~~~~~~ SUPPORT ME ~~~~~~~~~~~~~~
🅿 Patreon - / patrickloeber
Music: www.bensound.com/
#Python #PyTorch
----------------------------------------------------------------------------------------------------------
* This is a sponsored or affiliate link. If you click it, I may receive a commission (depending on the link). It costs you nothing extra, and it supports me and my project. Thank you so much for the support! 🙏
Finally a new PyTorch tutorial. I hope you enjoy it :)
Also, check out Tabnine, the FREE AI-powered code completion tool I used in this video: www.tabnine.com/?.com&PythonEngineer *
----------------------------------------------------------------------------------------------------------
* This is a sponsored link. It costs you nothing extra, and it supports me and my project. Thank you so much for the support! 🙏
Thank you for making this, I've been struggling with this stuff on and off for months. These videos on PyTorch made things click, I really appreciate you taking the time to make them. They've helped me immensely.
This is an amazing tutorial. Thanks a lot for putting in the effort. Great job.
Thank you so much. I found it very helpful!
Thank you, this video helped me understand how to use an LSTM in PyTorch.
glad it was helpful!
Great!... Another PyTorch tutorial.
Hope you like it!
Good to see you again bro 🥺🔥
Yeah :)
great content as always
Glad you enjoyed it
Hi, thank you for sharing your valuable information through this channel. I am one of your new followers interested in time series. If possible, could you create a series on how to implement Transformers on time series data, covering both univariate and multivariate approaches? Focusing on operations like forecasting, classification, or anomaly detection (just one of these would be greatly appreciated). There are no videos available on YouTube that have implemented this before. It would be extremely helpful for students and new researchers in the field of time series.
Thanks... great job.
many thanks!
Hi Python Engineer, may I know how to predict multiple steps ahead instead of just one step? Hopefully you can show an example. Thanks
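One common answer, sketched here with assumed names (this mirrors the closed-loop "future" idea in the official PyTorch time-sequence example, not the video's exact code): after consuming the known history, keep feeding each prediction back in as the next input for as many extra steps as you want.

```python
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    """Minimal closed-loop multi-step forecaster (a sketch)."""
    def __init__(self, n_hidden=51):
        super().__init__()
        self.n_hidden = n_hidden
        self.cell = nn.LSTMCell(1, n_hidden)
        self.linear = nn.Linear(n_hidden, 1)

    def forward(self, x, future=0):
        h = torch.zeros(x.size(0), self.n_hidden)
        c = torch.zeros(x.size(0), self.n_hidden)
        outputs = []
        for input_t in x.split(1, dim=1):   # consume the known history
            h, c = self.cell(input_t, (h, c))
            out = self.linear(h)
            outputs.append(out)
        for _ in range(future):             # closed-loop extrapolation:
            h, c = self.cell(out, (h, c))   # previous prediction is the next input
            out = self.linear(h)
            outputs.append(out)
        return torch.cat(outputs, dim=1)
```

Calling `model(x, future=100)` then returns the fitted history plus 100 extrapolated steps; the trade-off is that prediction errors compound as they are fed back in.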
thanks boss!
Amazing content! Although, quick question: I noticed you called 'self.hidden' at 29:48. However, I didn't see a corresponding parameter for self.hidden, i.e. self.n_hidden has an n_hidden parameter, while I can't see the number of parameters for self.hidden.
Please make a video on batch size, sequence length, and input size, and how they are actually fed to the machine.
thanks for the idea!
Well explained. Please keep doing the great work that you are doing
Thanks a lot!
This is amazing! How do you always know exactly what I need and make a tutorial about it?
Any chance you could make a tutorial about how to make an estimator that can give out the width of the given sine function and the x-shift of the 3 sine functions relative to each other? That would quite literally save my life. I know it should be possible to do with a similar method employed in the video, but I just can't do it...
Hi, great video! Just want to ask: why do we have 2 LSTM cells, and not a single one? And not sure if I get it... in the forward() func we split the samples along dim=1 to feed a sequence of elements, right? So if target_inputs has, say, 1000 elements (columns in this case), it means our LSTM knows what happened 1000 points back and "uses" all of them to make the very next prediction? Thanks!
It could be a single LSTM cell. He just wanted to make it deeper.
He split the tensor along the sequence axis to process one time step per loop iteration.
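To make both points concrete, here is a minimal sketch of a two-LSTMCell model (names like LSTMPredictor and n_hidden are assumed, adapted from the pattern in the official PyTorch examples repo): the second cell just stacks on top of the first for depth, and x.split(1, dim=1) walks the sequence one time step at a time, so the states carry everything seen so far into each prediction.

```python
import torch
import torch.nn as nn

class LSTMPredictor(nn.Module):
    def __init__(self, n_hidden=51):
        super().__init__()
        self.n_hidden = n_hidden
        self.lstm1 = nn.LSTMCell(1, n_hidden)          # lower cell
        self.lstm2 = nn.LSTMCell(n_hidden, n_hidden)   # stacked for depth only
        self.linear = nn.Linear(n_hidden, 1)

    def forward(self, x):
        n_samples = x.size(0)
        # fresh states per sequence; each cell keeps its own (h, c)
        h_t = torch.zeros(n_samples, self.n_hidden)
        c_t = torch.zeros(n_samples, self.n_hidden)
        h_t2 = torch.zeros(n_samples, self.n_hidden)
        c_t2 = torch.zeros(n_samples, self.n_hidden)
        outputs = []
        # split(1, dim=1) yields one (n_samples, 1) column per time step
        for input_t in x.split(1, dim=1):
            h_t, c_t = self.lstm1(input_t, (h_t, c_t))
            h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))
            outputs.append(self.linear(h_t2))
        return torch.cat(outputs, dim=1)
```

So yes: a single cell would work too, and the state tensors (not an explicit window) are what let step 1000 depend on everything before it.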
Hi. What method are you using to predict future samples? As I understand there are multiple methods
Is it better for prediction performance to pass the output of one LSTM to the next or to pass the previous hidden state (as done in the video)? I've seen both methods used and don't know which is better. Do you have any advice on when to use each approach?
your videos are amazing, thanks a lot 🙌
Glad you like them!
Please post a quick-start guide video on torchmeta!
Hey! I was wondering why are there multiple colors at the end when at the start there was only 1 sine wave? I'm confused where all the additional red and green lines came from.
Thanks :)
Can you please guide me, at a high level, on how to implement the same thing for an MLP using PyTorch Lightning?
Would be great if you could just import torch first 👀
Hey!! May I know why we give x as the input (x.split) and not y.split? Because our sine wave values are basically in the variable y.
Can you do a video using Transformers for time series? Have not found anything useful on yt so far. Thanks and br
please put in a playlist
Thanks for the helpful video. Is the code on github? I didn't find it and it would be very helpful to play around a little bit
Can you dive deeper into the various PyTorch package functions in a future video?
E.g. detach vs. item, .Tensor vs. .tensor, when to use the LongTensor dtype, ...?
Thanks and best
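Until such a video exists, here is a quick sketch of the differences the comment asks about (this is general PyTorch behavior, not something covered in the video):

```python
import torch

loss = torch.tensor(0.25, requires_grad=True) * 2

# .item(): pull a plain Python number out of a 0-dim tensor (e.g. for logging)
print(loss.item())            # 0.5, a float, no tensor machinery attached

# .detach(): same data, but cut off from the autograd graph
frozen = loss.detach()
print(frozen.requires_grad)   # False

# torch.tensor(data) infers the dtype from the data (the recommended form);
# torch.Tensor(...) is a legacy constructor that always produces float32.
a = torch.tensor([1, 2, 3])   # dtype: torch.int64 (i.e. a "LongTensor")
b = torch.Tensor([1, 2, 3])   # dtype: torch.float32
print(a.dtype, b.dtype)
```

The int64/LongTensor dtype matters mainly for things like class indices and embedding lookups, which require integer tensors.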
Thanks for the suggestions! Will think about this :)
Would we not want to initialize the hidden state and cell state outside of forward, so they capture long-term features? Since they are in forward, aren't we removing all notion of long-term connectivity, as they get cleared on every forward call?
Usually, you only want the LSTM to keep its memory during a single sequence. For example, if I have an LSTM that recognizes activity in videos, then I want it to keep the memory while processing the frames of one video, but to forget it for the next video. The long-term knowledge lives in the learned weights, not in the states.
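A small sketch of that distinction (names assumed): the weights persist across calls either way; the (h, c) state is either reset for a new, self-contained sequence or passed back in when one call continues another.

```python
import torch
import torch.nn as nn

cell = nn.LSTMCell(1, 8)  # weights here are learned and persist across calls

def run(seq, state=None):
    """Run a (batch, time) sequence through the cell, optionally
    resuming from a previous (h, c) state."""
    if state is None:                      # fresh memory: new sequence
        h = torch.zeros(seq.size(0), 8)
        c = torch.zeros(seq.size(0), 8)
    else:                                  # carried memory: continuation
        h, c = state
    for input_t in seq.split(1, dim=1):
        h, c = cell(input_t, (h, c))
    return h, c

x = torch.randn(2, 10)
state = run(x)          # zeros: treat x as a self-contained sequence
state = run(x, state)   # resume: treat x as continuing the previous one
```

So resetting in forward() does not "destroy" training progress; it just says each forward call is its own sequence.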
Idk, I'm a beginner and I use Jupyter notebooks. I copied the code exactly (it runs with no errors), but I didn't get any predictions or loss? Any idea what the cause might be?
Don't you "destroy" some of the knowledge learned during training by initialising the hidden and cell state as zeros in each forward pass? Or is this a better approach than initialising the states once in the beginning? Maybe you could elaborate on that? :)
you explained nothing
I am trying to run this code on my GPU. It should work, but it doesn't. device = torch.device("cuda" if torch.cuda.is_available() else "cpu") returns 'cuda', so my GPU is being detected. I also copied the training and test inputs and targets to the GPU with .to(device), as well as the model (model = LSTMPredictor(hiddenstates).to(device)). But I still get the error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat2 in method wrapper_CUDA_mm). It occurs in the optimizer step (optimizer.step(closure)). What do you think?
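The usual culprit for this error (an assumption, since the commenter's full code isn't shown) is that the hidden and cell states are created inside forward() with a bare torch.zeros(...), which lands on the CPU even when the model and inputs are on the GPU. Creating them on the input's device fixes it; a minimal sketch:

```python
import torch
import torch.nn as nn

class LSTMPredictor(nn.Module):
    def __init__(self, n_hidden=51):
        super().__init__()
        self.n_hidden = n_hidden
        self.lstm1 = nn.LSTMCell(1, n_hidden)
        self.linear = nn.Linear(n_hidden, 1)

    def forward(self, x):
        # Create the states on the SAME device as the input. Without
        # device=x.device they default to the CPU, and the first matmul
        # inside the cell raises exactly this mixed-device RuntimeError.
        h_t = torch.zeros(x.size(0), self.n_hidden, device=x.device)
        c_t = torch.zeros(x.size(0), self.n_hidden, device=x.device)
        outputs = []
        for input_t in x.split(1, dim=1):
            h_t, c_t = self.lstm1(input_t, (h_t, c_t))
            outputs.append(self.linear(h_t))
        return torch.cat(outputs, dim=1)
```

The same applies to any other tensors created fresh inside forward() or the training loop.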
Thanks for this video. It was a great help and cleared up some confusion. One question I had: for the training, why are you only using the values from y and not the x?
Why predict the sequence one by one? Can you treat each sequence as a feature and predict them at the same time?
LSTMs are recurrent networks: you need the result of the previous iteration to get the next.
That's the way they work, and it's also one of their main weaknesses.
When I run this I get this error:
'Tensor' object has no attribute 'append'
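That error usually means `outputs` was initialized as a tensor instead of a Python list; `.append()` only exists on lists. A minimal sketch of the intended pattern (the per-step values here are hypothetical stand-ins for the model's linear output):

```python
import torch

outputs = []                                  # a list, NOT torch.empty(...)
for t in range(5):
    step_out = torch.full((2, 1), float(t))   # stand-in for linear(h_t)
    outputs.append(step_out)                  # lists support .append()
result = torch.cat(outputs, dim=1)            # one (2, 5) tensor at the end
```

Collecting in a list and concatenating once at the end is also cheaper than growing a tensor step by step.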
shouldn't it be h_t2 and c_t2 for self.lstm2?
Why did we call super() inside the LSTMPredictor class? Is there any reason for this?
Yes, we have to do this to initialize the superclass correctly (this is a basic part of object-oriented programming in Python).
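A minimal sketch of why the call matters in PyTorch specifically: nn.Module's own __init__ sets up the machinery that registers parameters and submodules, so skipping it breaks the class.

```python
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        # Without this call, nn.Module's __init__ never runs, the
        # parameter/submodule registration machinery is missing, and
        # the assignment below raises an AttributeError.
        super().__init__()
        self.linear = nn.Linear(4, 2)
```

With the call in place, `list(Net().parameters())` picks up the linear layer's weight and bias automatically.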
I tried to run this code and it's not working. Where can I find the file?
You need one of the latest PyTorch versions for this. The link to the repo is in the video description.
Can you do a tutorial on Python data analysis and visualization covering the NumPy, pandas, and Matplotlib libraries?
where is the code
I understand the fact that your videos are "code along" style ones, BUT for the implementation there is too much of the HOW and, sadly, nothing of the WHY.
Hello, I am a beginner Python programmer. Can you please make a video on DSA?