LSTM Neural Networks for Time Series Prediction - IoT Data Science Conference - Jakob Aungiers

  • Published 6 Oct 2024

COMMENTS • 47

  • @chrisogonas
    @chrisogonas 5 years ago +5

    That was a pretty clear presentation, and the presenter did not adopt a know-it-all attitude. Superb!

  • @chaopan7205
    @chaopan7205 7 years ago +44

    That audience talked a bit too much.

    • @Insane_Kane
      @Insane_Kane 6 years ago +2

      Chao Pan, questions are cool; anecdotal "questabrags" suck.

    • @zes7215
      @zes7215 6 years ago

      No such thing as too much; any talk can be perfect.

  • @double_j3867
    @double_j3867 7 years ago +5

    Well done on LSTM explanation -- very thorough.

  • @trieunguyen336
    @trieunguyen336 5 years ago +5

    Useful, much more practical than Siraj Raval's.

    • @shivamkumar-qp1jm
      @shivamkumar-qp1jm 5 years ago

      I think Siraj's video lectures are not for beginners; if you have advanced knowledge of AI, then they are good for you.

  • @eliassobrefire
    @eliassobrefire 5 years ago +1

    Nice work and presentation!

  • @iloveno3
    @iloveno3 6 years ago

    I was very entertained. Thank you very much for sharing.

  • @G3ForceX
    @G3ForceX 7 years ago

    Thanks a lot for the code walkthrough, very helpful!

  • @GeronimoAlbornoz
    @GeronimoAlbornoz 3 years ago +1

    Thank you, Jakob! Are there any advances in the state of the art that you've seen since this presentation? More than four years of experience since then, pretty cool!! Good for you.

    • @JakobAungiers
      @JakobAungiers  3 years ago +2

      Absolutely. Four years on, the state-of-the-art models have evolved past LSTMs to attention-based models and, more recently, Compressive Transformer models. That's not to say LSTMs aren't still useful ;)

  • @acmsong3899
    @acmsong3899 4 years ago

    Cool. I have learned a lot from your code. Thanks.

  • @oldcrafty6720
    @oldcrafty6720 4 years ago

    Great video!

  • @chrisjfox8715
    @chrisjfox8715 3 years ago

    At 16:39, I am unclear on what is meant by "Shift sequence window to remove 0th element and push predicted value as nth element."
    Is this just saying that the window slides forward one point and now includes the predicted value? I understand that concept, but...
    I'm picturing, and the math works out such that, 100 windows of size 50 each would go "perfectly" into the 5001 datapoints with no overlap. Pushing each size-50 window forward only one datapoint at a time would introduce overlap (intentionally so, I would believe) and result in far more than 100 windows. What am I missing?
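
For context, overlapping windows are the intent here; a minimal NumPy sketch with toy data (not the actual sp500 series) shows how many windows a one-point slide actually yields:

```python
import numpy as np

# Toy stand-in for the ~5001-point series discussed above.
data = np.arange(5001, dtype=float)
seq_len = 50

# Slide the window forward one point at a time, so consecutive windows
# overlap in 49 of their 50 elements.
windows = np.array([data[i:i + seq_len] for i in range(len(data) - seq_len)])

print(windows.shape)  # (4951, 50) -- far more than 100 windows, by design
```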

  • @Bokgat
    @Bokgat 3 years ago

    Question is: if 50 steps feed into 1 sample, and the result of the sample is y_predicted − y_actual for observation 51, then, coming back to the first audience question, what is fed into the input neuron? Y − Y_predict? If so, how are we then accounting for feature2 = X2 (e.g. opening price), feature3 = X3 (e.g. volume), etc.? I'd really appreciate any answer here; this is still confusing to me. Second question: I get that a batch is defined by the number of sets of, say, 50 observations (AKA samples), but how are these chained to each other through each epoch run? My understanding is that each batch is one complete neural net, so how are they linked up?
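
On the multiple-features part: in the Keras API the code uses, extra features per timestep go into the third axis of the input tensor, not into the target. A minimal sketch (layer sizes and feature names are illustrative, not from the talk):

```python
import numpy as np
from tensorflow import keras

timesteps, n_features = 50, 3   # e.g. close, open, volume (hypothetical features)
X = np.random.rand(200, timesteps, n_features)   # 200 windows of 50 timesteps each
y = np.random.rand(200, 1)                       # target: the value at step 51

model = keras.Sequential([
    keras.layers.LSTM(64, input_shape=(timesteps, n_features)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, batch_size=32, epochs=1, verbose=0)
```

On the batch question: a batch is not a separate network; every batch is just a slice of the windows, and each batch drives one gradient update of the same single set of weights.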

  • @Yustiks
    @Yustiks 6 years ago +2

    Skydiving? You are awesome!

  • @pietroaminpuddu1850
    @pietroaminpuddu1850 4 years ago

    Very disappointed with this code once I understood that y_train is composed of a sequence of 30-day returns. Once you introduce a system to de-normalise both y_test and the y predictions, you find out, unfortunately, that, as with other codes where the data are not 'normalised' but scaled with some scaler, the predictions are shifted to the right as usual. You do mention using the code for volatility predictions, but since the prediction power is lagged...

  • @rorycawley
    @rorycawley 6 years ago +3

    The training data is shuffled (np.random.shuffle(train)); would that affect the prediction, since the order in which the values appear is vital for finding patterns?

    • @simonepozzoli
      @simonepozzoli 5 years ago +1

      He first split the data into training sequences, then shuffled those sequences. He's using LSTMs in a way such that the internal state of the network is reset at the end of each input sequence, so shuffling the sequences does not affect the predictions. Shuffling the input data is in any case advisable, since it helps the learning algorithm converge faster when using backprop with a small batch size.

    • @Imegirin
      @Imegirin 4 years ago

      As a little supplement to Simone's comment: that would indeed be a problem if the LSTM were used not on fixed-length sequences but on arbitrarily long ones. In that scenario (arbitrary length) you would feed measurements/samples one timestep at a time, and you would have to set the `stateful` parameter of `layers.LSTM` to True. With that config the LSTM keeps its internal state until you explicitly call `model.reset_states()`, which is advised after each epoch (for this specific scenario). Shuffling the data before feeding such a model would indeed lose all the patterns in the data.
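
A sketch of the stateful variant described above, assuming the tf.keras API the comment references (all sizes are illustrative); state persists across batches until `reset_states()` is called, so here the data must not be shuffled:

```python
import numpy as np
from tensorflow import keras

batch_size, timesteps, n_features = 1, 1, 1
model = keras.Sequential([
    keras.layers.LSTM(32, stateful=True,
                      batch_input_shape=(batch_size, timesteps, n_features)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Toy series fed one timestep per sample; order matters, so shuffle=False.
X = np.sin(np.linspace(0, 20, 200)).reshape(-1, 1, 1)
y = np.roll(X[:, 0, 0], -1).reshape(-1, 1)

for epoch in range(3):
    model.fit(X, y, batch_size=batch_size, epochs=1, shuffle=False, verbose=0)
    model.reset_states()   # clear the carried state between epochs, as advised above
```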

  • @canalfishing4622
    @canalfishing4622 7 years ago

    This is great.

  • @chaoschao9432
    @chaoschao9432 7 years ago

    Great one!

  • @xinnywillwin
    @xinnywillwin 5 years ago

    Cool content!

  • @lighttheoryllc4337
    @lighttheoryllc4337 3 years ago +1

    Turn off the overhead lights, for goodness' sake.

  • @sz8558
    @sz8558 4 years ago

    Great presentation... unfortunate that participants couldn't just shut up and take it all in.

  • @peterhenry1109
    @peterhenry1109 7 years ago

    Hi Jakob, rather than trying to predict the S&P 500 close prices, can it instead predict whether the next close is going to be up or down? This would limit the output to either 1 or 0, and it would also be easier to evaluate whether the network is learning anything.
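
A sketch of the reframing the comment suggests, assuming a Keras setup like the talk's (the data and layer sizes here are made up): a sigmoid output with binary cross-entropy turns the task into up/down classification.

```python
import numpy as np
from tensorflow import keras

closes = np.random.randn(1000).cumsum() + 100.0   # hypothetical close-price series
seq_len = 50
X = np.array([closes[i:i + seq_len] for i in range(len(closes) - seq_len)])
y = (closes[seq_len:] > closes[seq_len - 1:-1]).astype(float)  # 1 = next close is up

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(seq_len, 1)),
    keras.layers.Dense(1, activation="sigmoid"),   # outputs P(next close is up)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X[..., np.newaxis], y, epochs=1, verbose=0)
```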

  • @maryammahmoudigharaie
    @maryammahmoudigharaie 4 years ago

    Hi! Thank you so very much for the video! I have a question: in order to compute my prediction errors I need to de-normalise the values. I understand the formula for the de-normalisation, as you mentioned on your website, but I don't know how the window size for the testing data works. I mean, should I run the exact same code for the normalisation, with a different formula, on the predictions?
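
Assuming the window normalisation from the linked article, n_i = p_i / p_0 − 1 with p_0 the first raw value of each window, de-normalising only needs that stored p_0, and the same windowing is applied to the test data. A small sketch (function names are illustrative):

```python
import numpy as np

def normalise_window(window):
    """Normalise a window relative to its first value; return the value too."""
    p0 = window[0]
    return window / p0 - 1.0, p0

def denormalise(n_values, p0):
    """Invert n = p / p0 - 1 back to the original price scale."""
    return p0 * (n_values + 1.0)

window, p0 = normalise_window(np.array([100.0, 102.0, 101.0]))
print(denormalise(window, p0))   # recovers [100. 102. 101.]
```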

  • @harrywatts4194
    @harrywatts4194 7 years ago

    Thanks for uploading this, Jakob. Really clear explanation on LSTMs. I'm interested in adapting this code to accept multiple input dimensions from a CSV but am struggling with importing and normalising the vector. Do you have any advice on how to do this?

    • @JakobAungiers
      @JakobAungiers  7 years ago +2

      Thanks Harry, I try to elaborate on the importing and normalising a bit more in my blog article: www.jakob-aungiers.com/articles/a/LSTM-Neural-Network-for-Time-Series-Prediction
      And if you have any more issues with the import bit, just Google "importing csv into numpy" and you'll get lots of examples.

    • @harrywatts4194
      @harrywatts4194 7 years ago

      Perfect, thanks again!

    • @medadrufus
      @medadrufus 7 years ago

      Why don't you try using pandas for importing CSV files? It's really simple. Just check Google for that.
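
A small sketch of the pandas route suggested above (the file and column names are hypothetical, not from the repo):

```python
import pandas as pd

# Hypothetical multi-column CSV of price data.
df = pd.read_csv("sp500_multi.csv", usecols=["Close", "Open", "Volume"])

# Normalise each column relative to its first value, in the spirit of the
# windowed normalisation from the talk, then hand a plain array to the model code.
normalised = df / df.iloc[0] - 1
data = normalised.to_numpy()   # shape: (n_rows, 3)
```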

  • @ImranKhan-fi2sm
    @ImranKhan-fi2sm 5 years ago

    Hi,
    how do I handle the persistence model problem? While doing time series analysis I get output which seems to be one time step ahead of the actual series. How do I rectify this problem? I am getting this with several ML and DL algorithms, as well as with statistical ones. Please do reply!
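
One way to check for the persistence behaviour described above is to compare the model's error against a naive last-value forecast; if the two are nearly equal, the model has likely just learned to echo the previous observation. A sketch (array names are illustrative):

```python
import numpy as np

def persistence_check(y_true, y_pred):
    """Compare model RMSE against the naive 'repeat the last value' forecast."""
    naive = y_true[:-1]   # forecast: tomorrow equals today
    rmse_model = np.sqrt(np.mean((y_pred[1:] - y_true[1:]) ** 2))
    rmse_naive = np.sqrt(np.mean((naive - y_true[1:]) ** 2))
    return rmse_model, rmse_naive
```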

  • @sandy15342
    @sandy15342 7 years ago +2

    Hi Jakob, nice explanation. I am a bit confused about the prediction part; can you tell me a bit more? The LSTM model predicts on X_test, which comes from the sp500.csv file, and that suggests that the dataset the model is validated against is known in advance. Now, how do we extend the predictions to the situation where we don't have data, i.e. into the future where there is no dataset?
    Or is my understanding of the dataset wrong?

    • @palashjhamnani186
      @palashjhamnani186 7 years ago +1

      Got it! He has mentioned it in the blog linked: www.jakob-aungiers.com/articles/a/LSTM-Neural-Network-for-Time-Series-Prediction
      "If however we want to do real magic and predict many time steps ahead, we only use the first window from the testing data as an initiation window. At each time step we then pop the oldest entry out of the rear of the window and append the prediction for the next time step to the front of the window, in essence shifting the window along so it slowly builds itself with predictions, until the window is full of only predicted values (in our case, as our window is of size 50, this would occur after 50 time steps). We then keep this up indefinitely, predicting the next time step on the predictions of the previous future time steps, to hopefully see an emerging trend."
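
A sketch of that multi-step loop in Python (the function name and signature are illustrative, assuming a trained Keras model that takes input of shape (1, seq_len, 1)):

```python
import numpy as np

def predict_sequence_full(model, first_window, n_steps):
    """Start from one real window, then keep shifting predictions into it
    until the window contains only predicted values."""
    window = list(first_window)
    predicted = []
    for _ in range(n_steps):
        x = np.array(window)[np.newaxis, :, np.newaxis]   # shape (1, seq_len, 1)
        yhat = float(model.predict(x, verbose=0)[0, 0])
        predicted.append(yhat)
        window = window[1:] + [yhat]   # drop the oldest point, append the prediction
    return predicted
```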

  • @maisamwasti
    @maisamwasti 6 years ago

    Jakob, if I have one data point for every day of the year, do I need a window size of at least 365 to capture the yearly seasonality?
    I think so, since you are shuffling all the windows in the training set too.

  • @franciscolinaje3745
    @franciscolinaje3745 6 years ago

    How can you use the predictions to calculate expected returns?

  • @omeryalcn5797
    @omeryalcn5797 6 years ago

    There is only one thing this video will remind me of, and that is the necklace.

  • @bingozhao8280
    @bingozhao8280 4 years ago

    Why no CC?

  • @student3506
    @student3506 4 years ago

    Hi Jakob, thanks a lot for such a framework.
    But I am a bit confused:
    how do we denormalise the predicted value?
    For example:
        X          y
        [1, 2, 3]  [4]
        [2, 3, 4]  [5]
        [3, 4, 5]  [x]
    If the above example is normalised by the given formula, how can we denormalise when we predict 4, 5, and x?
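
Assuming the p_i / p_0 − 1 formula from the linked article is the one the comment refers to, a worked version of the toy example:

```python
# Window [1, 2, 3] -> [4]:  p0 = 1, X_norm = [0.0, 1.0, 2.0], y_norm = 4/1 - 1 = 3.0
# Window [2, 3, 4] -> [5]:  p0 = 2, X_norm = [0.0, 0.5, 1.0], y_norm = 5/2 - 1 = 1.5
# To denormalise a prediction yhat_norm for window [3, 4, 5] (p0 = 3):
yhat_norm = 0.9                   # hypothetical model output
x_pred = 3 * (yhat_norm + 1.0)    # = 5.7 on the original scale
```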

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 5 years ago +1

    It's hard to figure out exactly what the input data matrix is. The link to the CSV file no longer works. Thanks.

    • @stom4dongen
      @stom4dongen 5 years ago

      You can find it in the data folder on the GitHub repo; it worked for me :D

  • @PinkFloydTheDarkSide
    @PinkFloydTheDarkSide 7 years ago

    Does RNN/LSTM consider seasonality?

  • @GK-oj3cn
    @GK-oj3cn 7 years ago

    It seems to me that this NN predicts a reversion to the mean value or stationary state of the series, so its predictive power seems very doubtful.

    • @JakobAungiers
      @JakobAungiers  7 years ago +3

      That's incorrect; the NN doesn't revert to the series' mean values at all and is mapping higher-level non-linear relationships. However, there is an issue, especially prevalent with a time series like stock prices, which this particular NN does not deal with: time series non-stationarity. There is work being done to tackle that via Bayesian nonparametrics within LSTM NNs, but that work is far outside the scope of this video/talk.