COMMENTS •

  • @ai4finance_byfintech
    @ai4finance_byfintech 3 years ago +19

    FinRL: A Deep Reinforcement Learning Library for Quantitative Finance.

  • @snivesz32
    @snivesz32 3 years ago +20

    I think if you are trying to produce an RL-based solution, you should compare it to a benchmark of a random decision agent. When they get approximately the same results, you will realize that the model has no statistically significant edge. Another test would be to run the RL agent on artificial random-walk data instead of real data and see if there is a difference. If they both perform similarly well, then you know the agent is not learning anything beyond memorizing the data it was fed. This is the age-old bias-variance tradeoff problem at its root. Once you reduce the bias to a level that generalizes well, you no longer find any information gain.
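
A minimal sketch of the two sanity checks described in the comment above, assuming a gym-style trading environment (the `env` object and its reset/step API are assumptions here, not taken from the presentation or its code): a uniformly random policy run on the real data as a benchmark, and a synthetic random-walk price series with no learnable structure.

```python
import numpy as np

def random_agent_baseline(env, n_episodes=100):
    """Benchmark: run a uniformly random policy on the same environment
    the RL agent was evaluated on, and record episode returns."""
    episode_returns = []
    for _ in range(n_episodes):
        obs, done, total_reward = env.reset(), False, 0.0
        while not done:
            action = env.action_space.sample()          # random buy/sell/hold
            obs, reward, done, info = env.step(action)  # classic gym API assumed
            total_reward += reward
        episode_returns.append(total_reward)
    return np.mean(episode_returns), np.std(episode_returns)

def random_walk_prices(n_steps, start_price=100.0, daily_vol=0.01, seed=0):
    """Synthetic geometric random walk: price data with no exploitable signal."""
    rng = np.random.default_rng(seed)
    log_returns = rng.normal(loc=0.0, scale=daily_vol, size=n_steps)
    return start_price * np.exp(np.cumsum(log_returns))
```

If the trained agent's returns are statistically indistinguishable from the random baseline, or it performs about as well on the random-walk series as on real data, that supports the "no real edge" conclusion above.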

  • @hannann6416
    @hannann6416 2 years ago +2

    How well does your model generalize to new unseen data?

  • @kevinalejandro3121
    @kevinalejandro3121 3 years ago +1

    If you have the trading log of a consistent trader and you feed the reinforcement learning agent with that data, can it learn to trade like that trader?

  • @AIstepbystep366
    @AIstepbystep366 2 years ago +1

    Would it be possible to share the source code for this algorithm?

  • @redcabinstudios7248
    @redcabinstudios7248 4 years ago +3

    Very good study, appreciate it. I am testing algos in small real trades and am also interested in implementing RL. If you want to share, give me a buzz.

    • @meltjl
      @meltjl 4 years ago +8

      The code is available here if you are interested in exploring it:
      github.com/meltjl/RL-Trading/blob/master/README.md

    • @rakhasaputra6985
      @rakhasaputra6985 4 years ago +3

      Thank you @mel tjl

    • @MatloobKhushi
      @MatloobKhushi 4 years ago

      @@meltjl Thanks Melissa.

  • @guregori_san
    @guregori_san 3 years ago

    I checked the code on GitHub and didn't see any transformation of the data. Do you normalize the data at some point, or do you use the raw values exactly as they are output by the indicators? (See the normalization sketch after this thread.)

    • @kadourkadouri3505
      @kadourkadouri3505 1 year ago

      You're probably referring to those dumb tutorials where values are normalized (or standardized). It doesn't work that way. Those people mostly come from computer-engineering backgrounds and are, statistically speaking, more likely to be Python users. So if you want to gain some knowledge of quantitative finance, I strongly suggest you look for R tutorials instead.
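
Regarding the normalization question above, one common approach (a hypothetical sketch, not taken from the repo) is to standardize the indicator columns using statistics computed on the training window only, so nothing from the test period leaks into the features:

```python
import pandas as pd

def zscore_normalize(train: pd.DataFrame, test: pd.DataFrame, cols):
    """Standardize indicator columns with mean/std fitted on the training set only."""
    mu = train[cols].mean()
    sigma = train[cols].std()
    return (train[cols] - mu) / sigma, (test[cols] - mu) / sigma

# Example usage (column names are placeholders, not the repo's actual feature names):
# train_norm, test_norm = zscore_normalize(train_df, test_df, ["rsi", "macd", "cci"])
```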

  • @NIKDEFAULT
    @NIKDEFAULT 3 years ago

    I am a student at USYD; is it possible for me to get the files for this project?

    • @MatloobKhushi
      @MatloobKhushi 3 years ago +1

      The code is available here: github.com/meltjl/RL-Trading/blob/master/README.md

  • @monanica7331
    @monanica7331 3 years ago +1

    BTC to $75K by the end of this year, and control of the currency is already decentralised. Now the China disruption would simply decentralise the mining setup for the better.

  • @phongdang2874
    @phongdang2874 4 years ago

    I think the non-indicator results are real. If you think about it, if yesterday's price action is similar to today's, why should the AI be forced to make a decision? If yesterday was a red candle and today is a red candle, the AI would probably revert to a "how many red candles in a row" (Heikin Ashi) strategy. In Heikin Ashi, if price changes direction two times in a month, then a trader is only going to make two trades. This explains why the non-indicator AI made a minimal number of decisions.
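
As an illustration of the "red candles in a row" idea in the comment above, here is a minimal sketch of the standard Heikin Ashi transform and a streak counter (generic pandas code, not taken from the presentation or its repo):

```python
import pandas as pd

def heikin_ashi(df: pd.DataFrame) -> pd.DataFrame:
    """Convert regular OHLC bars into Heikin Ashi candles."""
    ha = pd.DataFrame(index=df.index)
    ha["close"] = (df["open"] + df["high"] + df["low"] + df["close"]) / 4
    ha_open = [(df["open"].iloc[0] + df["close"].iloc[0]) / 2]
    for i in range(1, len(df)):
        ha_open.append((ha_open[i - 1] + ha["close"].iloc[i - 1]) / 2)
    ha["open"] = ha_open
    ha["high"] = pd.concat([df["high"], ha["open"], ha["close"]], axis=1).max(axis=1)
    ha["low"] = pd.concat([df["low"], ha["open"], ha["close"]], axis=1).min(axis=1)
    return ha

def red_streak(ha: pd.DataFrame) -> pd.Series:
    """Count of consecutive bearish (red) Heikin Ashi candles ending at each bar."""
    red = (ha["close"] < ha["open"]).astype(int)
    return red.groupby((red == 0).cumsum()).cumsum()
```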

  • @ihebbibani7122
    @ihebbibani7122 3 years ago +1

    How can you give a presentation of an OVERFITTED model? Worse than that, how does the professor even let you stand in front of people and give the presentation... Incredible...
    However,
    it is good to know that technical indicators change the behaviour of the agent.
    I'm not sure that your model (PPO2) performs better than DDPG, since it is overfitting. Actually, I'm sure it will be worse than DDPG: when a commission fee is integrated, it already performs worse than DDPG.
    Hope you'll be more serious next time...

    • @juhanbae7231
      @juhanbae7231 3 years ago

      Why would you say it is overfitted?

    • @meltjl
      @meltjl 3 years ago +2

      @Iheb
      Perhaps in your haste to make a quick judgment, you have skipped through the presentation and have completely missed the point.
      The presentation shows the differences between training with and without technical indicators and how using the indicators reduced overfitting.
      - At 11:21, there is a comparison between DDPG and PPO2 over the same test date range, in which PPO2 shows a marginal improvement.
      - The DDPG study was performed under the assumption of zero commission. At 12:24, the table compares the test results of PPO2 with technical indicators under various commission rates (a generic sketch of how such a commission deduction can be applied follows this reply).
      The presentation is part of the course curriculum at the university, which allows students to learn and present in front of their fellow classmates. You have the right to comment on the topic, but please exercise some respect and do not attack the professor or the presenter.
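
As a generic illustration of the commission comparison mentioned above (an assumption about how proportional costs are commonly applied in backtests, not the exact method used in the presentation), commission is typically charged on turnover, i.e. whenever the position changes:

```python
import numpy as np

def apply_commission(gross_returns, positions, commission_rate=0.001):
    """Deduct a proportional commission each time the position changes,
    so trading costs scale with turnover rather than with holding time."""
    positions = np.asarray(positions, dtype=float)
    gross_returns = np.asarray(gross_returns, dtype=float)
    turnover = np.abs(np.diff(positions, prepend=0.0))  # fraction of capital traded each step
    return gross_returns - commission_rate * turnover
```

Re-running the evaluation across several commission_rate values shows how quickly a strategy's apparent edge erodes once costs are included.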

    • @ihebbibani7122
      @ihebbibani7122 3 years ago

      @@meltjl
      -- If I remember correctly from the last time I watched this video, the reinforcement learning agent performs better without technical indicators on the training set but not on the test set; with technical indicators, however, it was far more stable than the previous one on both the train and test sets, which according to me makes it less profitable indeed but more stable, and thus BETTER.
      -- I know that your goal was to compare two models and show whether yours improved, without caring about overfitting, but you SHOULD care, for the sake of the presentation and for doing things professionally.
      --
      masquerade. Also, you feel attacked because you know deep inside that you acted in bad faith. That's it.