A friendly introduction to deep reinforcement learning, Q-networks and policy gradients

  • Published 22 Dec 2024

COMMENTS •

  • @MeetYourBook · 1 year ago +19

    Hands down, this explanation of reinforcement learning is like winning a dance-off against a robot: smooth, on point, and utterly unbeatable!

  • @maxave7448 · 4 months ago +4

    Absolutely awesome explanation! I've been struggling to learn this concept because other tutorials were focusing on the wrong aspects of Q-learning and didn't get the message across. Yours, on the other hand, did an excellent job by starting with the interpretation of the Bellman equation and giving an intuitive visual explanation! Wonderful tutorial!
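
    For reference, the Bellman-style update this comment refers to, written in the standard Q-learning form (the video's exact notation may differ), is:

        Q(s, a) \leftarrow Q(s, a) + \alpha \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]

    where r is the reward for taking action a in state s, s' is the resulting state, \gamma is the discount factor, and \alpha is the learning rate.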

  • @jjhj_ · 2 years ago +12

    I've been binge-watching your "friendly intro to" series since yesterday and it has been amazing. I've worked with ML models as part of my studies and my work over the past two years, but even so, you've enriched my conceptual understanding by so much more than any of my professors could. Really appreciate your clever visualizations of what's going on "under the hood" of the ML/DL algorithms. Great videos, awesome teacher!

  • @-xx-7674 · 7 months ago +1

    This is probably the friendliest video that still covers all the important concepts of RL. Thank you!

  • @reyhanehhashempour8522 · 3 years ago +15

    Fantastic as always! Whenever I want to learn a new concept in AI, I always start with Luis's video(s) on that. Thank you so much, Luis!

  • @ShusmitaDasGupta · 4 months ago +1

    Your teaching style and process are so good. I didn't get distracted through the whole video. Thank you, sir, for teaching such valuable things in such a way.

  • @achyuthvishwamithra · 3 years ago +5

    I feel super fortunate to have come across your channel. You are doing an incredible job! Just incredible!

  • @srinivasanbalan5903 · 3 years ago +2

    One of the best videos on RL algorithms. Kudos to Dr. Serrano.

  • @manav9686 · 5 months ago +1

    Just subscribed to this channel after watching this video. Wonderful explanations combined with excellent visuals. I had difficulty understanding RL; your video made me understand it better. Thank you.

  • @riddhimanmoulick3407 · 1 year ago +2

    Thanks for such a great video! Your visual descriptions combined with your explanations really presented a wonderful conceptual understanding of Deep-RL fundamentals.

  • @renjithbaby · 3 years ago +3

    This is the simplest explanation I have seen on RL! 😍

  • @nishanthplays195 · 2 years ago +2

    No words, sir! Finally found another great YT channel ✨

  • @pandharpurkar_ · 3 years ago +8

    Luis is a master at explaining complex things simply! Thank you, Luis, for such great efforts.

  • @therockomanz · 2 years ago

    I'd like to thank the creators for this video. This is the best video to learn the basics of RL. Helped a lot in my learning path.

  • @EshwarNorthEast · 3 years ago +7

    The wait ends! Thank you sir!

  • @debobabai · 2 years ago

    Excellent explanation. I don't know why this video has so few views. It deserves a billion views.

  • @lebohangmbele283 · 3 years ago

    Wow. I can show this to my pre-school nephew and at the end of the video they will understand what RL is all about. Thanks.

  • @geletamekonnen2323 · 3 years ago +1

    I can't pass without appreciating this great, great lecture. Thanks, Luis Serrano. 😍

  • @randomdotint4285 · 3 years ago

    Oh my god. This was god-level teaching. How I envy your real-world students.

  • @BritskNguyen · 4 months ago

    This is a perfect introduction. It goes from the specific to the general.

  • @alsahlawi19 · 1 year ago

    This is by far the best video explaining DRL, many thanks!

  • @pauledam2174 · 4 months ago

    Wonderful explanation. I think it is by far the best I have seen!

  • @shreyashnadage3459 · 3 years ago +1

    Finally, here it is... been waiting for this for ages! Thanks, Luis! Regards from India.

  • @yo-sato · 2 years ago

    Excellent tutorial. I have recommended it to my students.

  • @piyaamarapalamure5927 · 1 year ago

    This is the best tutorial so far for Q-learning. Thank you so much 😍😍

  • @mariogalindoq · 3 years ago +2

    Luis: congratulations! Again a very good video, very well explained and with a beautiful presentation. Thank you.

  • @wfth1696 · 2 years ago

    One of the clearest explanations of the topic that I have seen. Excellent!

  • @code_with_om · 2 years ago

    After a day of searching, I found a great explanation 😀😀 Thank you so much!

  • @william_8844 · 1 year ago

    WTF!!!
    Like, I am halfway through and I am already blown away by the way you explain content. This has been the best video so far explaining RL... Wow. New sub ❤❤😅

  • @beltusnkwawir2908 · 2 years ago

    I love the analogy of the discount factor with the dollar depreciation
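
    To make the analogy concrete, here is a minimal Python sketch; the discount factor of 0.9 and the rewards are illustrative assumptions, not values from the video:

        # A reward k steps in the future is worth gamma**k of its face value today,
        # just as a dollar received later is worth less than a dollar in hand now.
        gamma = 0.9                # assumed discount factor, for illustration only
        rewards = [0, 0, 0, 5]     # e.g. three empty steps, then a +5 terminal cell

        discounted_return = sum(gamma**k * r for k, r in enumerate(rewards))
        print(discounted_return)   # 0.9**3 * 5 = 3.645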

  • @miguelramos3424 · 2 years ago

    It's the best video that I've seen about this topic, thanks.

  • @NoNTr1v1aL · 3 years ago +1

    Absolutely amazing video! You are my saviour!

  • @kr8432 · 11 months ago +1

    I am not stupid, but AI still does not come easy to me. Sometimes I wonder, besides having more slots in working memory, how a genius or simply more intelligent people think about this subject so that it comes more naturally to them. I feel like this video was a very good insight into how easy such a complicated topic can appear if you just have a very good intuitive understanding of abstract concepts. Very nicely done!

  • @jeromeeusebius · 3 years ago +2

    Luis, great video. Thanks for putting this together explaining the most important concepts and terms in Reinforcement Learning.

  • @ብርቱሰው · 6 months ago

    Thank you for the wonderful video. Please add more practical example videos for the application of reinforcement learning.

    • @SerranoAcademy · 6 months ago

      Thank you! Definitely! Here's a playlist of applications of RL to training large language models. ua-cam.com/play/PLs8w1Cdi-zvYviYYw_V3qe6SINReGF5M-.html

  • @overgeared · 3 years ago

    Excellent as always! Thank you from an MSc AI student working on DQNs.

  • @charanbirdi · 1 year ago

    Absolutely brilliant, especially the neural network and loss function explanation.

  • @karlbooklover · 1 year ago

    Best explanation I've seen so far.

  • @LuisGonzalez-jx2qy · 3 years ago +3

    Amazing work, fellow Luis! Looking forward to more of your videos.

  • @AyaAya-fh2wx · 2 years ago

    Thanks!

    • @SerranoAcademy · 2 years ago

      Thank you so much for your contribution Aynur! And I'm so glad you like the video! :)

  • @mutemoonshiner · 1 year ago

    Huge thanks for the nice and lucid content, especially for how to train the network, the loss function, and how to create the datasets.

  • @faisaldj · 3 years ago

    I wish at least my bachelor's math teacher had been like you, but I would like to be like you for my students.

  • @colabpro2615 · 3 years ago +2

    You're one of the best teachers I have ever come across!

  • @alexvass · 1 year ago

    Thanks

    • @SerranoAcademy · 1 year ago

      Wow, thank you so much for your kindness and generosity, @alexvass!

  • @jaivratsingh9966 · 3 years ago

    @Luis Serrano - thanks for this. Excellent!
    At 30:15, shouldn't (4,0) be -2, and hence (4,1) be -3, and so on?
    A query on policy training: if you freeze the video at 28:52 and look at the table, I see it as a random walk where you end up at a reward location and kind of infer the value (subtracting 1) from the next value point, coming up with 3, 2, ..., -1. Why would you say one should decrease p(->) for (0,0)?
    At (0,0) (or any chosen node on the simulated path), the moves always increase the value (a better value state), so the change should never be "decrease". Also, while training the net you don't use "Change", so why are we discussing "Change" at all?
    Shouldn't it simply be that the probability of the actual step taken at each point should be higher than the rest, since it points to a path leading to a reward?
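
    For anyone stuck on the same table, here is a minimal sketch of the "subtract 1 per step" gain computation the question describes; the numbers are illustrative assumptions, not the video's exact ones:

        # Walking back from the terminal reward, each earlier step on the path
        # gets a gain one less than the step after it, as in the table at 28:52.
        def path_gains(final_reward, path_length):
            return [final_reward - (path_length - 1 - t) for t in range(path_length)]

        print(path_gains(3, 5))    # [-1, 0, 1, 2, 3]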

  • @pedramhashemi5019 · 7 months ago

    A great introduction! Thank you sincerely for this gem!

  • @pellythirteen5654 · 3 years ago +9

    Fantastic! Having watched many teachings on this subject, your explanation really made things clear.
    Now my fingers are itching to try it out and write some Delphi code. I will start with your grid world first, but if that works I want to write a chess engine. I have already written a chess program using the alpha-beta algorithm, and it will be fun to compare it with a neural-network-based one.

  • @infinitamo · 2 years ago

    You are a godsend. Thank you so much!

  • @AlexisKM100 · 1 year ago

    God damn it, this explanation was just straightforward. I loved it; it helped me clarify many doubts I had, thanks :D
    Just how every explanation should be, concise and with practical examples.

  • @eeerrrrzzz · 3 years ago

    This video is a gem. Thank you.

  • @JessicaGiardina1996 · 1 month ago

    So comprehensive! Thank you!

  • @SetoFPV · 6 months ago

    Very good video; it makes very clear what deep reinforcement learning is, from the ground up.

  • @alexandermedina4950 · 2 years ago

    Great starting point for RL! Thank you.

  • @saphirvolvianemfogo1717 · 2 years ago

    Amazing explanation. Thank you, it gives me a good starting point on DRL

  • @Andy-rq6rq · 2 years ago

    Amazing explanation! I was left confused after the MIT RL lecture but it finally made sense after watching this

  • @kafaayari · 1 year ago

    Great lecture, Mr. Serrano, thanks. But some parts are inconsistent and confusing. For example, at 29:49, for the state (3,1) the best action is to move left, and the agent went left. However, you try to decrease its probability during training, as seen in the table. That doesn't make sense.

  • @seraphiusNoctis · 2 years ago +2

    Loved the video. Quick question on the policy network section, because something still seems a little "disjointed", in the sense that the roles of the two networks do not seem clear; I might be missing something…
    I don't understand why we would use a decreasing/recursive "gain" function instead of just using the value network to establish values for the policy. Doesn't the value network already build in a feedback mechanism that would be well suited to this?

  • @lucianoinso · 1 year ago

    Truly great video and explanation! Loved that you went deep (haha) into the details of the neural network, thanks!

  • @ahmedoreby2856 · 2 years ago

    Very good video with excellent elaboration of the equations. Thank you very much for this!

  • @AlexandriaLibraryGame · 2 years ago +1

    I don't understand how to train the NN at 34:09; what are the features and what is the label?
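
    Since the video does not spell this out on screen, here is a hedged sketch of one common way to lay out such a dataset: the features are the state, the label is the action actually taken there, and the gain acts as a per-example weight. The layout and numbers are illustrative assumptions:

        # One training row per step of a simulated path (assumed layout):
        # features = state coordinates, label = index of the action taken there,
        # weight = the gain, which scales how strongly that action is reinforced.
        dataset = [
            # (features),  label, weight
            ((0.0, 0.0),   1,     3.0),    # action 1 taken at (0,0), gain +3
            ((3.0, 1.0),   0,    -3.0),    # action 0 taken at (3,1), gain -3
        ]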

  • @li-pingho1441 · 1 year ago

    This is the best RL tutorial on the internet.

  • @RobertLugg · 1 year ago +2

    You are one of the best teachers around. Thank you. What if the grid is different or the end goals change location? Do you need to start training over?

    • @SerranoAcademy · 1 year ago +1

      Thank you! Great question. If the environment changes, in general you do have to start again. However, there may be cases in which you can piggyback on having learned the game in a simpler situation, so it depends.

  • @shreyasdhotrad1097 · 3 years ago +1

    Very intuitive as always.
    Expecting some more intuition on semi-supervised learning and energy models.
    Thank you so much, sir!! 🙏

  • @CrusadeVoyager · 3 years ago +1

    Nice vid with a great explanation of RL.

  • @juanodonnell · 2 years ago

    At 34:17, the result shown says that at point (3,1), with gain -3 and direction left, the action should be penalized and its weight decreased, but that is actually the most efficient move at that point. How can we reconcile that?

  • @francescserratosa3284 · 3 years ago +1

    Excellent video. Thanks a lot!!

  • @paedrufernando2351 · 3 years ago +1

    Cool... it took so long to drop this vid. I was expecting RL videos from your site earlier, but then I turned to Prof Oliver Siguad and completed RL there. Now I understand how DDPG works and its internals. But I definitely would want to see your take and perspective on this topic. So here I go again to watch this video on RL...

  • @banaiviktor6634 · 3 years ago

    Yes, agreed; there is no clear explanation on this topic apart from this video. Thanks a lot, it is awesome! :)

  • @RealNikolaus · 2 years ago

    Incredible video, I love the animations!

  • @zamin_graphy · 2 years ago

    Fantastic explanation.

  • @ZirTaaah · 2 years ago

    Best vid on the subject, for suuuuuuuure. I'm mad that I didn't see it earlier. Nice, bro!

  • @siddiqkawser2153 · 5 months ago

    You rock, dude! You just earned a new subscriber.

  • @mustafazuhair2830 · 3 years ago

    You have made my day, thank you!

  • @elimelechschreiber6937 · 2 years ago +1

    Thank you.
    Question: In the last section you use the term "gain" but actually use the "value" function, I believe. Shouldn't the gain be the difference of the value (in your example, always positive 1 then), i.e. the gained value associated with the given action?

  • @bjornnorenjobb · 3 years ago

    Wow, extremely good video, my friend! Big thanks!

  • @flwi · 2 years ago

    Wow - that was a very understandable explanation! Well done!

  • @fabianrestrepo82 · 10 months ago

    Hello. At 22:40, after having used an initial random value of 0.2 for the state with coordinates (2,3), how did you find the values of the neighboring states (4.9, 3.2, 1.3, -2.7) the first time? Was this also random?

    • @SerranoAcademy · 9 months ago

      Great question! Yes, I picked these numbers randomly. The point is that these may be values that a large neural network would output. I tried to make them really wrong, so that we see the neural network is not well trained and we need a loss function that notices this.
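
      To make this concrete: with these numbers, a squared-error loss against a Bellman-style target flags the prediction as badly wrong. The -1 step reward and the squared-error form are illustrative assumptions, not necessarily the video's exact recipe:

          prediction = 0.2                          # network's value for state (2,3)
          target = -1 + max(4.9, 3.2, 1.3, -2.7)    # best neighbor plus step penalty: 3.9
          loss = (prediction - target) ** 2         # 13.69, a large loss, as intended
          print(loss)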

  • @ottodgs4031 · 11 months ago

    Very nice video! When you say that the label of the new dataset is a "big increase" or a "small decrease", what is that in practice? Just the gain?
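
    A hedged answer in code, assuming a standard REINFORCE-style loss rather than the video's exact recipe: in practice the label is the action actually taken, and the gain scales the update, so a large positive gain produces a big increase in that action's probability and a small negative gain a small decrease. All names here are illustrative:

        import torch
        import torch.nn as nn

        # Hypothetical policy network: 2 state coordinates in, 4 action logits out.
        policy_net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 4))

        state = torch.tensor([2.0, 3.0])    # features: grid coordinates
        action = 1                          # the action actually taken
        gain = 3.0                          # try gain = -0.5 for a "small decrease"

        log_probs = torch.log_softmax(policy_net(state), dim=-1)
        loss = -gain * log_probs[action]    # minimizing this raises p(action) when gain > 0
        loss.backward()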

  • @AyaAya-fh2wx · 2 years ago

    You are a genius!! Thank you!

  • @bluedade2100 · 1 year ago

    Hi, I am having a hard time understanding how at 29:49 we get "decrease" as the change for the bottom three rows. For example, the 7th row, with gain -2, has "decrease" as the change, but we actually increased by 1. Could someone elaborate on this?

  • @TheLionSaidMeow · 3 years ago +1

    I have a question about training the policy neural networks.
    Why are we calling it a gain? And if it is a gain, it's a gain relative to which reference position?
    Because if you take a sufficiently long path, the starting position will keep getting large negative values, and that position might become an undesirable position to go to.
    But if that position lies on the optimal path to the best possible outcome, that could be a problem because of the negative value associated with it as a result of the training.
    For example, if a path from (5,4) goes left until it hits the boundary, goes up, and goes right until it hits the +5, the (5,4) location gets a -6 gain, which would make (5,4) undesirable even though it's one step away from +5.
    Are we saying that the probability of such a path occurring will be disproportionately compensated by the probability of more conducive paths occurring that weight location (5,4) much more positively?
    Also, this man probably has the clearest understanding of these concepts, to the extent that he can explain them so clearly and lucidly. Hands down, the best explanation for beginners and intermediates. Excellent work! 👏🏽👏🏽

    • @bonob0123 · 9 months ago

      -6 would be the gain of the policy move, i.e., moving left from (5,4), which is indeed an undesirable move. It is not a value of the cell but of the direction of movement.

  • @rohitchan007 · 3 years ago

    This is by far the best explanation.

  • @Shaunmcdonogh-shaunsurfing · 2 years ago

    Excellent video! Hoping for more on RL.

  • @zeio-nara · 2 years ago

    An excellent explanation, thank you

  • @emanuelfratrik1251 · 2 years ago

    Excellent explanation! Thank you!

  • @diwakerkumar5910 · 1 year ago +1

    Thanks 🙏

  • @studgaming6160 · 2 years ago

    Finally, a good video on RL.

  • @AI_ML_DL_LLM · 3 years ago +1

    Great video. A question: if I go for the value network, do I still need the policy network too, or vice versa? By having only one of them, can I get to my target? Thanks in advance.

  • @roshanid6523 · 3 years ago +1

    Thanks for sharing

  • @msantami · 3 years ago

    Thanks, great video. Bought the book!

    • @SerranoAcademy · 3 years ago

      Great to hear, thank you! I hope you like it!

  • @leonsaurabh21 · 6 months ago

    Great explanation

  • @nothing21797 · 1 year ago +1

    Wonderful!!!

  • @sricinu · 3 years ago

    Excellent explanation!

  • @ahmarhussain8720 · 2 years ago

    Amazing explanation!

  • @antonioriz · 3 years ago

    This is simply GREAT! I would love to follow more videos on the topic of reinforcement learning. By the way, I'm really enjoying your book Grokking Machine Learning, but I would like to know more about RL.

  • @paul-andrejacques2488 · 3 years ago

    Just Fantastic. Thank you

  • @prakashselvakumar5867 · 3 years ago

    Very well explained! Thank you

  • @wilmarariza9020 · 3 years ago

    Excellent! Luis.

  • @ahmedshamz · 8 months ago

    Thanks for these videos, Luis. Are these from a course?

  • @bostonlife8589 · 2 years ago

    Fantastic explanation!