A friendly introduction to deep reinforcement learning, Q-networks and policy gradients

  • Published May 20, 2024
  • A video about reinforcement learning, Q-networks, and policy gradients, explained in a friendly tone with examples and figures.
    Introduction to neural networks: • A friendly introductio...
    Introduction: (0:00)
    Markov decision processes (MDP): (1:09)
    Rewards: (5:39)
    Discount factor: (8:51)
    Bellman equation: (10:48)
    Solving the Bellman equation: (12:43)
    Deterministic vs stochastic processes: (16:29)
    Neural networks: (19:15)
    Value neural networks: (21:44)
    Policy neural networks: (25:44)
    Training the policy neural network: (30:46)
    Conclusion: (34:53)
    Announcement: Book by Luis Serrano! Grokking Machine Learning. bit.ly/grokkingML
    40% discount code: serranoyt
  • Science & Technology
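
Since the chapters above walk from MDPs through the Bellman equation to grid-world examples, here is a minimal value-iteration sketch in Python. It is a companion illustration, not the video's code: the 3×4 grid, the reward placement, and γ = 0.9 are assumed stand-ins.

```python
# Value iteration on a toy grid world (hypothetical layout).
# Deterministic Bellman update: V(s) = max over moves of [ R(s') + gamma * V(s') ]
import numpy as np

ROWS, COLS = 3, 4
GAMMA = 0.9                                  # discount factor
rewards = np.zeros((ROWS, COLS))
rewards[0, 3], rewards[1, 3] = +1.0, -1.0    # assumed goal and penalty cells
terminal = {(0, 3), (1, 3)}

V = np.zeros((ROWS, COLS))
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

for _ in range(50):                          # sweep until the values settle
    new_V = np.zeros_like(V)
    for r in range(ROWS):
        for c in range(COLS):
            if (r, c) in terminal:
                continue                     # terminal states keep V = 0
            best = float("-inf")
            for dr, dc in moves:
                nr, nc = r + dr, c + dc
                if 0 <= nr < ROWS and 0 <= nc < COLS:
                    best = max(best, rewards[nr, nc] + GAMMA * V[nr, nc])
            new_V[r, c] = best
    V = new_V

print(np.round(V, 2))                        # values ripple backward from the goal
```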

COMMENTS • 145

  • @-xx-7674
    @-xx-7674 6 days ago

    This is probably the friendliest video that still covers all the important concepts of RL, thank you

  • @zentootv4687
    @zentootv4687 5 months ago +11

    Hands down, this explanation of reinforcement learning is like winning a dance-off against a robot: smooth, on point, and utterly unbeatable!

  • @reyhanehhashempour8522
    @reyhanehhashempour8522 2 years ago +13

    Fantastic as always! Whenever I want to learn a new concept in AI, I always start with Luis's video(s) on that. Thank you so much, Luis!

  • @achyuthvishwamithra
    @achyuthvishwamithra 2 years ago +4

    I feel super fortunate to have come across your channel. You are doing an incredible job! Just incredible!

  • @srinivasanbalan5903
    @srinivasanbalan5903 3 years ago +1

    One of the best videos on RL algorithms. Kudos to Dr. Serrano.

  • @jeromeeusebius
    @jeromeeusebius 2 years ago +2

    Luis, great video. Thanks for putting this together explaining the most important concepts and terms in Reinforcement Learning.

  • @wfth1696
    @wfth1696 1 year ago

    One of the clearest explanations of the topic that I have seen. Excellent!

  • @colabpro2615
    @colabpro2615 3 years ago +1

    you're one of the best teachers I have ever come across!

  • @riddhimanmoulick3407
    @riddhimanmoulick3407 7 months ago +2

    Thanks for such a great video! Your visual descriptions combined with your explanations really presented a wonderful conceptual understanding of Deep-RL fundamentals.

  • @LuisGonzalez-jx2qy
    @LuisGonzalez-jx2qy 3 years ago +3

    Amazing work, fellow Luis! Looking forward to more of your videos

  • @EshwarNorthEast
    @EshwarNorthEast 3 years ago +7

    The wait ends! Thank you sir!

  • @renjithbaby
    @renjithbaby 3 years ago +3

    This is the simplest explanation I have seen on RL! 😍

  • @shreyasdhotrad1097
    @shreyasdhotrad1097 3 years ago +1

    Very intuitive as always.
    Expecting some more intuition on semi-supervised learning and energy models.
    Thank you so much, sir!! 🙏

  • @therockomanz
    @therockomanz 2 years ago

    I'd like to thank the creators for this video. This is the best video to learn the basics of RL. Helped a lot in my learning path.

  • @alsahlawi19
    @alsahlawi19 1 year ago

    This is by far the best video explaining DRL, many thanks!

  • @jjhj_
    @jjhj_ 1 year ago +10

    I've been binge-watching your "friendly intro to" series since yesterday and it has been amazing. I've worked with ML models as part of my studies and my work over the past two years, but even so, you've enriched my conceptual understanding by so much more than any of my professors could. Really appreciate your clever visualizations of what's going on "under the hood" of the ML/DL algorithms. Great videos, awesome teacher!

  • @Andy-rq6rq
    @Andy-rq6rq 2 years ago

    Amazing explanation! I was left confused after the MIT RL lecture but it finally made sense after watching this

  • @mariogalindoq
    @mariogalindoq 3 years ago +2

    Luis: congratulations! Again a very good video, very well explained and with a beautiful presentation. Thank you.

  • @geletamekonnen2323
    @geletamekonnen2323 2 years ago +1

    I can't pass without appreciating this great, great lecture. Thanks, Luis Serrano. 😍

  • @emanuelfratrik1251
    @emanuelfratrik1251 2 years ago

    Excellent explanation! Thank you!

  • @NoNTr1v1aL
    @NoNTr1v1aL 2 years ago +1

    Absolutely amazing video! You are my saviour!

  • @saphirvolvianemfogo1717
    @saphirvolvianemfogo1717 2 years ago

    Amazing explanation. Thank you, it gives me a good starting point on DRL

  • @eeerrrrzzz
    @eeerrrrzzz 2 years ago

    This video is a gem. Thank you.

  • @pellythirteen5654
    @pellythirteen5654 2 years ago +9

    Fantastic! Having watched many teachings on this subject, your explanation really made things clear.
    Now my fingers are itching to try it out and write some Delphi code. I will start with your grid world first, but if that works I want to write a chess engine. I have already written a chess program using the alpha-beta algorithm, and it will be fun to compare it with a neural-network-based one.

  • @miguelramos3424
    @miguelramos3424 1 year ago

    it's the best video that I've seen about this topic, thanks.

  • @alexandermedina4950
    @alexandermedina4950 1 year ago

    Great starting point for RL! Thank you.

  • @prakashselvakumar5867
    @prakashselvakumar5867 2 years ago

    Very well explained! Thank you

  • @TheOnlyAndreySotnikov
    @TheOnlyAndreySotnikov 9 months ago

    Great video!

  • @nishanthplays195
    @nishanthplays195 2 years ago +2

    No words sir! Finally found another great yt channel ✨

  • @overgeared
    @overgeared 2 years ago

    Excellent as always! Thank you from an MSc AI student working on DQNs.

  • @charanbirdi
    @charanbirdi 1 year ago

    Absolutely brilliant, especially the neural network and loss function explanation.

  • @flwi
    @flwi 1 year ago

    Wow - that was a very understandable explanation! Well done!

  • @code_with_om
    @code_with_om 1 year ago

    After a day of searching I found a great explanation 😀😀 thank you so much

  • @piyaamarapalamure5927
    @piyaamarapalamure5927 10 months ago

    This is the best tutorial for Q-learning so far. Thank you so much 😍😍

  • @karlbooklover
    @karlbooklover 1 year ago

    best explanation I've seen so far

  • @shreyashnadage3459
    @shreyashnadage3459 3 years ago +1

    Finally, here it is... been waiting for this for ages! Thanks Luis! Regards from India

  • @yo-sato
    @yo-sato 1 year ago

    Excellent tutorial. I have recommended it to my students.

  • @mutemoonshiner
    @mutemoonshiner 1 year ago

    Huge thanks for the nice and lucid content,
    especially for how to train the network, the loss function, and how to create the datasets.

  • @francescserratosa3284
    @francescserratosa3284 2 years ago +1

    Excellent video. Thanks a lot!!

  • @zeio-nara
    @zeio-nara 2 years ago

    An excellent explanation, thank you

  • @lebohangmbele283
    @lebohangmbele283 2 years ago

    Wow. I can show this to my pre-school nephew and at the end of the video they will understand what RL is all about. Thanks.

  • @mustafazuhair2830
    @mustafazuhair2830 2 years ago

    You have made my day, thank you!

  • @adrianfiedler3520
    @adrianfiedler3520 2 years ago

    Incredible video, I love the animations!

  • @pandharpurkar_
    @pandharpurkar_ 3 years ago +7

    Luis is a master at explaining complex things simply!! Thank you, Luis, for such great effort.

  • @zamin_graphy
    @zamin_graphy 1 year ago

    Fantastic explanation.

  • @scooby95219
    @scooby95219 2 years ago

    great explanation. thank you!

  • @beltusnkwawir2908
    @beltusnkwawir2908 2 years ago

    I love the analogy of the discount factor with the dollar depreciation
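
    A side note on that analogy (not from the video): the discounted return is G = r_0 + γ·r_1 + γ²·r_2 + ..., so a reward k steps away is worth γ^k of its face value today, exactly like money discounted over time. A tiny Python sketch with a made-up reward sequence and γ = 0.9:

    ```python
    # Discounted return: later rewards count less, like future dollars.
    gamma = 0.9
    rewards = [1, 1, 1, 10]                  # hypothetical rewards along a path
    G = sum(gamma**t * r for t, r in enumerate(rewards))
    print(round(G, 2))                       # 1 + 0.9 + 0.81 + 7.29 = 10.0
    ```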

  • @CrusadeVoyager
    @CrusadeVoyager 3 years ago +1

    Nice vid with gr8 explanation on RL.

  • @infinitamo
    @infinitamo 2 years ago

    You are a God-send. Thank you so much

  • @ahmedoreby2856
    @ahmedoreby2856 2 years ago

    Very good video with excellent elaboration of the equations. Thank you very much for this.

  • @DrMukeshBangar
    @DrMukeshBangar 2 years ago

    great video. easy explanation! thank you.

  • @kr8432
    @kr8432 3 months ago +1

    I am not stupid, but AI still does not come easy to me. Sometimes I wonder how a genius, or simply a more intelligent person (besides having more slots in working memory), thinks about this subject so that it comes more naturally to them. This video gave very good insight into how easy such a complicated topic can appear if you just have a very good intuitive understanding of abstract concepts. Very nicely done!

  • @AyaAya-fh2wx
    @AyaAya-fh2wx 1 year ago

    You are a genius!! Thank you!

  • @KathySierraVideo
    @KathySierraVideo 2 years ago

    Thank-you for this 🙏

  • @Shaunmcdonogh-shaunsurfing
    @Shaunmcdonogh-shaunsurfing 2 years ago

    Excellent video! Hoping for more on RL.

  • @ahmarhussain8720
    @ahmarhussain8720 1 year ago

    amazing explanation

  • @bostonlife8589
    @bostonlife8589 2 years ago

    Fantastic explanation!

  • @randomdotint4285
    @randomdotint4285 2 years ago

    Oh my god. This was god level teaching. How I envy your real world students.

  • @msantami
    @msantami 3 years ago

    Thanks, great video. Bought the book!

    • @SerranoAcademy
      @SerranoAcademy  3 years ago

      Great to hear, thank you! I hope you like it!

  • @sricinu
    @sricinu 2 years ago

    Excellent explanation

  • @paul-andrejacques2488
    @paul-andrejacques2488 2 years ago

    Just Fantastic. Thank you

  • @svein2330
    @svein2330 3 years ago

    This video is brilliant!

  • @sergeipetrov5572
    @sergeipetrov5572 3 years ago

    Thank you so much! Very useful!

  • @william_8844
    @william_8844 10 months ago

    WTF!!!
    I am halfway through and I am already blown away by the way you explain content. This has been the best video explaining RL so far... Wow. New sub ❤❤😅

  • @lucianoinso
    @lucianoinso 6 months ago

    Truly great video and explanation! Loved that you went deep (haha) into the details of the neural network, thanks!

    • @SerranoAcademy
      @SerranoAcademy  6 months ago

      Thanks! Lol, I see what you did there! :D

  • @AlexisKM100
    @AlexisKM100 6 months ago

    God damn it, this explanation was just straightforward, I loved it, it helped me to clarify many doubts I had, thanks :D
    Just how every explanation should be, concise and with practical examples.

  • @debobabai
    @debobabai 2 years ago

    Excellent explanation. I don't know why this video has so few views. It deserves billions of views.

  • @roshanid6523
    @roshanid6523 2 years ago +1

    Thanks for sharing

  • @bjornnorenjobb
    @bjornnorenjobb 2 years ago

    wow, extremely good video my friend! Big thanks!

  • @ishwargowda
    @ishwargowda 2 years ago

    This is perfect!!!

  • @ZirTaaah
    @ZirTaaah 1 year ago

    Best vid on the subject, for suuuuuuuure. I'm mad that I didn't see it earlier. Nice, bro!

  • @antonioriz
    @antonioriz 2 years ago

    This is simply GREAT! I would love to see more videos on the topic of Reinforcement Learning. By the way, I'm really enjoying your book Grokking Machine Learning, but I would like to know more about RL.

  • @faisaldj
    @faisaldj 2 years ago

    I wish at least my bachelor's math teacher had been like you, but I would like to be like you for my students.

  • @banaiviktor6634
    @banaiviktor6634 2 years ago

    Yes, agreed, there is no clear explanation of this topic apart from this video. Thanks a lot, it is awesome! :)

  • @Lukas-zl5zs
    @Lukas-zl5zs 2 years ago

    amazing video, good work!

  • @aliza207
    @aliza207 3 years ago

    in love with your videos😍

  • @li-pingho1441
    @li-pingho1441 7 months ago

    This is the best RL tutorial on the internet.

  • @diwakerkumar5910
    @diwakerkumar5910 10 months ago +1

    Thanks 🙏

  • @paedrufernando2351
    @paedrufernando2351 3 years ago +1

    Cool... it took so long for this vid to drop. I was expecting RL videos from your site earlier, but then I turned to Prof Oliver Siguad and completed RL there. Now I understand how DDPG works and its internals. But I definitely would want to see your take and perspective on this topic. So here I go again, watching this video on RL...

  • @mutzelmann
    @mutzelmann 2 years ago

    great job!!!

  • @wilmarariza9020
    @wilmarariza9020 3 years ago

    Excellent! Luis.

  • @joselee5377
    @joselee5377 5 months ago

    I fucking love this video. Oh my goodness... the level of satisfaction of understanding something that I struggled to grasp ;)

  • @nothing21797
    @nothing21797 1 year ago +1

    Wunderbar!!!

  • @honghaiz
    @honghaiz 5 months ago

    Nice presentation

  • @Alpacastan21m
    @Alpacastan21m 2 years ago

    Amazing.

  • @seraphiusNoctis
    @seraphiusNoctis 2 years ago +2

    Loved the video. Quick question on the policy network section, because something still seems a little “disjointed”, in the sense that the roles of the two networks do not seem clear. I might be missing something…
    I don’t understand why we would use a decreasing/recursive “gain” function instead of just using the value network to establish values for the policy. Doesn’t the value network already build in a feedback mechanism that would be well suited to this?
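
    For context, a general note on the standard methods rather than a claim about this video's design: plain REINFORCE weights each action's log-probability by the sampled discounted gain, while actor-critic methods do what this comment suggests and let the value network supply a baseline:

    ```latex
    % REINFORCE: weight each action by its sampled discounted gain
    \nabla_\theta J(\theta) = \mathbb{E}\left[ \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, G_t \right],
    \qquad G_t = \sum_{k \ge 0} \gamma^k r_{t+k}

    % Actor-critic: the value network provides a baseline, reducing variance
    % without changing the expected gradient
    \nabla_\theta J(\theta) = \mathbb{E}\left[ \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \bigl( G_t - V_\phi(s_t) \bigr) \right]
    ```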

  • @teetanrobotics5363
    @teetanrobotics5363 3 years ago

    Amazing. Could you please make a course on RL and Deep RL?

  • @baronvonbeandip
    @baronvonbeandip 5 months ago

    The title reminds me of how I got interested in learning Japanese: Namasensei would put out videos where he would get drunk and yell at you about a donkey saying 「あいうえお」and calling me a b*tch.
    That's when I knew the Japanese language community was my home.

  • @AI_Financier
    @AI_Financier 2 years ago +1

    Great video. A question: if I go for the value network, do I still need the policy network too, or vice versa? Can I reach my target with only one of them? Thanks in advance.

  • @RobertLugg
    @RobertLugg 1 year ago +2

    You are one of the best teachers around. Thank you. What if the grid is different or the end goals change location? Do you need to start training over?

    • @SerranoAcademy
      @SerranoAcademy  1 year ago +1

      Thank you! Great question. If the environment changes, in general you do have to start again. However, there may be cases in which you can piggyback on having learned the game in a simpler situation, so it depends.

  • @maqsoodshah434
    @maqsoodshah434 9 months ago

    Amazing. It would be great if someone could code the example explained; it would be a huge help for beginners, and it would further strengthen the concepts from this video.

  • @rohitchan007
    @rohitchan007 2 years ago

    This is by far the best explanation.

  • @studgaming6160
    @studgaming6160 1 year ago

    Finally, a good video on RL.

  • @ahmedshamz
    @ahmedshamz 1 month ago

    Thanks for these videos Luis. Are these from a course?

  • @AyaAya-fh2wx
    @AyaAya-fh2wx 1 year ago

    Thanks!

    • @SerranoAcademy
      @SerranoAcademy  1 year ago

      Thank you so much for your contribution Aynur! And I'm so glad you like the video! :)

  • @outtaspacetime
    @outtaspacetime 1 year ago

    1234's vote up! thanks for this great overview

  • @elimelechschreiber6937
    @elimelechschreiber6937 2 years ago +1

    Thank you.
    Question: In the last section you use the term 'gain' but actually use the 'value' function, I believe. Shouldn't the gain be the difference of the values (in your example, always positive 1 then)? The value gained by the given action?

  • @ottodgs4031
    @ottodgs4031 4 months ago

    Very nice video! When you say that the label of the new dataset is a "big increase" or a "small decrease", what is that in practice? Just the gain?
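
    One common reading (an assumption about the implementation, not a quote from the video): the "label" is realized by scaling the log-probability loss by the gain, so a large positive gain pushes the chosen action's probability up strongly and a negative gain pushes it down. A minimal numpy sketch where the probabilities, the chosen action, and the gain are all made up:

    ```python
    import numpy as np

    # Hypothetical policy output for one state: probabilities over 4 actions.
    probs = np.array([0.25, 0.25, 0.25, 0.25])
    action, gain = 3, 2.0                        # chosen action and its gain (made up)

    # Policy-gradient step: loss = -gain * log pi(action | state).
    logits = np.log(probs)
    grad = -gain * (np.eye(4)[action] - probs)   # gradient of the loss w.r.t. logits
    logits -= 0.1 * grad                         # one descent step, learning rate 0.1
    new_probs = np.exp(logits) / np.exp(logits).sum()
    print(new_probs)                             # p(action 3) increased, others decreased
    ```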

  • @jaivratsingh9966
    @jaivratsingh9966 2 years ago

    @Luis Serrano - thanks for this. Excellent!
    At 30:15, shouldn't (4,0) be -2, and hence (4,1) be -3, and so on?
    A query on policy training: if you freeze the video at 28:52 and look at the table, I see it as a random walk where you end up at a reward location and infer the values (subtracting 1 from the next value point), coming up with 3, 2, ..., -1. Why would you say one should decrease p(->) for (0,0)?
    At (0,0) (or any chosen node on the simulated path), the moves always increase the value (a better value state), so the change should never be "decrease". Also, while training the net you don't use "Change", so why are we discussing "Change" at all?
    Shouldn't it simply be that the probability of the actual step taken at each point is higher than the rest, since it points to a path leading to a reward?

  • @kafaayari
    @kafaayari 1 year ago

    Great lecture Mr. Serrano, thanks. But some parts are inconsistent and confusing. For example at 29:49, for the state (3,1) the best action is to move left, and the agent went left. However, you try to decrease its probability during training, as seen in the table. That doesn't make sense.