Hands down, this explanation of reinforcement learning is like winning a dance-off against a robot: smooth, on point, and utterly unbeatable!
Thanks! Lol, I love it!
Absolutely awesome explanation! I've been struggling to learn this concept because other tutorials were focusing on the wrong aspects of Q-learning and didn't get the message across. Yours, on the other hand, did an excellent job by starting with the interpretation of the Bellman equation and giving an intuitive visual explanation! Wonderful tutorial.
I've been binge-watching your "friendly intro to" series since yesterday and it has been amazing. I've worked with ML models as part of my studies and my work over the past two years, but even so, you've enriched my conceptual understanding by so much more than any of my professors could. I really appreciate your clever visualizations of what's going on "under the hood" of the ML/DL algorithms. Great videos, awesome teacher!
Thank you, so happy to hear you’re enjoying the series! :)
Yeah
Yes @ROC-d8c
This is probably the friendliest video that still covers all the important concepts of RL, thank you.
Fantastic as always! Whenever I want to learn a new concept in AI, I always start with Luis's video(s) on that. Thank you so much, Luis!
Your teaching style and process are so good. I didn't get distracted through the whole video. Thank you, sir, for teaching such valuable things in such a way.
I feel super fortunate to have come across your channel. You are doing an incredible job! Just incredible!
One of the best videos on RL algorithms. Kudos to Dr. Serrano.
Just subscribed to this channel after watching this video. Wonderful explanations combined with excellent visuals. I had difficulty understanding RL; your video made me understand it better. Thank you.
Thanks for such a great video! Your visual descriptions combined with your explanations really presented a wonderful conceptual understanding of Deep-RL fundamentals.
This is the simplest explanation I have seen on RL! 😍
No words sir! Finally found another great yt channel ✨
Luis is a master at explaining complex things simply! Thank you, Luis, for such great efforts.
I'd like to thank the creators for this video. This is the best video to learn the basics of RL. Helped a lot in my learning path.
The wait ends! Thank you sir!
Excellent explanation. I don't know why this video has so few views. It deserves billions of views.
Wow. I can show this to my pre-school nephew and at the end of the video they will understand what RL is all about. Thanks.
I can't pass without appreciating this great, great lecture. Thanks, Luis Serrano. 😍
Oh my god. This was god level teaching. How I envy your real world students.
This is a perfect introduction. It goes from the specific to the general.
This is by far the best video explaining DRL, many thanks!
Wonderful explanation. I think it's by far the best I have seen!
Finally here it is....been waiting for this for ages! Thanks Luis! Regards from India
Excellent tutorial. I have recommended it to my students.
This is the best tutorial so far for Q-learning. Thank you so much 😍😍
Luis: congratulations! Again a very good video, very well explained and with a beautiful presentation. Thank you.
One of the clearest explanations of the topic that I saw. Excellent!
After a day of searching I found a great explanation 😀😀 thank you so much
WTF!!!
I am halfway through and I am already blown away by the way you explain the content. This has been the best video so far explaining RL... Wow. New sub ❤❤😅
I love the analogy of the discount factor with the dollar depreciation
it's the best video that I've seen about this topic, thanks.
Absolutely amazing video! You are my saviour!
I am not stupid, but AI still does not come easily to me. Sometimes I wonder, besides having more slots in working memory, how a genius or simply a more intelligent person thinks about this subject so that it comes more naturally to them. I feel like this video was a very good insight into how easy such a complicated topic can appear if you just have a very good intuitive understanding of abstract concepts. Very nicely done!
Luis, great video. Thanks for putting this together explaining the most important concepts and terms in Reinforcement Learning.
Thank you for the wonderful video. Please add more practical example videos for the application of reinforcement learning.
Thank you! Definitely! Here's a playlist of applications of RL to training large language models. ua-cam.com/play/PLs8w1Cdi-zvYviYYw_V3qe6SINReGF5M-.html
Excellent as always! Thank you from an MSc AI student working on DQNs.
Absolutely brilliant, especially the neural network and loss function explanation.
best explanation I've seen so far
Amazing work, Luis! Looking forward to more of your videos.
Thanks!
Thank you so much for your contribution Aynur! And I'm so glad you like the video! :)
Huge thanks for such nice and lucid content,
especially for how to train the network, the loss function, and how to create the datasets.
I wish at least my bachelor's math teacher had been like you, but I would like to be like you for my students.
you're one of the best teachers I have ever come across!
Thanks
Wow, thank you so much for your kindness and generosity, @alexvass!
@Luis Serrano - thanks for this. Excellent!
At 30:15, shouldn't (4,0) be -2, and hence (4,1) be -3, and so on?
A query on policy training: if you freeze the video at 28:52 and look at the table, I see it as a random walk where you end up at a reward location and kind of infer the value (subtracting 1) from the next value point, coming up with 3, 2, ..., -1. Why would you say one should decrease
p(->) for (0,0)?
At (0,0) (or any chosen node on the simulated path), the moves always increase the value (a better value state), so the change should never be "decrease". Also, while training the net you don't use "Change". Then why are we discussing "Change" at all?
Shouldn't it simply be that the probability of the step actually taken should be higher than the rest at each step, since it points to a path leading to a reward?
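For anyone puzzling over the table at 28:52, here is a minimal sketch (my own, not from the video; the function name and the terminal reward of 4 are assumptions chosen to match the 3, 2, ..., -1 mentioned above) of how the per-step gains along one sampled path could be computed, subtracting 1 for every extra step away from the reward.

def gains_along_path(terminal_reward, num_steps):
    # Work backwards from the square where the reward was collected,
    # subtracting 1 for each step it took to get there.
    gains = []
    g = terminal_reward
    for _ in range(num_steps):
        g -= 1               # each extra step costs 1
        gains.append(g)
    gains.reverse()          # list the first step of the path first
    return gains

# A 5-step path ending on a +4 square gives gains -1, 0, 1, 2, 3,
# i.e. the 3, 2, ..., -1 of the table read from the reward backwards.
print(gains_along_path(terminal_reward=4, num_steps=5))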
A great introduction! thank you sincerely for this great gem!
Fantastic! Having watched many tutorials on this subject, your explanation really made things clear.
Now my fingers are itching to try it out and write some Delphi code. I will start with your grid world first, but if that works I want to write a chess engine. I have already written a chess program using the alpha-beta algorithm, and it will be fun to compare it with a neural-network-based one.
You are a God-send. Thank you so much
God damn it, this explanation was just straightforward, I loved it, it helped me to clarify many doubts I had, thanks :D
Just how every explanation should be, concise and with practical examples.
This video is a gem. Thank you.
So comprehensive! Thank you!
Very good video; it's very clear what deep reinforcement learning is, from the ground up.
Great starting point for RL! Thank you.
Amazing explanation. Thank you, it gives me a good starting point on DRL
Amazing explanation! I was left confused after the MIT RL lecture but it finally made sense after watching this
Great lecture, Mr. Serrano, thanks. But some parts are inconsistent and confusing. For example, at 29:49, for the state (3,1) the best action is to move left, and the agent went left. However, you try to decrease its probability during training, as seen in the table. That doesn't make sense.
Loved the video, quick question on the policy network section, because something still seems a little “disjointed” in the sense that the roles of the two networks do not seem to be clear - I might be missing something…
I don’t understand why we would use a decreasing/recursive “gain” function instead of just using the value network for the purpose of establishing values for the policy. Doesn’t the value network already build in a feedback mechanism that would be well suited to this?
Truly great video and explanation! Loved that you went deep (haha) into the details of the neural network, thanks!
Thanks! Lol, I see what you did there! :D
Very good video with an excellent elaboration of the equation. Thank you very much for this.
I don't understand how to train the NN at 34:09: what are the features and what is the label?
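My own reading of the table around 34:09, not the video's code (the variable names and the second row are made up): the feature is the state, the "label" is the action that was actually taken, and the gain acts as a weight whose sign decides whether that action's probability gets increased or decreased.

ACTIONS = ["up", "down", "left", "right"]

# (state, action taken, gain) triples collected from simulated paths.
# The (3,1)/left/-3 row matches the table discussed in the comments;
# the second row is hypothetical.
experience = [
    ((3, 1), "left", -3),
    ((5, 4), "right", 4),
]

dataset = []
for state, action, gain in experience:
    features = state                  # e.g. the grid coordinates
    label = ACTIONS.index(action)     # which of the four outputs to adjust
    weight = gain                     # positive: increase, negative: decrease
    dataset.append((features, label, weight))

print(dataset)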
This is the best RL tutorial on the internet.
You are one of the best teachers around. Thank you. What if the grid is different or the end goals change location? Do you need to start training over?
Thank you! Great question. If the environment changes, in general you do have to start again. However, there may be cases in which you can piggyback on having learned the game in a simpler situation, so it depends.
Very intuitive as always.
Expecting some more intuition on semi-supervised learning and energy models.
Thank you so much sir!!🙏
Nice vid with a great explanation of RL.
At 34:17, the result shown says that at point (3,1), with gain -3 and direction Left, the action should be penalized and its weight decreased, but that is actually the most efficient move at that point. How can we reconcile that?
Excellent video. Thanks a lot!!
Cool... it took so long for this video to drop. I was expecting RL videos from your site earlier, but then I turned to Prof. Olivier Sigaud and completed RL there. Now I understand how DDPG works and its internals. But I definitely want to see your take and perspective on this topic, so here I go again to watch this video on RL...
Yes, agreed, there's no clear explanation of this topic apart from this video. Thanks a lot, it is awesome! :)
Incredible video, I love the animations!
Fantastic explanation.
Best vid on the subject for suuuuuuuure, I'm mad that I didn't see it earlier. Nice, bro!
U rock dude! U just earned a new subscriber
You have made my day, thank you!
Thank you.
Question: In the last section you use the term 'gain' but actually use the 'value' function, I believe. Shouldn't the gain be the difference of the value (in your example, always positive 1 then)? The gained value associated with the given action?
wow, extremely good video my friend! Big thanks!
Wow - that was a very understandable explanation! Well done!
Hello. At 22:40, after having used an initial random value of 0.2 for the state with coordinates (2,3), how did you find the values of the neighboring states (4.9, 3.2, 1.3, -2.7) the first time? Were those also random?
Great question! Yes, I picked these numbers randomly. The point is that these may be values that a large neural network would output, and I tried to make them really wrong so that we can see the neural network is not well trained and we need a loss function that notices this.
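To make the loss concrete, here is a minimal sketch of the kind of consistency check being discussed, using the numbers from the question above. The step reward of -1 and the omission of the discount factor are my simplifying assumptions, not necessarily the video's exact formula.

current_value   = 0.2                      # the network's guess for state (2,3)
neighbor_values = [4.9, 3.2, 1.3, -2.7]    # the network's guesses for its neighbors

bellman_target = max(neighbor_values) - 1  # best neighbor minus the step cost
loss = (current_value - bellman_target) ** 2

print(bellman_target)  # 3.9
print(loss)            # (0.2 - 3.9)^2, about 13.69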
Very nice video! When you say that the label of the new dataset is a "big increase" or a "small decrease", what is that in practice? Just the gain?
You are a genius!! Thank you!
Hi, I am having a hard time understanding how at 29:49 the change is "decrease" for the bottom three rows. For example, the 7th row with gain -2 has "decrease" as the change, but we actually increased by 1. Could someone elaborate on this?
I have a question about training the policy neural network.
Why are we calling it a gain? And if it is a gain, it's a gain relative to which reference position?
Because if you take a sufficiently long path, the starting position will keep getting large negative values, and that position might become an undesirable position to go to.
But if that position lies on the optimal path to the best possible outcome, that could be a problem, because of the negative value associated with it as a result of the training.
Example: if a path from (5,4) goes left until it hits the boundary, goes up, and goes right until it hits the +5, the (5,4) location gets a -6 gain, which would make (5,4) undesirable even though it's 1 step away from +5.
Are we saying that the probability of such a path occurring will be disproportionately compensated by the probability of more conducive paths occurring that weight location (5,4) much more positively?
Also, this man probably has the clearest understanding of these concepts, to the extent that he can explain them so clearly and lucidly. Hands down, the best explanation for beginners and intermediates. Excellent work! 👏🏽👏🏽
-6 would be the gain of the policy move, i.e., moving left from (5,4), which is indeed an undesirable move. It is not a value of the cell, but of the direction of movement.
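To illustrate the point in this reply, here is a tiny sketch (my own illustration, not the video's code; the +4 gains for the good move are an assumption consistent with the subtract-1-per-step convention) of why one bad sample doesn't doom (5,4). Each sampled move nudges the probability of the action it took in proportion to its gain, so frequent, high-gain moves win out over time.

import numpy as np

ACTIONS = ["up", "down", "left", "right"]   # the four moves from state (5,4)
logits = np.zeros(4)                        # policy starts uniform
samples = [("left", -6), ("right", 4), ("right", 4)]   # (action taken, gain)
lr = 0.1

for action, gain in samples:
    probs = np.exp(logits) / np.exp(logits).sum()
    grad = -probs
    grad[ACTIONS.index(action)] += 1.0      # gradient of log p(action) w.r.t. logits
    logits += lr * gain * grad              # positive gain: increase, negative: decrease

print(np.exp(logits) / np.exp(logits).sum())   # "right" ends up most likely, "left" least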
This is by far the best explanation.
Excellent video! Hoping for more on RL.
An excellent explanation, thank you
Excellent explanation! Thank you!
Thanks 🙏
Finally, a good video on RL.
Great video. A question: if I go for the value network, do I still need the policy network too, or vice versa? Because by having only one of them, can I get to my target? Thanks in advance.
Thanks for sharing
Thanks, great video. Bought the book!
Great to hear, thank you! I hope you like it!
Great explanation
Wonderful!!!
Excellent explanation
amazing explanation
This is simply GREAT! I would love to see more videos on the topic of reinforcement learning. By the way, I'm really enjoying your book Grokking Machine Learning, but I would like to know more about RL.
Just Fantastic. Thank you
Very well explained! Thank you
Excellent! Luis.
Thanks, Wilmar!
Thanks for these videos Luis. Are these from a course?
Fantastic explanation!