Hands down, this explanation of reinforcement learning is like winning a dance-off against a robot: smooth, on point, and utterly unbeatable!
Thanks! Lol, I love it!
This is probably the friendliest video that still covers all the important concepts of RL, thank you
Absolutely awesome explanation! I've been struggling to learn this concept because other tutorials were focusing on the wrong aspects of Q-Learning and didn't get the message across. Yours, on the other hand, did an excellent job by starting with the interpretation of the Bellman Equation and giving an intuitive visual explanation! Wonderful tutorial
I've been binge-watching your "friendly intro to" series since yesterday and it has been amazing. I've worked with ML models as part of my studies and my work over the past two years, but even so, you've enriched my conceptual understanding by so much more than any of my professors could. Really appreciate your clever visualizations of what's going on "under the hood" of the ML/DL algos. Great videos, awesome teacher!
Thank you, so happy to hear you’re enjoying the series! :)
Fantastic as always! Whenever I want to learn a new concept in AI, I always start with Luis's video(s) on that. Thank you so much, Luis!
This was an amazing introduction to the topic! There are still some things I could not understand, but the way you explained everything using simple examples and terms made a big difference. Thanks!
I feel super fortunate to have come across your channel. You are doing an incredible job! Just incredible!
One of the best videos on RL algorithms. Kudos to Dr. Serrano.
Your teaching style and process are so good. I didn't get distracted through the whole video. Thank you, sir, for teaching such valuable things in such a way.
Thanks for such a great video! Your visual descriptions combined with your explanations really presented a wonderful conceptual understanding of Deep-RL fundamentals.
This is the simplest explanation I have seen on RL! 😍
No words, sir! Finally found another great YT channel ✨
Just subscribed to this channel after watching this video. Wonderful explanations combined with excellent visuals. Had difficulty in understanding RL, your video made me understand it better. Thank you.
Luis is a master at explaining complex things simply! Thank you, Luis, for such great efforts.
I'd like to thank the creators for this video. This is the best video to learn the basics of RL. Helped a lot in my learning path.
This is a perfect introduction. It goes from the specific to the general.
Wow. I can show this to my pre-school nephew and at the end of the video they will understand what RL is all about. Thanks.
Wonderful explanation. I think it's by far the best I have seen!
I can't pass without appreciating this great, great lecture. Thanks, Luis Serrano. 😍
This is by far the best video explaining DRL, many thanks!
Excellent explanation. I don't know why this video has so few views. It deserves a billion views.
One of the clearest explanations of the topic that I saw. Excellent!
Oh my god. This was god level teaching. How I envy your real world students.
This is the best tutorial so far for Q-learning. Thank you so much 😍😍
The wait ends! Thank you sir!
Luis, great video. Thanks for putting this together explaining the most important concepts and terms in Reinforcement Learning.
I love the analogy of the discount factor with the dollar depreciation
Excellent tutorial. I have recommended it to my students.
After a day of searching I found a great explanation 😀😀 thank you so much
Finally here it is....been waiting for this for ages! Thanks Luis! Regards from India
Luis: congratulations! Again a very good video, very well explained and with a beautiful presentation. Thank you.
WTF!!!
Like, I'm halfway through and I'm already blown away by the way you explain content. This has been the best video so far explaining RL..... Wow. New sub❤❤😅
Amazing work fellow Luis! Looking forward to more of your videos
Absolutely brilliant, especially the neural network and loss function explanation
it's the best video that I've seen about this topic, thanks.
you're one of the best teachers I have ever come across!
Absolutely amazing video! You are my saviour!
best explanation I've seen so far
Amazing explanation! I was left confused after the MIT RL lecture but it finally made sense after watching this
Excellent as always! Thank you from an MSc AI student working on DQNs.
Fantastic! Having watched many teachings on this subject, your explanation really made things clear.
Now my fingers are itching to try it out and write some Delphi code. I will start with your grid-world first, but if that works I want to write a chess engine. I have already written a chess program using the alpha-beta algorithm and it will be fun to compare it with a neural-network-based one.
A great introduction! thank you sincerely for this great gem!
Huge thanks for the nice and lucid content,
especially for how to train the network, the loss function, and how to create datasets.
Thanks
Wow, thank you so much for your kindness and generosity, @alexvass!
God damn it, this explanation was just straightforward. I loved it, and it helped me clarify many doubts I had, thanks :D
Just how every explanation should be: concise and with practical examples.
I am not stupid, but AI still does not come easily to me. Sometimes I wonder, besides having more slots in working memory, how a genius or simply more intelligent people think about this subject so that it comes more naturally to them. I feel like this video was a very good insight into how easy such a complicated topic can appear if you just have a very good intuitive understanding of abstract concepts. Very nicely done!
I wish at least my bachelor's math teacher had been like you, but I would like to be like you for my students.
So comprehensive! Thank you!
Thanks!
Thank you so much for your contribution Aynur! And I'm so glad you like the video! :)
Very intuitive as always.
Expecting some more intuition on semi-supervised learning and energy models.
Thank you so much sir!!🙏
Thank you for the wonderful video. Please add more practical example videos for the application of reinforcement learning.
Thank you! Definitely! Here's a playlist of applications of RL to training large language models. ua-cam.com/play/PLs8w1Cdi-zvYviYYw_V3qe6SINReGF5M-.html
Amazing explanation. Thank you, it gives me a good starting point on DRL
Great starting point for RL! Thank you.
Truly great video and explanation! Loved that you went deep (haha) into the details of the neural network, thanks!
Thanks! Lol, I see what you did there! :D
Very good video; it makes very clear what deep reinforcement learning is, from the ground up.
This video is a gem. Thank you.
Wow - that was a very understandable explanation! Well done!
Very good video with excellent elaboration of the equation. Thank you very much for this.
Nice vid with gr8 explanation on RL.
You are a God-send. Thank you so much
Best vids on the subject, for suuuuuuuure. I'm mad that I didn't see it earlier. Nice, bro!
Incredible video, I love the animations!
U rock dude! U just earned a new subscriber
Excellent video. Thanks a lot!!
Yes, agreed, there's no clear explanation of this topic apart from this video. Thanks a lot, it is awesome! :)
Thanks 🙏
Excellent video! Hoping for more on RL.
wow, extremely good video my friend! Big thanks!
This is the best RL tutorial on the internet.
Cool... it took so long for this vid to drop. I was expecting RL videos from you earlier, but then I turned to Prof. Olivier Sigaud and completed RL there. Now I understand how DDPG works, and its internals. But I would definitely want to see your take and perspective on this topic. So here I go again, watching this video on RL...
Excellent explanation! Thank you!
Fantastic explanation.
Very well explained, you deserve a like :)
You have made my day, thank you!
This is by far the best explanation.
An excellent explanation, thank you
Wonderful!!!
Thanks, great video. Bought the book!
Great to hear, thank you! I hope you like it!
Very well explained! Thank you
I fucking love this video. Oh my goodness... the level of satisfaction of understanding something that I struggled to grasp ;)
You are a genius!! Thank you!
great video. easy explanation! thank you.
Great explanation
Excellent explanation
Fantastic explanation!
I don't understand how to train the NN at 34:09: what are the features and what is the label?
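For anyone stuck on the same question: here's a minimal sketch of one way that training step could work, under my reading of the video (not its actual code). The features are the coordinates of a grid state, and the label is that state's current value estimate from the Bellman update, so the network is doing plain regression toward the value function. All states, target values, and network sizes below are made up for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical training pairs: each feature row is a (row, col) grid state,
# each label is that state's Bellman-updated value estimate.
states = torch.tensor([[0.0, 0.0], [3.0, 1.0], [4.0, 2.0]])  # features
targets = torch.tensor([[1.0], [3.0], [-1.0]])               # labels (assumed values)

# Small network mapping a state to a single predicted value.
value_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(value_net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # squared error between predicted and target values

for _ in range(200):  # plain regression loop
    optimizer.zero_grad()
    loss = loss_fn(value_net(states), targets)
    loss.backward()
    optimizer.step()
```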
great explanation. thank you!
Excellent! Luis.
Gracias Wilmar!
amazing explanation
Finally good video on RL
Loved the video, quick question on the policy network section, because something still seems a little “disjointed” in the sense that the roles for both networks do not seem to be clear - I might be missing something…
I don't understand why we would use a decreasing/recursive "gain" function instead of just using the value network to establish values for the policy. Doesn't the value network already build in a feedback mechanism that would be well suited to this?
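One way to see the distinction, sketched below under the assumption that the video's "gain" is the discounted return actually observed along one simulated path: the gain comes from rewards the agent really collected, so the policy update is grounded in experience rather than in the value network's own current estimates, which can be badly wrong early in training. (Actor-critic methods do combine the two, using the value network as a baseline, for roughly the reason you suggest.) The rewards and discount factor here are hypothetical.

```python
# Discounted return G_t = r_t + gamma * G_{t+1}, computed backwards
# over one episode's reward sequence.
def gains_from_rewards(rewards, gamma=0.9):
    """Walk the episode backwards, accumulating the discounted return."""
    gains, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        gains.append(running)
    return list(reversed(gains))

# A hypothetical path with a -1 cost per step ending in a +4 terminal reward:
print(gains_from_rewards([-1, -1, -1, 4]))  # ≈ [0.21, 1.34, 2.6, 4.0]
```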
This is simply GREAT! I would love to see more videos on the topic of Reinforcement Learning. By the way, I'm really enjoying your book Grokking Machine Learning, but I would like to know more about RL
Thanks for these videos Luis. Are these from a course?
Thanks for sharing
amazing video, good work!
Just Fantastic. Thank you
You are one of the best teachers around. Thank you. What if the grid is different or the end goals change location? Do you need to start training over?
Thank you! Great question. If the environment changes, in general you do have to start again. However, there may be cases in which you can piggyback on having learned the game in a simpler situation, so it depends.
@Luis Serrano - thanks for this. Excellent!
At 30:15, shouldn't (4,0) be -2, and hence (4,1) be -3, and so on?
A query on policy training: if you freeze the video at 28:52 and look at the table, I see it as a random walk where you end up at a reward location and kind of infer the value (subtracting 1) from the next value point, coming up with 3, 2, ..., -1. Why would you say one should decrease
p(->) for (0,0)?
At (0,0) (or any chosen node on the simulated path), the moves always increase the value (a better value state), so the change should never be "decrease". Also, while training the net you don't use "Change", so why are we discussing "Change" at all?
Shouldn't it simply be that the probability of the actual step, at each step, is higher than the rest, since it points to a path leading to a reward?
Totally agree. I also feel confused about this point
Great lecture, Mr. Serrano, thanks. But some parts are inconsistent and confusing. For example, at 29:49, for the state (3,1) the best action is to move left, and the agent went left. However, you try to decrease its probability during training, as seen in the table. That doesn't make sense.
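For both of the threads above, a minimal REINFORCE-style sketch may help; this is my reading of the video's policy-training step, not its actual code. The direction of the update is decided by the sign of the gain for the step that was taken: a positive gain nudges the taken action's probability up, a negative gain nudges it down, regardless of whether the move looked best at the time. The state, action index, gain, and network shape below are made up.

```python
import torch
import torch.nn as nn

# Policy network: maps a (row, col) state to logits over 4 actions.
policy_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 4))

state = torch.tensor([3.0, 1.0])  # hypothetical grid state (3,1)
action = 2                        # index of the action that was taken
gain = -1.0                       # assumed negative gain for that step

log_probs = torch.log_softmax(policy_net(state), dim=-1)
loss = -gain * log_probs[action]  # negative gain => gradient pushes p(action) down

optimizer = torch.optim.SGD(policy_net.parameters(), lr=1e-2)
optimizer.zero_grad()
loss.backward()
optimizer.step()  # one policy-gradient step on this single transition
```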
Thank you for this 🙏