Reinforcement Learning: Machine Learning Meets Control Theory
- Published 17 May 2024
- Reinforcement learning is a powerful technique at the intersection of machine learning and control theory, and it is inspired by how biological systems learn to interact with their environment. In this video, we provide a high level overview of reinforcement learning, along with leading algorithms and impressive applications.
Citable link for this video: doi.org/10.52843/cassyni.x2t0sp
@eigensteve on Twitter
eigensteve.com
databookuw.com
This video was produced at the University of Washington
%%% CHAPTERS %%%
0:00 Introduction
3:34 Reinforcement Learning Overview
7:30 Mathematics of Reinforcement Learning
12:32 Markov Decision Process
13:33 Credit Assignment Problem
15:38 Optimization Techniques for RL
18:54 Examples of Reinforcement Learning
21:50 Q-Learning
23:53 Hindsight Replay
- Science & Technology
Steve is a phenomenal lecturer, isn't he?
never seen a better one
very much so
He is!
Yessss
no, he is the most phenomenal one!! Respect
*"WELCOME BACK"*
I still have no idea as to who could possibly dislike these videos
u
@@phaZZi6461 I wanted to add a comment but 69 looks so good
I wish i knew this channel at the start of quarantine
I found about the channel just as quarantine had started. It was quite the treat.
Viewing reinforcement learning as time delayed supervised learning is a really good way of looking at it.
Indeed!
Great channel! Please record more videos on the edge of reinforcement learning and control theory. Congrats on your work.
Would love a full series on how we can use RL to control real-world dynamical systems!
This sweet spot between control theory and machine learning definitely interests me, especially applied to astrodynamical systems. Please, continue making these videos, Professor Brunton!
I just cannot express how grateful I am to prof Steve Brunton for posting these videos. Waking up at 6am to watch him explain is the most satisfying thing ever. Thank you! We all are grateful.
Just wanted to comment about how much I love these videos. Last year while applying for PhDs I was searching for passions. In a discussion with my friend (a computer scientist), I accidentally outlined genetic programming without knowing it. My friend told me so and I went researching. Found these videos and became enthralled. Now I have a PhD studentship in soft robotics and plan to use SINDy to help with modelling and control and honestly think that giving machines brains may be my future work too. Thanks Brunton, my passion was helped by your own.
That is amazing to hear! Helping people develop their passions is exactly why I do this!
Never clicked a video that fast 😆. Great content prof as always love it!
The lecture was very well constructed. Well done! As an electrical engineering student trying to specialize in ML, I find that you really hit the mark when it comes to putting these tough and convoluted topics together with examples.
Wow! I would love to see Prof take on RL topics!
This is THE BEST explanation on reinforcement learning over all the articles, books, or youtube videos, that I've seen so far. Period.
I am doing research on the Model-based RL for safety-critical systems; I really enjoy doing it. These are so cool. Thanks for making videos on this topic!
Is there something you don't know, dude? You seem to be an expert on everything. You are such an inspiration.
Simply great subject and excellent presentation thank you prof for all your efforts
Every time he said "good", I felt appreciated for not giving up on a lecture whose subject is far, far away from mine, while pushing myself to try and learn the concept. Thank you, Steve... much love!
Professor, you're awesome. My thesis topic is deep-reinforcement-learning-based robotic arm torque control. I love control theory and machine learning. Thanks for your support.
Hi Professor Steve, Lovely presentation.
I just found your channel, and the content you cover is a treasure to me!
Looks like I'm not the only one working on a video early in the morning! Really cool stuff, love the doggie!!
I've been waiting for this topic for a long time; your lectures are so clear. Thanks a lot.
Explained in an understandable way and RL nicely connected to control theory!
Awesome lecture! Thanks Steve. I really enjoyed watching this!
I really love your content, please keep spoiling us!
These were the fastest 26 minutes! I learnt a lot and I'm looking forward to the python lab implementations of these concepts! Thank you very much for your work.
I have been binge-watching this channel for the past 3 hours
That's an awesome video indeed. A great introduction to RL!
phenomenal video, thank you
Dear Steve,
I'm very, very grateful that I get to watch such extraordinarily instructive videos for free!!! Thinking that elsewhere in the world people are killing others at the moment (as in Kabul), it gives me a lot of hope seeing how people like you just make the world a little better, and it almost brings tears to my eyes. You have such great talent in teaching, thank you!
Steve's lectures are a great help to society ❤
Amazing Clarity
It's really interesting to watch this video, even though I have studied and read this material a few times before; how boring it usually is elsewhere is hard to describe.
thank you teacher
You have created such high-quality content that I just really enjoy watching it instead of playing games :)))
I love how you emphasize the intersection between machine learning and control (theory). That's exactly what sparks my interest in reinforcement learning!
Glad you like it! I always found this connection fascinating and a very natural way to merge the two fields.
Very well illustrated! Thanks
The most fantastic lecture I've ever seen...
Dude, you are the best lecturer. DONE
Perfect video, I will watch all the others in one sitting 😍
It's brilliant! Keep working on this topic, please.
Thank you, professor Steve Brunton. I am pleased to inform you that after my master's degree in computer engineering, I am considering a PhD related to data-driven control theory, and part of the merit is also yours.
Great video! If everyone on YouTube delivered the way you do, we would have a lot more passion in the area. Keep up the good work, train on!
Thank you so much for this lecture. I really enjoy your videos; this is helpful as a PhD student. I also bought your book "Data-Driven Science and Engineering", which has nice explanations for the tools I use. Keep up this awesome work! Greetings from France!
Theeeere we go Steve! Waited for this :)
How very mean; I was looking forward to seeing trial 7 right away. Great explanation. Thanks.
Keep up the good work, love your videos
Hi, Steve. I've been working in fluid mechanics for 25 years or so, always using experimental and some analytical tools to approach the subject. I had a lot of colleagues migrating to CFD back in the 2000s because those methods seemed to find valid results with "little" effort compared to expensive, frustrating, and time-consuming experiments. So I always disregarded CFD as a nice tool that could predict a lot of stuff that you would never know is correct or not.
However, I have to say that for some time now, reinforced (see what I did there?) by new material that I am studying and by your papers on ML for fluid mechanics, I have been looking at the subject with new eyes. Thank you very much for your material and the dedication you put into every video.
love your lectures
That's so interesting and well explained. Thank you !
Glad you liked it!
This is really really great teaching.
Thank you, professor!
I’ve been following your content for at least 4 years now! It’s the reason I am a robotics control engineer now; you pulled me through 4th-year control systems with your conveniently-timed boot camp. Please keep up the great content!
PS are you accepting PhD students?
Your series are excellent . They have a good pace and use powerful graphics to explain difficult concepts.
I've watched many of your videos on my TV which doesn't allow me to give a thumbs up. See here it is.
I am not a Python programmer , but I am sure that those watching who DO use Python must have itchy fingers.
Amazing feeling to watch a video After completing a project on the same topic
Great lesson.. Thank you
I think it's also important to mention the distinction between discrete and continuous action spaces.
I was waiting for this!!!
I really like your explanation
Great video, thanks.
thank you very much for your lesson, it is really useful to me!
What I like is that I don't pay for this knowledge. I was planning to take a data science certificate, but you know what, let me spend 6 months learning by myself. I have spent a solid month on your videos alone, starting from the SVD, and it has been amazing. I love when a small thing builds up into a bigger thing. Soon I will make a sample project based on what I have learned from your videos.
The best one I have ever seen
Thank you ❤
As a CS grad student who took RL in the last semester ... this is truly the best refresher I have seen until now. Thanks a lot for uploading.
Great to hear!
You have a nice way of explaining the topics.
Thanks!
Kudos on the awesome lecture
I really like your videos. Keep up the good work! :)
Mr. Brunton saves me in my final review. His lectures made crystal clear those seemingly unfathomable terms. I have just watched his videos for days and I already like him!
Those bipedals are too cute; they deserve another comment.
Excellent video ❤️
top quality
This is what they said about education on the internet: that "the best teacher can teach everyone."
This is that video for this topic.
Nice lecture, sir! Thanks a lot!
We also look forward to your explanation of GANs in the future.
Hi Steve. I am an amateur mathematician (hoping to go pro) who is really into category theory. Have you or your team ever looked at this? Usually, when you see two subjects talking about the same thing, it's a good bet that category theory is working in the background. And I just looked at category theorist Tai-Danae Bradley and her explanation of SVD in terms of category theory. Thanks! AWESOME CHANNEL!
Yay! Hero has decided to teach Reinforcement Learning
Steve is one of the gifted teachers. I wish you can guide postgraduate to make a good publication in control and learning by highlighting the hot topics and promising research aspects.
Thanks so much!
I have been trying to teach my guys that machine learning and control theory (fuzzy autotuning) follow the same principle. This video will be used!
RL can be interpreted from this perspective, amazing
I've been seriously considering starting a degree in A.I./Machine learning but with videos of this quality available for free, it is hard to justify the cost. Subscribed and liked!
Just in case you read this and have time to reply... Do you have any suggestions for an education path to your level of understanding? There are degrees for data science, computer science, artificial intelligence, software engineering, etc. They all seem so inter-related. I want to know them all, but I'm struggling to pick a starting point.
My current level of related education is high-school-level advanced maths and a year of teaching myself MQL4/5 and R code, mostly from free resources online. Just so you know my starting point (or state, haha).
All of your lecture series are very good and very helpful. A series on convex optimization problems would be good. Any thoughts about it?
I love you, Steve! I am currently working on Machine Teaching and Project Bonsai. I really needed to know this.
A very well done lecture. Bravo!
I'd like to make a suggestion, if I may, to write the policy function as
pi(a | s) = Pr(A = a | S = s); A is the placeholder for an action, and a is the action taken; S is the placeholder for the state, and s is the given state.
Thank you!
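For concreteness, a policy is a conditional distribution over actions for each fixed state, pi(a | s) = Pr(A = a | S = s), so the probabilities sum to one over actions. A minimal sketch of one common way to parameterize such a policy (the softmax form and all names here are illustrative choices, not anything from the lecture):

```python
import numpy as np

def softmax_policy(q_values, temperature=1.0):
    """Turn action values for one state into a conditional
    distribution pi(a | s) via a softmax."""
    z = q_values / temperature
    z = z - z.max()          # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()       # probabilities over actions sum to 1

# One state with three candidate actions:
pi = softmax_policy(np.array([1.0, 2.0, 0.5]))
print(pi.sum())              # sums to 1 (up to float rounding)
```

Lower temperatures concentrate the distribution on the highest-valued action; higher temperatures spread probability out and encourage exploration.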
Legendary!!!!
Glad this content is on YouTube -- the past year kind of derailed me from going to grad school.
Question, could the reward structure of a chess game be broken into incremental steps? As in, the main reward is to win, but couldn't a game be discretized into incremental rewards defined by the value of a target and the probability that a sequence of moves would capture a high value target? Or, is that just Q-Learning in different words?
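On the chess question just above: giving incremental rewards for captures is usually called reward shaping, and it is separate from Q-learning itself; Q-learning is the update rule that propagates whatever reward signal you define back through earlier moves. A minimal sketch (the state/action labels and the material value of 9 for a queen are illustrative assumptions):

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max((Q.get((s_next, a2), 0.0) for a2 in actions), default=0.0)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

Q = {}
# Sparse terminal reward (0 here, since the game is not over) plus a
# shaped incremental bonus for capturing a queen (material value 9).
r = 0.0 + 9.0
q_update(Q, "s0", "Qxd8", r, "s1", ["move_a", "move_b"])
print(Q[("s0", "Qxd8")])     # 0.9, i.e. alpha * r with an empty table
```

With only the terminal win/lose reward, the same update still works; shaping just makes the credit assignment problem easier by handing out partial credit along the way.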
I thought I was witnessing a breakthrough concept trying to link deterministic control theory with machine learning. But, when you mentioned the words probability and policy, I was disappointed. Looking forward to more conceptual lectures. Could also highlight real world applications. Thanks.
amazing work as usual !! ... could you please consider doing a lecture about whole body control for robotics?
Amazing 🤩
The essence of this content is profoundly influential. A book with akin messages was transformative. "Game Theory and the Pursuit of Algorithmic Fairness" by Jack Frostwell
After watching these videos I have actually understood the concept of reinforcement learning. I might be wrong, but to me it seems RL generalizes feedback loops into the more abstract concepts of agent, action, policy, environment, and so on. In a feedback loop we have a control policy, a PID controller, that controls the behaviour of the plant it is attached to. The model of the plant is the environment here, and the action is the output of the PID controller. The reward in a feedback loop is to converge to the desired output value at steady state, ignoring the transient values, so it is in a sense semi-supervised learning. The states in the feedback loop are the derivative components of the system. In noisy systems it is sometimes crucial to remove the derivative component to avoid impulsive behaviour, which corresponds to the state feedback from environment to agent in RL. By thinking like this, RL is more meaningful to me as an engineer: RL is a generalized feedback system where we try to get a desired output given some input to the system. Thank you for this video series!!
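The feedback-loop analogy in the comment above can be made concrete: a PID controller is a deterministic policy that maps the observed tracking error (the state signal) to a control input (the action). A minimal sketch with illustrative gains and a toy first-order plant (all numbers are assumptions for the example, not from the lecture):

```python
class PIDPolicy:
    """A PID controller viewed as a deterministic policy:
    state = tracking error (plus its integral and derivative),
    action = control input u."""
    def __init__(self, kp=1.0, ki=1.0, kd=0.05, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def act(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# "Environment": a first-order plant dx/dt = -x + u, simulated by Euler steps.
policy, x, setpoint = PIDPolicy(), 0.0, 1.0
for _ in range(2000):                    # 20 seconds of simulated time
    u = policy.act(setpoint - x)         # action from the current state
    x += 0.01 * (-x + u)                 # environment transition
print(f"x = {x:.3f}")                    # settles near the setpoint of 1.0
```

The integral term is the "memory" that removes steady-state error; with proportional action alone, this plant would settle at 0.5 instead of the setpoint.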
Thanks Steve.
You are very welcome!
I wish my teachers had seen your videos before trying to teach us these subjects :)
hey Steve, love your videos! Wondering if the videos in this playlist are in the correct order?
Love the videos
Nice!
Wow! Thank you so much.
Maybe the next lecture can be about UMAP please :D?
Prof. Steve. Please make videos on Immersed Boundary Methods and how PINNs can help to solve complex problems in this area.
This guy is super smart!
Just a question: I am viewing the Control Bootcamp playlist, and it goes from controlling dynamic systems with non-minimum phase, to control theory and COVID-19, to reinforcement learning. Is this the correct order for viewing the videos? I feel there was more to talk about on the previous topics, but maybe I am wrong.
Thanks to you and your team for all your amazing videos. I finished my controls classes 3 years ago, when I found your videos, and I have been going through all your playlists and am loving it!
Is there a specific job that requires/teaches these skills? The closest job to this I found was graduate automation engineer.
Hey Steve! Loved your lecture! Could you tell me what your setup is? I love your production, setup, and content of course!
Some questions:
1. Do you have a screen/script in front of you and a green screen behind?
2. Which cam and mic do you use? Is it only a lav mic? I assume it's not shotgun since you're far away from any particular point of the frame.
3. How much time does it take to create a video like this one?
4. How many dry runs do you usually do? Or for this video in particular?
You're setting a new standard for production (and beyond haha), keep up the good work!
I'd really appreciate your answers, thank you in advance!
Thanks, glad you like it! No script, but I have a screen so I can see where I am relative to the presentation. I use a lav mic and a Canon 4K camera. I usually do everything in one run; sometimes I redo the intro a couple of times until I'm happy with it.
@@Eigensteve thanks Steve!
Hi Steve, I am a mechatronics student from Germany, currently working at a company that makes servo drives. In a few months I am going to start my master's thesis at the company, and I was asking myself if I could suggest a topic like "Building an optimal servo controller with RL to replace the cascaded motor control loop". It would be nice to hear your opinion on that, and whether something related to RL makes sense at all, because everything has to run on a microcontroller (most commonly an STM32).