Your lectures on dynamic systems, advanced solution techniques for PDEs, and control theory have been a goldmine for me. I discovered your channel 3 days ago and have watched in excess of 12 hours of your content since then. I'm a junior, and my goal this summer is to move the eigenvalues of my dynamic systems competence matrix to the right half-plane in SU(3).
Update?
Wow Steve. Your channel is incredible. I am an ME student at Purdue, and I love learning these kinds of things. I have learned so much from your channel! Thank you!
Boiler Up
Thank you for this video. Made a complex topic really easy to understand.
Thank you so much. I have been wondering what it was for a while. In 10 minutes you explained it very well. It's much easier to understand the basic idea behind the concepts from your videos than from papers/books.
Glad it helped!
I think I'm gonna re-watch this playlist in a few weeks. So many insights, and so great to follow. Thank you a hundred times.
You have presented a good and to-the-point introduction to MPC...
This is the first lecture on MPC that I have seen that actually made sense to me and allowed me to understand what MPC is. The previous ones I attended were a soup of symbols and jargon with no meaning to me. Thank you so much.
Those lectures are the epitome of math teachers teaching engineers
Finally I made it to the last lecture of this series! After some revision I will start the Data-Driven Dynamical Systems with Machine Learning series. Thank you so much Steve!
You are writing backwards, reading it backwards, and still explaining things clearly... well done and thank you.
I wonder how he does that... I think the video might be inverted but...
It's inverted and he's left handed
Once a video is finished, I think it's just flipped and posted!
Yeah, he's left-handed and the video is flipped. But really cool; I was looking this up. Apparently the model is based on how the brain makes decisions. Heavy.
You're kind of stupid, aren't you?
Best video on MPC, period.
Simple modeling, but that's the point: it's how we should understand these complex environments.
You are brave. Thanks a lot.
I have been watching your lectures since I was admitted to my master's course in autonomous vehicle engineering. After some time, I wish to do a PhD under you!
I work in MPC for autonomous cars...
Things will always be easy and smooth in simulation...
Closed loop on a real-time platform is where I'm facing the heat....😬
That is why I love control systems ..✌️
Hello, did you work with MPC in real-time?
@@hamidalshareefi1622 yes ... we have implemented the MPC technique for a use case in autonomous cars on a microcontroller (real-time target) ... the algorithm is very expensive in terms of computation, as we have to run it every 20 ms ...
This reminds me of some reinforcement learning ideas related to adjusting the input in real time. Interesting!
Yep, lots of neat connections here!
That's because RL is optimal control.
Brilliantly explained!
And I'm looking forward to your lecture on Markov decision processes.
I'm working on implementing a hybrid controller by combining the benefits of both MPC and Markov decision processes.
I hope this hybrid controller will be more efficient in terms of computation time!
So, 5 years ago this was your last video about MPC in the "Control Bootcamp" series. So unfortunately, MPC stopped here with Professor Steve! Anyway, thank you for the clear explanation.
Sir, your lectures are amazing. Kindly make a video on event-based/event-triggered PID control.
Laterally inverted writing skill. Wow!
I guess the video is inverted too.
When he writes normally on the glass screen, it looks normal to him but inverted to the camera; then he flips the video laterally so it looks normal to us.
Thank you! My prof can't explain it as well as you. Saving me a part of my exam on Friday :D
What a fantastic explanation! Thanks
Really well explained!
Thank you for sharing. However, I have a question, please. I am currently implementing an MPC to control the temperature inside a room. To model the system, I used a neural network that takes as input a window of data (disturbance_w, control_w, output_w) to predict the output over a prediction horizon. Then, I use these predictions to calculate an objective function in order to obtain the first command to apply to my system to get the first output. For this, I use scipy, but the control proposed by this library remains constant regardless of the output values (the output does not follow the reference). Do you have any advice to improve this?
Great video! I appreciate that very much.
Thank you for this amazing lecture series. Please make more videos about model predictive control.
Great video, great as always :)
However, I am missing the important point that *if* we need to take constraints into account, we can't use an LQR, since it drives an unconstrained system to the zero state; this is why we need to implement an MPC.
Thanks again for your videos!
Great video Steve, thanks!
Two questions:
- Why do we bother optimizing over the entire horizon when only the t+1 step is necessary? Unless the optimization solution is generated starting from the end of the horizon backwards.
- Does it often happen nowadays that real-time systems run their optimizations over the air on a powerful backend (cloud)? What types of systems are suitable, given the network latencies of doing so?
1: You need to know how the system will behave given different inputs over a longer time than just t+1 to optimize t+1.
2: Most systems need faster response times than a cloud backend could provide, and in cases where the optimization takes a long time anyway, there are few benefits to having the optimization off-prem.
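To make point 1 concrete, here is a minimal receding-horizon sketch in Python (a toy double-integrator model; every name and number is a made-up assumption, not anything from the video): the optimizer needs the whole horizon to choose a non-greedy first input, yet only that first input is ever applied before re-measuring.

```python
# Minimal receding-horizon MPC sketch (hypothetical toy model).
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])     # toy double-integrator dynamics
B = np.array([[0.0], [0.1]])
Q, R, N = np.eye(2), 0.01 * np.eye(1), 20  # state cost, input cost, horizon

def horizon_cost(u_seq, x0):
    """Simulate the model over the whole horizon and accumulate the cost."""
    x, cost = x0, 0.0
    for u in u_seq.reshape(N, 1):
        cost += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u                  # future cost depends on all inputs
    return cost + x @ Q @ x                # terminal cost

x = np.array([1.0, 0.0])
for k in range(50):                        # the MPC loop
    u_opt = minimize(horizon_cost, np.zeros(N), args=(x,)).x
    x = A @ x + B @ u_opt[:1]              # apply ONLY the first input, then
                                           # re-measure and re-optimize
```

If the horizon were truncated to a single step, the controller would act greedily and could steer into states that are expensive to recover from; optimizing the full horizon is what makes that first input far-sighted.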
Thanks for the lecture, professor. I have one question; could you please clarify? Around 2:45, I thought the first control we lock in, at k+1, should be applied to the system at k+1, not at k as in the above plot. At the initial time k, Xk is given, and we calculate (by optimization) the controls Uk (not useful), Uk+1, Uk+2, etc. to minimize the objective within the prediction horizon; but the optimization takes time, so it's only practical to apply Uk+1 to the system at k+1, not at k. Similarly, at k+1, Xk+1 is known, Uk+1 (calculated at k) is applied to the system, and Uk+2 is to be calculated. This process repeats.
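The scheme described above is standard delay compensation, and a tiny sketch makes it concrete (toy linear model; all names and numbers are hypothetical): since Uk was committed at the previous step, predict one step ahead with it and optimize the sequence starting at k+1, so the solve time is hidden inside the sampling interval.

```python
# Delay-compensated MPC step (hypothetical toy model).
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, N = np.eye(2), 15

def cost(u_seq, x_start):
    x, J = x_start, 0.0
    for u in u_seq:
        x = A @ x + B @ np.array([u])      # roll the model forward
        J += x @ Q @ x + 0.01 * u**2
    return J

x_k = np.array([1.0, 0.0])                 # state measured at time k
u_k = 0.0                                  # input already committed for step k
x_pred = A @ x_k + B @ np.array([u_k])     # predicted state at k+1
u_next = minimize(cost, np.zeros(N), args=(x_pred,)).x[0]
# u_next is computed during step k and applied at k+1, hiding the solve time.
```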
Thank you for the helpful explanation. But I have two questions:
1) Let's say I'm controlling the speed of a motor which is running a variable load. In the model, the load is considered a disturbance and is not included in the differential equations. So if, for example, the load is suddenly removed from the motor, the speed will increase quickly. How can this optimization compensate for such disturbances if they are not included in the model? PI controllers can compensate for that because their outputs depend on the error signal.
2) How is this optimization carried out?
Great questions. Answering in reverse order: 2) there are tons of approaches to this optimization, and I can't begin to list them here. But there are several good open source software packages out there. You can check out github.com/eurika-kaiser/SINDY-MPC for one approach. 1) This is the beauty of MPC. You are constantly taking a measurement of where the system actually is, so once it gets close enough to the set point, the MPC controller will stop pushing in this direction. I would encourage trying to code this example up, as it would be very relevant.
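Taking up that suggestion, here is one hypothetical way the motor example could be coded (first-order speed model, scipy's general-purpose optimizer; every parameter is an assumption for illustration). The load torque acts only on the true plant, never on the prediction model, yet the controller copes because each solve starts from the measured speed:

```python
# Hypothetical motor-speed MPC: the disturbance d is absent from the model.
import numpy as np
from scipy.optimize import minimize

a, b, N, w_ref = 0.95, 0.5, 10, 100.0      # made-up model parameters, horizon

def cost(u_seq, w0):
    w, J = w0, 0.0
    for u in u_seq:
        w = a * w + b * u                  # model prediction: no load term
        J += (w - w_ref)**2 + 1e-3 * u**2
    return J

w = 0.0
for k in range(200):
    d = -2.0 if k < 100 else 0.0           # load torque, suddenly removed
    u0 = minimize(cost, np.zeros(N), args=(w,)).x[0]  # first input only
    w = a * w + b * u0 + d                 # the TRUE plant includes the load
    # w stays near w_ref in both regimes, because each solve starts from the
    # measured w, which already reflects the disturbance.
```

Note that without integral action or a disturbance estimator a small steady-state offset remains, which is exactly the gap the PI comparison in the question points at; "offset-free MPC" augments the model with an estimated disturbance for this reason.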
@@Eigensteve, I am just a beginner here, but if MPC uses LQR as its optimizer, then that does contain the state term, which represents the error. So, just as you can make the system response to errors faster or slower depending on the integral gain, can't you also do that by changing the matrix Q?
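That intuition is easy to check numerically on a toy system (a minimal sketch, assuming scipy; the double integrator and the numbers are made up): scaling Q up pushes the LQR closed-loop eigenvalues further into the left half-plane, i.e. a faster, more aggressive response to state error, much like raising a classical gain.

```python
# Effect of scaling Q on the continuous-time LQR gain (toy system).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])     # double integrator
B = np.array([[0.0], [1.0]])
R = np.eye(1)

for q in (1.0, 100.0):                     # mild vs. aggressive state penalty
    P = solve_continuous_are(A, B, q * np.eye(2), R)
    K = np.linalg.inv(R) @ B.T @ P         # LQR gain, u = -K x
    print(q, np.linalg.eigvals(A - B @ K)) # eigenvalues move left as q grows
```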
Great explanation
Nicely explained. Inspiring, thank you.
Liked the lecture. Thank you sir.
Most welcome!
Great explanation! Thank you!
Isn't a subtraction block missing in your diagram, one that takes the setpoint minus the control input and gives it to the plant?
Great lecture.
I am wondering: if you are using the same computation device to measure the signal as well as to generate the optimal path, could there be a tradeoff in re-calculating the optimal trajectory at every step, or in planning too many steps ahead of time?
Steve, thank you for the great lecture series. You'll be happy to know that people are still sitting down and watching the whole series.
I've seen you answer relatively recent comments in some other video, so if you see this: since you mention a number of times where you would make your students prove one equation or another, I was wondering at what level of learning you'd place the material in this bootcamp? I'm a PhD candidate in Wind Engineering, so I have pretty much no background in control engineering, and it was easy enough to follow, but I know that this would have been incomprehensible for me just a few years ago. For software or mechanical engineers, would this material be covered in undergrad? Or are these graduate-level concepts?
For what it's worth, I'm studying for my robotics master's, and half of this stuff has been things I did in my bachelor's; half of it has been new to me.
This reminds me of adaptive control, of course with adaptive gains.
It does kind of feel like that, since the algorithm can adjust its behavior. But having a model in MPC makes it very powerful and often more responsive.
@@Eigensteve I see. I haven't learned MPC before, but my first impression is that it's just something like adaptive control.
Can we apply MPC to a model to predict its parameters online, not for a control system but just to identify the nonlinear model?
Thank you for your videos! Are there any stability guarantees when using MPC on nonlinear systems?
Could you please also upload a detailed video lecture on MPC?
Your Control Boot Camp lectures are very useful in revising and honing the fundamental concepts in Control Systems Engineering.
I would like to make a more detailed video on the actual optimization... on the list.
@@Eigensteve Thank you!
@@Eigensteve Dear Sir, waiting for this video.
@Steve Brunton Suppose we have nonlinear dynamics and have obtained a linear representation via the Koopman operator. Is it OK to apply MPC to the Koopman linear representation? If it's OK, which is more reliable: linearizing about an equilibrium point, or Koopman?
Can you use information about the deviations of the predicted output from the eventually measured output to improve the model over time, to reduce disturbances (mainly caused by model inaccuracies)?
Why do we apply only the first control input proposed by the optimizer instead of using all the controls?
You have nicely explained a complex topic.
I've designed an MPC controller for a power converter, but I don't know how to tune the controller, because variation in the load affects its performance. Could you please suggest something regarding this?
Hi Sir! Thank you for this valuable, splendid explanation. Could there be an example from you, or are there any sources to look at for implementation?
Is MPC a feedback control?
Great video Steve! Does anyone have a suggestion for creating a system model to use with MPC? I have a large amount of historical data for a mechanical vapor recompression system.
What I don't understand is why one needs output constraints, especially soft ones, since we have the set value anyway?
Learning neural weights + updating inference on the fly. I wonder if Tesla is already doing this to update their Autopilot with edge cases?!
Great video!
Thanks!
Dear professor, I am working on fuel cell control and optimization, and I am interested in MPC. How do I start? Moreover, if I simulate a fuel cell and extract input-output data for MPC, will that be a valid case? Mostly, MPC needs experimental data. I'm a little confused and need your kind help.
Does the optimal control signal have to be a step function?
Dear Dr. Steve Brunton, why didn't you demonstrate this superb explanation in MATLAB?
Thank you for this introduction! I hope there will be more videos on MPC in the future. What book do you recommend for MPC beginners?
Thanks -- I will probably do a video on "Deep MPC" (MPC with deep learning) soon. To start, I would actually recommend just downloading some code and playing around. Eurika Kaiser has a nice Github repo: github.com/eurika-kaiser/SINDY-MPC
@@Eigensteve Thank you again Professor. Can't wait for the Deep MPC video !
Great!
I want to ask,
why do I need to calculate the horizon,
if I only ever need the first point of that horizon?
What action will the MPC take when it has reached the setpoint?
Hi... can I know what the difference is between model-based control and model predictive control? And is model-based control one approach, like Ziegler-Nichols?
Thank you for the amazing video on this topic.
I am interested in adding stochasticity to nonlinear MPC, can you please refer me to some references on this topic?
Hey Steve, I have a doubt about the process. A horizon is chosen, a minimization similar to the LQR is made from a time t_0 to a time t_f, and a value of K is calculated. This K, with u = -Kx, is applied and the state is calculated. In the next iteration, the new information is added to the model, so that a new K is calculated, which may be different from the previous one. Again, this u = -Kx control law is implemented. Is that the logic of the process? In general, is it like using the LQR for each iteration? So, if nothing in the model changes, would we have an equal K in each iteration, and would the process be nothing more than an LQR?
Good question, my friend
@@lucianomaia9460 I never got an answer, hehe
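For what it's worth, the logic in the question can be checked directly (a minimal sketch with an assumed discrete-time toy model): the finite-horizon LQR gain comes from a backward Riccati sweep, and with a fixed model the first gain is identical at every MPC iteration, so unconstrained linear MPC collapses to a constant-gain, receding-horizon LQR. Constraints or online model updates are what make re-solving at each step worthwhile.

```python
# Receding-horizon LQR: with a fixed model, the first gain never changes.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])     # assumed toy model
B = np.array([[0.0], [0.1]])
Q, R, N = np.eye(2), np.eye(1), 30

def first_gain():
    P = Q.copy()                            # terminal cost P_N = Q
    for _ in range(N):                      # sweep backward from the horizon
        K = np.linalg.inv(R + B.T @ P @ B) @ (B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return K                                # gain applied at the current step

K1, K2 = first_gain(), first_gain()         # two successive MPC "iterations"
print(np.allclose(K1, K2))                  # True: identical K every time
```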
Hello professor! Could you explain invariant sets and maximal control invariant sets? I'm having trouble grasping the concept.
Sir, which paper have you referenced in this?
I don't understand. Isn't k equal to time t? What does k+1 imply? Is it the shift of the time horizon?
Is MPC a numerical command?
Thanks a lot for this video. Really easy to understand, and looking forward to future videos. Would you be making any videos on deep learning for MPCs or differentiable MPCs?
Sir, can you please make more lectures on model predictive control? There are few lectures available, and it's a new topic for research. Please, we need good content with MATLAB examples.
thanks
I love the backward-writing skills!
Probably he's looking into a mirrored blackboard.
I think they mirror the video after recording it.
the "set goal" unfortunately IS to have "control" over Time. that's where our problems begin. but denying it won't change the equation.
Thanks for this video! Could you perhaps do a video with a MATLAB example of applying MPC to a linear model from system identification?
In practice, we usually only have measurement data from a system, so both A and B are unknown in our model. It would be great to do a full example considering this.
Thanks for your work !
Great suggestion! Check out this code by Eurika Kaiser: github.com/eurika-kaiser/SINDY-MPC
Has anyone noticed that Steve draws mirror images of everything he's explaining, or am I wrong? Just curious, because I think it's amazing!!
No, when the video is being recorded, he writes it the "right way". Then he mirrors the video.
I really had to think about it for a minute lol
Where is the continuation of this video?
Okay, now where do you use it?
Very good video! Thanks!!!
Bruh. How do I like this video twice? For anyone who didn't realize, he's writing backwards.
All these comments and none of them mention how well he writes backwards.
Maybe the video is flipped?
Is this guy writing backwards??
Shrill sounds when you write
get some new pens
Nice backward writing skills. MATLAB had a video on this, and it is more detailed.