Model Predictive Control

  • Published 25 Dec 2024

COMMENTS • 118

  • @danielhoven570
    @danielhoven570 4 years ago +80

    Your lectures on dynamic systems, advanced solution techniques for PDE's, and control theory have been a goldmine for me, I discovered your channel 3 days ago and have watched in excess of 12 hours of your content since then. I'm a junior, and my goal this summer is to move the eigenvalues of my dynamic systems competence matrix to the Right half plane in SU3

  • @evanparshall1323
    @evanparshall1323 3 years ago +10

    Wow Steve. Your channel is incredible. I am a ME student at Purdue, and I love learning these kinds of things. I have learned so much from your channel! Thank you!

  • @niveditajha303
    @niveditajha303 4 years ago +22

    Thank you for this video. Made a complex topic really easy to understand.

  • @OlayoReynaud
    @OlayoReynaud 1 year ago

    Thank you so much. I have been wondering what it was for a while. In 10 minutes you explained it very well. It's much easier to understand the basic idea behind the concepts from your videos than from papers/books.

  • @LK-pw7fm
    @LK-pw7fm 8 months ago +1

    I think I'm gonna re-watch this playlist in a few weeks. So many insights, and so great to follow. Thank you a hundred times.

  • @yasir9909
    @yasir9909 4 years ago +12

    You have presented a good and to-the-point introduction of MPC...

  • @makita3680
    @makita3680 3 years ago

    This is the first lecture on MPC that I have seen that actually made sense to me and allowed me to understand what MPC is. The previous ones I attended were a soup of symbols and jargon with no meaning to me. Thank you so much.

    • @andresmiguelez9998
      @andresmiguelez9998 2 years ago

      Those lectures are the epitome of math teachers teaching engineers

  • @ahmedkamiss7643
    @ahmedkamiss7643 3 years ago +1

    Finally I made it to the last lecture of this series! After some revision I will start the Data-Driven Dynamical Systems with Machine Learning series. Thank you so much Steve!

  • @jamesjakeway9839
    @jamesjakeway9839 4 years ago +6

    You are writing backwards, reading it backwards, and still explaining things clearly... well done and thank you.

    • @diegoosorio3491
      @diegoosorio3491 3 years ago +2

      I wonder how he does that... I think the video might be inverted but...

    • @xandermasotto7541
      @xandermasotto7541 3 years ago +6

      It's inverted and he's left handed

    • @Nike24990
      @Nike24990 3 years ago +2

      Once a video is finished, I think it's just flipped and posted!

    • @DWALLE1
      @DWALLE1 5 months ago +1

      Yeah, he's left-handed and the video is flipped. But really cool; I was looking this up. Apparently the model is based on how the brain makes decisions. Heavy.

    • @hydropage2855
      @hydropage2855 3 months ago

      You’re kind of stupid, aren’t you?

  • @PeekPost
    @PeekPost 3 years ago

    Best video on MPC, period.

  • @seungnamkim8514
    @seungnamkim8514 4 years ago

    Simple modeling, but it gets at the point of what we should understand about these complex environments.
    You are brave. Thanks a lot.

  • @Nike24990
    @Nike24990 3 years ago +1

    I have been watching your lectures since I was admitted to my master's course in autonomous vehicles engineering. After some time, I wish to do a PhD under you!

  • @shivashanker7750
    @shivashanker7750 3 years ago +1

    I work on MPC for autonomous cars...
    Things are always easy and smooth in simulation...
    Closed loop on a real-time platform is where I'm facing the heat....😬
    That is why I love control systems ..✌️

    • @hamidalshareefi1622
      @hamidalshareefi1622 3 years ago

      Hello, did you work with MPC in real-time?

    • @shivashanker7750
      @shivashanker7750 3 years ago

      @@hamidalshareefi1622 yes ... we have implemented an MPC technique for a use case in autonomous cars on a microcontroller (real-time target) ... the algorithm is very expensive in terms of computation, as we have to run it every 20 ms ...

  • @testxy5555
    @testxy5555 4 years ago +3

    This reminds me of some reinforcement learning ideas related to adjusting the input in real time. Interesting!

  • @Ramakrishna-pw2gz
    @Ramakrishna-pw2gz 3 years ago +3

    Brilliantly explained!
    And I'm looking forward to your lecture on the Markov decision process.
    I'm working on implementing a hybrid controller by combining the benefits of both MPC and Markov decision processes.
    I hope this hybrid controller will be more efficient in terms of computational time!

  • @KhalidA-b2k
    @KhalidA-b2k 1 year ago

    So, 5 years ago this was your last video about MPC in the "Control Bootcamp" series. Unfortunately, MPC stopped here with Professor Steve! Anyway, thank you for the clear explanation.

  • @Bikas-s3t
    @Bikas-s3t 1 month ago

    Sir, your lectures are amazing. Kindly make a video on event-based / event-triggered PID control.

  • @nizamuddinkhalid2956
    @nizamuddinkhalid2956 4 years ago +5

    Laterally inverted writing skill. Wow!

    • @jianzhegu2260
      @jianzhegu2260 3 years ago +1

      I guess the video is inverted too.

    • @senceryazici
      @senceryazici 3 years ago +1

      When he writes normally on the glass screen, it looks normal to him but inverted to the camera; then he flips the video laterally so it looks normal to us.

  • @marcuss.6273
    @marcuss.6273 4 years ago

    Thank you! My prof can't explain it as well as you. Saving a part of my exam on Friday :D

  • @nathanielwenner7041
    @nathanielwenner7041 2 years ago

    What a fantastic explanation! Thanks

  • @lazlopaul7764
    @lazlopaul7764 4 years ago +1

    Really well explained!

  • @amel3778
    @amel3778 5 months ago +1

    Thank you for sharing. However, I have a question, please. I am currently implementing an MPC to control the temperature inside a room. To model the system, I used a neural network that takes as input a window of data (disturbance_w, control_w, output_w) to predict the output over a prediction horizon. Then, I use these predictions to calculate an objective function in order to obtain the first command to apply to my system to get the first output. For this, I use scipy, but the control proposed by this library remains constant regardless of the output values (the output does not follow the reference). Do you have any advice to improve this?

  • @rafaelsantosarnedomendoza958
    @rafaelsantosarnedomendoza958 2 years ago

    Great video! I appreciate that very much.

  • @kishorsahu9093
    @kishorsahu9093 1 year ago

    Thank you for this amazing lecture series. Please make more videos about model predictive control.

  • @aakash_santoor
    @aakash_santoor 1 year ago

    Great video, great as always :)
    However, I am missing the important point that *if* we need to take constraints into account, we can't use an LQR, since it controls an unconstrained system to a zero state; this is why we need to implement an MPC.
    Thanks again for your videos!
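    The comment above captures the standard motivation for MPC over LQR. A minimal numeric sketch of that point (the double-integrator model, weights, and bounds below are all invented for illustration): the same quadratic cost an LQR would minimize, but with hard input bounds enforced at every solve, which a fixed LQR gain cannot do.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy double integrator (made-up numbers, not from the video)
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    N = 15                                  # prediction horizon

    def horizon_cost(u_seq, x0):
        """LQR-style quadratic cost accumulated over the prediction horizon."""
        x, J = x0.copy(), 0.0
        for u in u_seq:
            x = A @ x + B.flatten() * u
            J += x @ x + 0.1 * u**2
        return J

    x = np.array([2.0, 0.0])
    bounds = [(-0.2, 0.2)] * N              # the constraint an LQR can't enforce
    applied = []
    for _ in range(80):
        res = minimize(horizon_cost, np.zeros(N), args=(x,), bounds=bounds)
        u0 = res.x[0]                       # apply only the first input
        applied.append(u0)
        x = A @ x + B.flatten() * u0
    ```

    Every applied input stays inside the bound while the state is still driven toward the origin, just more slowly than an unconstrained LQR would manage.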

  • @Frosty-dz1ul
    @Frosty-dz1ul 3 years ago

    Great video Steve, thanks!

  • @lilbaejing7185
    @lilbaejing7185 2 years ago +2

    Two questions:
    - Why do we bother optimizing over the entire horizon when only the t+1 step is necessary? Unless the optimization solution is generated starting from the end of the horizon backwards.
    - Does it often happen nowadays that real-time systems run their optimizations over the air on a powerful backend (cloud)? What types of systems are suitable, given the network latencies of doing so?

    • @stefnirk
      @stefnirk 1 month ago

      1: You need to know how the system will behave given different inputs over a longer window than just t+1 in order to choose the best input at t+1.
      2: Most systems need faster response times than a remote backend could provide, and in cases where the optimization takes a long time anyway, there is little benefit to running it off-prem.
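      The first point can be made concrete with a toy experiment (all numbers invented): for a double integrator, the input changes velocity and only velocity changes position, so a one-step lookahead literally never sees a reason to accelerate, while a longer horizon drives the state home. A hedged sketch in Python:

      ```python
      import numpy as np
      from scipy.optimize import minimize

      # Toy double integrator: u changes velocity, velocity changes position,
      # so the effect of u on position only appears two steps later.
      A = np.array([[1.0, 0.1], [0.0, 1.0]])
      B = np.array([[0.0], [0.1]])

      def run(N, steps=60):
          """Receding-horizon loop with an N-step lookahead; returns |position|."""
          def cost(u_seq, x0):
              x, J = x0.copy(), 0.0
              for u in u_seq:
                  x = A @ x + B.flatten() * u
                  J += x @ x + 0.01 * u**2
              return J

          x = np.array([1.0, 0.0])
          for _ in range(steps):
              u0 = minimize(cost, np.zeros(N), args=(x,)).x[0]
              x = A @ x + B.flatten() * u0   # apply only the first input
          return abs(x[0])

      greedy, lookahead = run(1), run(10)
      ```

      With N=1 the position term of the cost does not depend on u at all, so the "greedy" controller never moves; with N=10 the same receding-horizon loop converges.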

  • @wenleibai5321
    @wenleibai5321 2 years ago

    Thanks for the lecture, professor. I have one question; could you please clarify? Around 2:45, I thought the first control we locked in at k+1 should be applied to the system at k+1, not at k as in the plot above -- because at the initial time k, Xk is given to calculate (by optimization) the controls Uk (not useful), Uk+1, Uk+2, etc. to minimize the objective within the prediction horizon; but the optimization takes time, so it's only practical to apply Uk+1 to the system at k+1, not at k. Similarly, at k+1, Xk+1 is known, Uk+1 (calculated at k) is applied to the system, and Uk+2 is to be calculated. This process repeats.

  • @ahmedkotb3912
    @ahmedkotb3912 4 years ago +4

    Thank you for the helpful explanation. But I have two questions:
    1) Let's say I'm controlling the speed of a motor that is driving a variable load. In the model, the load is considered a disturbance and is not included in the differential equations. So if, for example, the load is suddenly removed from the motor, the speed will increase quickly. How can this optimization compensate for such disturbances if they are not included in the model? PI controllers can compensate because their outputs depend on the error signal.
    2) How is this optimization carried out?

    • @Eigensteve
      @Eigensteve  4 years ago +6

      Great questions. Answering in reverse order: 2) there are tons of approaches to this optimization, and I can't begin to list them here. But there are several good open source software packages out there. You can check out github.com/eurika-kaiser/SINDY-MPC for one approach. 1) This is the beauty of MPC. You are constantly taking a measurement of where the system actually is, so once it gets close enough to the set point, the MPC controller will stop pushing in this direction. I would encourage trying to code this example up, as it would be very relevant.
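      Taking up the suggestion to code this example: a toy version of question 1), with all numbers invented. The controller's internal model has no load term, yet because each step re-optimizes from the *measured* speed, the tracking error stays bounded while the unmodeled load acts and disappears once the load is removed.

      ```python
      import numpy as np
      from scipy.optimize import minimize

      # Toy motor-speed loop (all numbers invented). The controller's model
      # of the motor, w_next = a*w + b*u, contains NO load term; the real
      # plant subtracts an unmodeled load torque d.
      a, b = 0.9, 0.5
      w_ref = 1.0
      N = 5                                     # prediction horizon

      def model_cost(u_seq, w0):
          """Cost of an input sequence under the disturbance-free model."""
          w, cost = w0, 0.0
          for u in u_seq:
              w = a * w + b * u
              cost += (w - w_ref) ** 2 + 0.01 * u ** 2
          return cost

      w, history = 0.0, []
      for k in range(60):
          d = 0.2 if k < 30 else 0.0            # load suddenly removed at k=30
          u0 = minimize(model_cost, np.zeros(N), args=(w,)).x[0]
          w = a * w + b * u0 - d                # the REAL plant includes d
          history.append(w)
      ```

      With the load present there is a steady offset (much like a proportional controller); once the load vanishes, the measured state feeds back and the speed settles on the reference. Adding a disturbance estimate or integral action to the model would remove the offset entirely.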

    • @hithere8541
      @hithere8541 4 years ago

      @@Eigensteve, I am just a beginner here, but if MPC uses LQR as its optimizer, then the cost does contain the state term, which represents errors. So, just as you can make the system respond to errors faster or slower depending on the integral gain, can't you also do that by changing the matrix Q?

  • @raspier8809
    @raspier8809 2 years ago

    Great explanation

  • @ritugoel2187
    @ritugoel2187 2 years ago

    Nicely explained. Inspiring, thank you.

  • @extendedclassroom5795
    @extendedclassroom5795 4 years ago +1

    Liked the lecture. Thank you sir.

  • @ShaunStruwig
    @ShaunStruwig 4 years ago

    Great explanation! Thank you!

  • @standard_output
    @standard_output 5 months ago

    Isn't a subtraction block missing in your diagram that takes the setpoint - control input and gives it to the plant?

  • @josephdaquila2479
    @josephdaquila2479 10 months ago

    Great lecture.
    I am wondering: if you are using the same computation device to measure the signal as well as to generate the optimal path, could there be a tradeoff in re-calculating the optimal trajectory at every step? Or in calculating too many steps ahead of time?

  • @elalonde11
    @elalonde11 3 years ago +1

    Steve, thank you for the great lecture series. You'll be happy to know that people are still sitting down and watching the whole series.
    I've seen you answer relatively recent comments on some other videos, so if you see this: since you mention a number of times that you would make your students prove one equation or another, I was wondering at what level of learning you'd place the material in this bootcamp. I'm a PhD candidate in wind engineering, so I have pretty much no background in control engineering, and it was easy enough to follow, but I know this would have been incomprehensible to me just a few years ago. For software or mechanical engineers, would this material be covered in undergrad? Or are these graduate-level concepts?

    • @RugnirSvenstarr
      @RugnirSvenstarr 3 years ago

      For what it's worth, I'm studying for my robotics master's, and half of this stuff is what I did in my bachelor's; the other half has been new to me.

  • @gondala
    @gondala 4 years ago +1

    This reminds me of adaptive control, of course with adaptive gains.

    • @Eigensteve
      @Eigensteve  4 years ago +1

      It does kind of feel like that, since the algorithm can adjust its behavior. But having a model in MPC makes it very powerful and often more responsive.

    • @gondala
      @gondala 4 years ago

      @@Eigensteve I see. I haven't learned MPC before, but my first impression is that it is something like adaptive control.

  • @yasserhaddou5393
    @yasserhaddou5393 7 months ago

    Can we apply MPC to a model to predict its parameters online, not for a control system but just to identify the nonlinear model?

  • @cameronurban3653
    @cameronurban3653 3 years ago +2

    Thank you for your videos! Are there any stability guarantees when using MPC on nonlinear systems?

  • @yasir9909
    @yasir9909 4 years ago +1

    Could you please also upload a detailed video lecture on MPC?
    Your Control Boot Camp lectures are very useful in revising and honing the fundamental concepts in Control Systems Engineering.

    • @Eigensteve
      @Eigensteve  4 years ago +3

      I would like to make a more detailed video on the actual optimization... on the list.

    • @yasir9909
      @yasir9909 4 years ago

      @@Eigensteve Thank you!

    • @Qaidi_804
      @Qaidi_804 1 year ago

      @@Eigensteve dear Sir, waiting for this video.

  • @miladyazdanpanah3895
    @miladyazdanpanah3895 3 years ago

    @Steve Brunton Suppose we have nonlinear dynamics and obtain a linear representation via the Koopman operator. Is it OK to apply MPC to the Koopman linear representation? If so, which is more reliable: linearizing about an equilibrium point, or Koopman?

  • @barbonson_richards
    @barbonson_richards 10 months ago

    Can you use information about the deviations of predicted output compared to eventually measured output to improve the model over time to reduce disturbances (mainly caused by model inaccuracies)?

  • @yassineaguelmam8674
    @yassineaguelmam8674 1 year ago

    Why do we apply only the first control input proposed by the optimizer instead of using all the controls?

  • @akash10111
    @akash10111 4 years ago

    You have nicely explained a complex topic.
    I've designed an MPC controller for a power converter, but I don't know how to tune the controller, because variation in the load affects its performance. Could you please suggest something regarding this?

  • @akanguven114
    @akanguven114 1 year ago

    Hi Sir! Thank you for this valuable, splendid explanation. Could there be an example from you, or are there any sources to look at for implementing it?

  • @JamesTJoseph
    @JamesTJoseph 4 years ago +1

    Is MPC a feedback control?

  • @benlaroche2777
    @benlaroche2777 2 years ago

    Great video Steve! Does anyone have a suggestion for creating a system model to use with MPC? I have a large amount of historical data for a mechanical vapor recompression system.

  • @benjaminpommer628
    @benjaminpommer628 1 year ago

    What I don't understand is why one needs output constraints, especially soft ones, since we have the set value anyway.

  • @randywelt8210
    @randywelt8210 4 years ago

    Learning Neural Weights + update Inference on the fly. I wonder if Tesla is already doing this for updating their Autopilot with edge cases?!

  • @how2matlab444
    @how2matlab444 4 years ago +1

    great video!

  • @STWNoman
    @STWNoman 4 years ago

    Dear professor, I am working on fuel cell control and optimization, and I am interested in MPC. How do I start? Moreover, if I simulate a fuel cell and extract input-output data for MPC, will that be a valid case? Mostly MPC needs experimental data. A little confused. Need your kind help.

  • @taiwoiromini6016
    @taiwoiromini6016 3 years ago

    Does the optimal control signal have to be a step function?

  • @bilalsadiq3495
    @bilalsadiq3495 3 years ago

    Dear Dr. Steve Brunton, why didn't you demonstrate this superb explanation of yours in MATLAB?

  • @r0ckahead
    @r0ckahead 4 years ago +3

    Thank you for this introduction! I hope there will be more videos on MPC in the future. What book do you recommend for MPC beginners?

    • @Eigensteve
      @Eigensteve  4 years ago +14

      Thanks -- I will probably do a video on "Deep MPC" (MPC with deep learning) soon. To start, I would actually recommend just downloading some code and playing around. Eurika Kaiser has a nice Github repo: github.com/eurika-kaiser/SINDY-MPC

    • @ahmedkamiss7643
      @ahmedkamiss7643 3 years ago

      @@Eigensteve Thank you again Professor. Can't wait for the Deep MPC video !

  • @AleeEnt863
    @AleeEnt863 1 year ago

    Great!

  • @strange67x
    @strange67x 2 years ago

    I want to ask: why do I need to calculate the whole horizon if I only ever use the first point of it?

  • @koneruneelesh79
    @koneruneelesh79 3 years ago

    What action will the MPC take when it has reached the setpoint?

  • @eng.athraa5707
    @eng.athraa5707 4 years ago

    Hi... can I ask what the difference is between model-based control and model predictive control? And is model-based control an approach like Ziegler-Nichols?

  • @dariushbayatpoor7384
    @dariushbayatpoor7384 3 years ago +1

    Thank you for the amazing video on this topic.
    I am interested in adding stochasticity to nonlinear MPC. Can you please refer me to some references on this topic?

  • @shyennepinheiro4547
    @shyennepinheiro4547 4 years ago

    Hey Steve, I have a doubt about the process. A horizon is chosen, a minimization is made similar to the LQR from a time t_0 to a time t_f, and a value of K is calculated. This K, with u = -Kx, is applied and the state is calculated. In the next iteration, the new information is added to the model, so that a new K is calculated, which may be different from the previous one. Again, this u = -Kx control law is implemented. Is that the logic of the process? In general, is it like using the LQR at each iteration? So that, if nothing in the model changes, we would have an equal K at each iteration and the process would be nothing more than an LQR?
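    The intuition in the question above can be checked numerically: for an unconstrained linear-quadratic problem with a fixed model, re-solving the finite-horizon problem at every step returns the same first-step gain each time, so the receding-horizon controller collapses to a constant-gain LQR (constraints or model updates are exactly what makes MPC different). A sketch with a made-up two-state system, using the standard backward Riccati recursion:

    ```python
    import numpy as np

    # Finite-horizon LQR via the backward Riccati recursion (toy two-state
    # model, numbers made up). With a fixed model and no constraints,
    # re-solving this at every MPC step returns the same first-step gain K,
    # so the receding-horizon controller is just a constant-gain LQR.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    Q = np.eye(2)
    R = np.array([[0.1]])

    def first_step_gain(N):
        """Gain applied at the current step for an N-step horizon."""
        P = Q.copy()
        for _ in range(N):                 # backward pass over the horizon
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ A - A.T @ P @ B @ K
        return K

    K_now = first_step_gain(100)           # "solved" at time k
    K_next = first_step_gain(100)          # "re-solved" at time k+1: identical
    ```

    For a long horizon, this first-step gain also converges to the infinite-horizon LQR gain, which is the formal version of "the process would be nothing more than an LQR."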

  • @sharanyashastry8926
    @sharanyashastry8926 1 year ago

    Hello professor! Could you explain invariant sets and maximal control invariant sets? I'm having trouble grasping the concept.

  • @techzila5379
    @techzila5379 3 years ago

    Sir, which paper have you referenced in this?

  • @nmana9759
    @nmana9759 4 years ago

    I don't understand. Isn't k equal to the time t? What does k+1 imply? Is it the shift of the time horizon?

  • @bellilaahmednassim5308
    @bellilaahmednassim5308 4 years ago

    Is MPC a numerical command?

  • @shashank3165
    @shashank3165 4 years ago +3

    Thanks a lot for this video. Really easy to understand, and looking forward to future videos. Would you be making any videos on deep learning for MPC or differentiable MPC?

    • @amanthakur4925
      @amanthakur4925 1 year ago

      Sir, can you please make more lectures on model predictive control? There are few lectures available, and it's a new topic for research. Please, we need good content with MATLAB examples.

  • @RagingPotato96
    @RagingPotato96 3 years ago

    thanks

  • @kurious-design
    @kurious-design 4 years ago +7

    I love the backward-writing skills!

    • @linde_learn3892
      @linde_learn3892 4 years ago +2

      Probably looking into a mirror blackboard

    • @PaulWintz
      @PaulWintz 3 years ago

      I think they mirror the video after recording it.

  • @Insider_English
    @Insider_English 9 months ago

    the "set goal" unfortunately IS to have "control" over Time. that's where our problems begin. but denying it won't change the equation.

  • @oliviajulia7913
    @oliviajulia7913 3 years ago +1

    Thanks for this video! Could you perhaps do a video with a MATLAB example applying MPC to a linear model obtained from system identification?
    In practice, we usually only have measurement data from a system, so both A and B are unknown in our model. It would be great to do a full example considering this.
    Thanks for your work!

    • @Eigensteve
      @Eigensteve  3 years ago +1

      Great suggestion! Check out this code by Eurika Kaiser: github.com/eurika-kaiser/SINDY-MPC
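      In the spirit of the suggestion above (the SINDY-MPC repo does the full version), a hedged least-squares sketch of identifying A and B from data alone, DMDc-style; the "true" system below is invented purely so the fit can be checked:

      ```python
      import numpy as np

      # DMDc-style least-squares identification of A and B from data alone.
      # The "true" system here is made up so we can verify the recovered fit.
      rng = np.random.default_rng(0)
      A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
      B_true = np.array([[0.0], [0.5]])

      # Collect one trajectory driven by random inputs (plus a little noise)
      U = rng.standard_normal((200, 1))
      X = [rng.standard_normal(2)]
      for u in U:
          X.append(A_true @ X[-1] + B_true @ u + 0.001 * rng.standard_normal(2))
      X = np.array(X)

      # Stack regressors [x_k, u_k] and solve x_{k+1} = [A B] [x_k; u_k]
      Z = np.hstack([X[:-1], U])                     # shape (200, 3)
      Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
      AB = Theta.T                                   # shape (2, 3)
      A_hat, B_hat = AB[:, :2], AB[:, 2:]
      ```

      The identified A_hat and B_hat can then be dropped straight into the MPC prediction model, which is essentially what the data-driven MPC pipelines do.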

  • @atharvachavan3418
    @atharvachavan3418 4 years ago +11

    Has anyone noticed that Steve draws mirror images of everything he's explaining, or am I wrong? Just curious, because I think it's amazing!!

    • @philiphamilton2107
      @philiphamilton2107 4 years ago +11

      No, when the video is being recorded, he writes it the "right way". Then he mirrors the video.

    • @beagle989
      @beagle989 2 years ago

      I really had to think about it for a minute lol

  • @sapertuz
    @sapertuz 4 years ago

    Where is the continuation of this video?

  • @v1Broadcaster
    @v1Broadcaster 3 years ago

    Okay, now where do you use it?

  • @danawen555
    @danawen555 4 years ago

    very good video! thanks!!!

  • @lemontable6373
    @lemontable6373 3 years ago

    bruh. How do I like this video twice? For anyone who didn't realize, he's writing backwards.

  • @saulspringmind1013
    @saulspringmind1013 3 years ago +1

    All these comments and none of them mention how well he writes backwards.

  • @SonnyGeorgeVlogs
    @SonnyGeorgeVlogs 8 months ago +1

    Is this guy writing backwards??

  • @nomnamecpp
    @nomnamecpp 4 years ago

    Shrill sounds when you write

  • @lkd982
    @lkd982 2 years ago

    get some new pens

  • @hongkitchong71
    @hongkitchong71 4 years ago

    Nice backward-writing skills. MATLAB had a video on this, and it is more detailed.