MPC and MHE implementation in Matlab using Casadi | Part 1

  • Published 2 Nov 2024

COMMENTS • 360

  • @RayminL
    @RayminL 18 днів тому +1

    The best MPC tutorial ever.

  • @roboticscon
    @roboticscon 3 місяці тому +1

    You are such an amazing professor; our world needs many more professors like you in control engineering. You are very clear. I especially love professors who teach starting from the basics: control engineering is multidisciplinary, and teaching very advanced concepts without showing how and from where to start just doesn't work. Thank you. Thank you.

  • @marcyaudrey6608
    @marcyaudrey6608 Рік тому +2

    Dear Professor @Mehrez, thank you very much for this amazing lecture. I have been looking for a real implementation of an MPC problem for a while. This lecture comes to my rescue. Thanks once more Sir and keep it up.

  • @controlresearchlab-kg6fy
    @controlresearchlab-kg6fy Рік тому +1

    Your lecture is really interesting, and your way of presenting makes new concepts much easier to learn. I learned a lot from it.

  • @theidealisticman
    @theidealisticman 9 місяців тому +2

    Not the hero we deserve, but the hero we need.

  • @ahmedhassan-jf4wr
    @ahmedhassan-jf4wr Рік тому +2

    Hello Dr. Mehrez,
    Thank you for sharing this amazing workshop; it was so useful.

  • @masdikaaliman6425
    @masdikaaliman6425 10 місяців тому +2

    Excellent MPC tutorials 👌👌👌

  • @atharvachavan3418
    @atharvachavan3418 3 роки тому +2

    Best MPC tutorial out there!!!!

  • @AbdullahRushdi
    @AbdullahRushdi 2 роки тому

    I wanted to know where you are from, and I felt proud when I found out that you are from Egypt... May God protect you, Doctor :D Thanks for the very useful video.

  • @kaspulanwar7082
    @kaspulanwar7082 Рік тому +1

    Nice tutorial sir, Thank you so much...
    Greetings from Indonesia

  • @pratikprajapati8620
    @pratikprajapati8620 3 роки тому +1

    Thanks a lot! Your videos are literally some of the best material out there on YouTube.

  • @arghyamallick8289
    @arghyamallick8289 3 роки тому +2

    Masterpiece video....Thank you very much sir.

  • @Paulina-su8nk
    @Paulina-su8nk 3 роки тому +1

    Thank you so much for publishing this lecture! It's amazingly well explained.
    Greetings from the UK :)

    • @Paulina-su8nk
      @Paulina-su8nk 3 роки тому +1

      ps. I was taking notes of everything you said, and after one hour of video, I realised you had uploaded the presentation on GitHub... I could've saved a lot of time if I knew it 😅 Anyways, thanks again!!

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому +1

      Thank you for your comment. I am glad that the workshop was helpful.

    • @Paulina-su8nk
      @Paulina-su8nk 3 роки тому

      @@MohamedWMehrez Thank you very much! I've watched your tutorial a couple of times, and I think I understood everything you said, but I was wondering how to implement multiple shooting for DAE problems. I know that CasADi has a DaeBuilder, but is that enough for the calculation of the derivatives, or do we have to use Euler anyway? Thank you again.

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому +1

      @@Paulina-su8nk Thank you for your comment. I haven't tried DAEs with MPC before. Nonetheless, I believe it should be straightforward to use DAEs for prediction: it's just a matter of adding the algebraic part of the equations as is and using Euler or RK4 for the differential part. Just keep in mind that Euler discretization converts the ODE into an algebraic expression. You can also check the CasADi manual/website for the DAE integrators they provide.
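
      A minimal sketch of that idea for a semi-explicit DAE, xdot = f_d(x, z, u) with 0 = f_a(x, z, u), assuming the multiple-shooting variables from the workshop code (X, U, g, T, N); the algebraic variables Z and the Functions f_d and f_a are hypothetical names:

        Z = SX.sym('Z', n_z, N);                        % algebraic variables become extra decision variables
        for k = 1:N
            st = X(:,k);  con = U(:,k);  alg = Z(:,k);
            st_next_euler = st + T*f_d(st, alg, con);   % Euler step for the differential part only
            g = [g; X(:,k+1) - st_next_euler];          % shooting (continuity) constraint
            g = [g; f_a(st, alg, con)];                 % algebraic equations enforced at each node
        end
        % both blocks are equality constraints: set the corresponding lbg and ubg entries to 0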

    • @Paulina-su8nk
      @Paulina-su8nk 3 роки тому

      @@MohamedWMehrez Thank you so much for your reply! It's really helpful.
      I am now trying to implement a DAE problem to MPC for my PhD project, so I've been seeking a lot of information in order to implement it successfully. Your tutorials have helped me a lot to understand the basis of MPC, MHE and CasADi. Thank you so much for this!

  • @wahyuadnyana2296
    @wahyuadnyana2296 3 роки тому +1

    Thank you Dr. Mohamed it was really helpful.

  • @welidbenchouche
    @welidbenchouche 5 років тому +1

    Already watching, professor. You saved my day.

  • @marcyaudrey6608
    @marcyaudrey6608 Рік тому +1

    I am looking forward to implementing the MPC algorithm on the robot's system you have presented; then I will try the algorithm on other systems. Thanks a lot

  • @knighttime19
    @knighttime19 3 роки тому

    Thank you, Prof. I just modified your 3rd motivation code to find the center point and radius of a circle from a given set of points around its circumference. It took only milliseconds!! Amazing.

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому +1

      great job!

    • @bilalsadiq3495
      @bilalsadiq3495 3 роки тому

      @@MohamedWMehrez Dr. Mehrez, superb teaching, and your explanations are awesome. I wish I could be your student.
      Dear Dr. Mehrez, would you be willing to share the code with me, as I also have to work on MPC (I hope you don't mind)? bilalsadiq1100@gmail.com is my email id; waiting for your kind response.
      Thank you

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому

      @@bilalsadiq3495 Hi, thank you for your comment. The code is linked in the video description.

    • @bilalsadiq3495
      @bilalsadiq3495 3 роки тому

      @@MohamedWMehrez Thank you so much, Dr. Mehrez. Could you tell me whether, in the MPC (augmented model), you used an embedded integrator? Thanks once again

    • @bilalsadiq3495
      @bilalsadiq3495 3 роки тому

      @@MohamedWMehrez Thank you. I also noticed that you didn't use the augmented state-space model approach, like in the book on model predictive control and its MATLAB implementation by Liuping Wang?

  • @mhdhouri2906
    @mhdhouri2906 Рік тому +1

    May God count this among your good deeds, Doctor.

  • @muhammadhd8407
    @muhammadhd8407 2 роки тому +1

    A great explanation, Dr. Mohamed; thank you very much.

  • @yasirnaeem7192
    @yasirnaeem7192 Рік тому +1

    Amazing explanation Sir..

  • @umarniazi8617
    @umarniazi8617 3 роки тому +1

    Great tutorial Mohamed. Thank you very much.

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому

      You are most welcome!

    • @umarniazi8617
      @umarniazi8617 3 роки тому

      @@MohamedWMehrez I have a question. If the rhs is not an analytic function like in your example, but a time-varying ODE whose parameters are taken from data, is it possible to define such a function as we do with ode45 in MATLAB?

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому

      Yes you can. Just pass those values in the parameters vector P. Just be aware of where you are placing them within P.
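
      A minimal sketch of that, assuming the workshop variable names (states, controls, X, U, g, N, T) and a hypothetical scalar parameter a that enters the model and comes from data:

        P   = SX.sym('P', 3 + 3 + N);                  % [x0; xs; a_1 ... a_N]
        a   = SX.sym('a');                             % time-varying model parameter
        rhs = [v*cos(theta)*a; v*sin(theta)*a; omega]; % hypothetical place where 'a' enters the model
        f   = Function('f', {states, controls, a}, {rhs});
        for k = 1:N
            st_next_euler = X(:,k) + T*f(X(:,k), U(:,k), P(6+k));  % use the value for step k
            g = [g; X(:,k+1) - st_next_euler];                     % multiple-shooting constraint
        end
        % at run time:  args.p = [x0; xs; a_data(:)];              % feed the N measured values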

  • @hanenyaakoubi294
    @hanenyaakoubi294 2 роки тому

    Thank you, Dr. Mohammed, it is very interesting.

  • @fizasayed8396
    @fizasayed8396 3 роки тому +3

    Dear Mohamed,
    I don't have words to thank you for these amazing tutorials. This was a turning point for my master's thesis; I will definitely cite your work.

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому +2

      Hi Fiza, Thank you for your comment. Yes, that's exactly how it works.

    • @fizasayed8396
      @fizasayed8396 3 роки тому

      @@MohamedWMehrez Thank you for the quick reply, Mohamed. One more thing is not quite clear to me: if we apply only the first control input and then get the new state based on that, what do we do with the state outputs we get from the solver (in multiple shooting we get both control inputs and states as outputs, right)? Sorry if the question is too naive :P

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому

      @@fizasayed8396 That's a good question. As you said the control is applied to the system. The predicted state is used as an initialization for the next optimization step after being shifted.
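
      In code, the shift for the next initial guess can look like the following sketch (assuming the reshaped solver outputs u of size N x 2 and X0 of size (N+1) x 3, as in the workshop code):

        u_applied = u(1,:)';                         % only this control is sent to the system
        X0 = [X0(2:end,:); X0(end,:)];               % shift predicted states by one step, repeat the last one
        u0 = [u(2:end,:);  u(end,:)];                % shift predicted controls the same way
        args.x0 = [reshape(X0', 3*(N+1), 1); reshape(u0', 2*N, 1)];  % warm start for the next solve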

  • @douvenw6721
    @douvenw6721 2 роки тому +1

    This lecture about MPC is the best one I have seen so far. How can I get C++ code from the MATLAB code in your case? C++ is more common for driving hardware. Thanks very much overall!

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому

      Thank you for your comment. I think you can download the C++ source code/pre-built library of CasADi from the website; I haven't tried this before, though. There are also some details on C++ coding with CasADi in its manual.

    • @douvenw6721
      @douvenw6721 2 роки тому

      @@MohamedWMehrez thanks, I will have a try.

  • @darkhanzholtayev3224
    @darkhanzholtayev3224 4 роки тому +1

    Thank you for generously sharing the lecture!

  • @Himanshu-xe7ek
    @Himanshu-xe7ek Рік тому +1

    Hello Dr. Mehrez, it's a great tutorial. I learned a lot about the implementation process of MPC. I have a few questions for you:
    1) Is this implementation a linear MPC or a nonlinear MPC?
    2) How can I modify this code to track a trajectory rather than a single waypoint?

    • @MohamedWMehrez
      @MohamedWMehrez  Рік тому +1

      Hi Himanshu, thank you for your comment. 1) This is nonlinear MPC, as we used a purely nonlinear model for the robot. 2) There is another video in this series that explains how to implement trajectory tracking.

    • @Himanshu-xe7ek
      @Himanshu-xe7ek Рік тому

      @@MohamedWMehrez Thank you for the information. Can you refer or make a tutorial on explicit MPC?

  • @ibrahimseleem
    @ibrahimseleem 2 роки тому

    Thanks a lot for your incredible lectures

  • @chillrelaxingbeatifulmusic5951
    @chillrelaxingbeatifulmusic5951 3 роки тому +2

    Hello Mohamed,
    thanks for the great lecture.
    I want to design an MPC with MATLAB (Simulink) and CasADi.
    My system runs in real time, so I have to do the optimization online in Simulink. Do you have any advice?
    The model is nonlinear.
    Have a nice day.

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому +2

      Hi Josh, thank you for your comment. You can actually couple CasADi with Simulink. Check CasADi's website for more details on that.

  • @ausama84
    @ausama84 3 роки тому +2

    Outstanding!!

  • @sasaiara
    @sasaiara Рік тому +1

    Hello Dr. Mehrez,
    It is indeed a great lecture video. I am a beginner in MPC, so I learnt a lot. However, I have a question regarding the code I got from the GitHub link. My MATLAB R2022a cannot find the visualization function: you used the "Draw_MPC_point_stabilization_v1" command for visualization, and my code shows an error on this line. Can you help me in this regard?

    • @MohamedWMehrez
      @MohamedWMehrez  Рік тому +1

      Hi, thank you for your comment. This function is actually included in the files provided. Just make sure all files are in the same folder.

  • @DDONISBABBURGER
    @DDONISBABBURGER 2 роки тому +2

    Dear Professor Mohamed,
    Thanks for the graceful lectures.
    I've modified your code to fit my dynamic model.
    By the way, I have a question about implementing part of the code.
    If I additionally want to embed rate constraints (rate of input or rate of state), how can I express these constraints in CasADi?
    I've been searching for a method to add rate constraints for a few days, but I haven't found an appropriate explanation.
    Can you explain how to implement rate constraints in CasADi?
    Thanks in advance.

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому

      Hi, thank you for your comment. By the definition of the rate, you can instead constrain the change in the state/input at every time step and add it to the constraint vector g, for example g = [g; u(k+1) - u(k)].
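
      A sketch of that in the style of the workshop code (dv_max and domega_max are hypothetical per-step limits you would choose yourself):

        for k = 1:N-1
            g = [g; U(:,k+1) - U(:,k)];              % input change over one sampling period
        end
        % bounds for the appended rows in the solver arguments
        args.lbg(end+1:end+2*(N-1)) = repmat([-dv_max; -domega_max], N-1, 1);
        args.ubg(end+1:end+2*(N-1)) = repmat([ dv_max;  domega_max], N-1, 1);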

    • @DDONISBABBURGER
      @DDONISBABBURGER 2 роки тому

      @@MohamedWMehrez Thank you sir.

    • @DDONISBABBURGER
      @DDONISBABBURGER 2 роки тому

      @@MohamedWMehrez I constrained the rate as you told me, but it does not work properly.
      I sent an email to your Gmail address. If possible, could you answer the email?
      I know you must be busy; sorry for the basic question.

  • @runyanglian3747
    @runyanglian3747 Рік тому +1

    Thank you so much, Dr. Mohamed, for the lecture. It provided great insight into the capabilities of CasADi. I have a question: is it also possible for CasADi to solve partial differential equations, or optimization problems involving PDEs, for example the 1D heat equation? If it is possible, how can one set up the equations for that? Thanks!

    • @MohamedWMehrez
      @MohamedWMehrez  Рік тому +1

      Hi Runyang, thank you for your comment. I haven't tried CasADi for PDEs before, but if you can convert your PDE into difference equations, similar to what we did with the ODE, then you can use CasADi for your problem. You can look up the methods used to discretize PDEs; that will help you implement it.

  • @welidbenchouche
    @welidbenchouche 5 років тому +1

    Hello professor, would you be so kind as to put up more videos on MPC? Your way of teaching is so good. I'm still struggling with the trajectory tracking problem. All the best!

    • @MohamedWMehrez
      @MohamedWMehrez  5 років тому +1

      Hi, I uploaded a new video on how to do that.

  • @dnza.86
    @dnza.86 3 роки тому +1

    Hello Mr. Mehrez, first of all thank you very much for the resources. As everyone said below, your work is amazing and so helpful.
    My question is about the NMPC design with multiple shooting and RK4. Before discretization you define an "initial condition constraint", which is g = [g; st - P(1:3)] at 1:32:10. Does this constraint belong to the multiple shooting step, or does it come from another condition? I am now working with a CasADi interface named MPCTools. Multiple shooting is the default discretization there, and the constraints are also defined by the interface designer. I am not sure whether the "initial condition constraint" should be defined by me separately or is already defined in the multiple shooting source code. Thank you very much in advance for your valuable comments.

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому +1

      Hi Dnz, Thank you for your comment. Yes, the constraint comes from the multiple shooting formulation. I don't think that you have to redefine it in MPCtools if you are already using the multiple shooting option. All the best!
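
      For reference, a sketch of where that constraint sits in the multiple-shooting formulation (names as in the workshop code; the 3 is the number of robot states):

        g  = [];
        st = X(:,1);
        g  = [g; st - P(1:3)];        % pin the first shooting node to the measured initial state
        % ...the loop then appends one continuity constraint per shooting interval...
        % all of these rows are equalities, so:
        args.lbg(1:3*(N+1)) = 0;
        args.ubg(1:3*(N+1)) = 0;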

  • @engineeringdocu8068
    @engineeringdocu8068 3 роки тому +1

    Outstanding MPC series. Thank you, professor.
    I've tried the implementation, but this exception arises:
    if the start orientation is at 170 degrees and the goal orientation is at -135 degrees, for example,
    CasADi finds that it's more optimal to reduce the angle through 0 and then reach the negative value of -135: turning clockwise.
    It's as if the problem, as formulated here, doesn't know that it is optimal to increase the angle past 180 degrees to immediately reach negative values
    such as -180 and then get to -135 in a CCW motion.
    How could we make the problem recognize the modulo-pi wrap, if that's the issue?
    Thank you again, professor. Top-notch explanation.

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому +1

      Hi, thank you for your comment. It's not a CasADi problem; it's a problem with the model used, i.e., the angular change is an integration of the rotational speed, so the angle keeps increasing or decreasing and will not jump like -180 => 179. You need to change the model to use a rotation matrix to represent the rotation instead of a single equation for the angular position.
      Check out this paper:
      www.sciencedirect.com/science/article/pii/S2405896320334182

    • @engineeringdocu8068
      @engineeringdocu8068 3 роки тому

      ​@@MohamedWMehrez That makes sense, I'll implement it and see how the vehicle behaves .Thank you for the fast feedback Professor.

  • @johanclaassen9
    @johanclaassen9 4 роки тому +1

    Thank you very much, Professor, for this educational video on MPC. If you would not mind, could you maybe recommend a textbook for further learning of MPC?

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому +1

      Thank you Johan for your comment. Here is a very good book on MPC
      sites.engineering.ucsb.edu/~jbraw/mpc/

  •  3 роки тому +1

    pure gold!

  • @crossbow6251
    @crossbow6251 2 роки тому +1

    Hello, Dr. Mehrez, thank you for the detailed explanations. I used your videos and also the nonlinear MPC example uploaded by Joris Gillis to learn how to implement the basic algorithm in MATLAB; in that example the Opti stack is used. The only difference I noticed between the implementations is that you give the x0 value (actually u0, the initial guess for the optimization variables) to the solver, whereas in the other example only the p value, the initial states, is given to the solver. So, as far as I understand, you use the already-computed u values as the starting guess in the next iteration of the MPC loop, after trimming the first value. Is that a necessary step (I mean, part of the most basic MPC algorithm), or is it implemented to increase performance? Thank you

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому

      Hi Thank you for your comment. This initialization shortens the computation time.

  • @BilalMubdir
    @BilalMubdir 2 роки тому +1

    Well done, doctor. If you don't mind, I have a question related to the RHS. I tried to solve nonlinear equations in which the decision variables (inputs) are squared (raised to the power 2), but it doesn't work for me. I tried other equations without squares and they worked perfectly. Do you know of any limitation of CasADi, especially of the IPOPT solver you used in your great presentation?

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому +1

      Thank you for your comment. There should not be such a limitation on the nonlinearity of your model.

    • @BilalMubdir
      @BilalMubdir 2 роки тому

      @@MohamedWMehrez do you mean that my nonlinearity is more than the capability of CasADi?

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому +1

      I meant that the nonlinearity in the model should NOT be a problem.

    • @BilalMubdir
      @BilalMubdir 2 роки тому

      @@MohamedWMehrez But in practice it really is a big problem. I also tried something else: for an input, I raised it to the power 2 and then took the square root of it, so the result should be just the input itself, because the two operations cancel each other, and it doesn't work. Note that without doing this it works perfectly. So I concluded that involving any power or square root prevents reaching an optimal solution. In fact, the result is always about 1e-5, whereas it should be around thousands!!

  • @souvikdas7773
    @souvikdas7773 4 роки тому +2

    Thank you for sharing this. I have a question. While the speed of computation is one of the issues (in which CasADi has an edge over the MATLAB built-in solvers), I am more interested in the stability and the sensitivity to perturbations. How stable is it compared with the built-in solvers in MATLAB?

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому +1

      Hi Souvik, all the simulations I ran with MATLAB's built-in functions, e.g. fmincon(), performed the same as under CasADi. That's why I don't use the MATLAB built-in functions for optimization anymore.

  • @mojok97
    @mojok97 2 роки тому +1

    Thank you for this great introduction to MPC. You said that MPC could also be used for path tracking, when no exactly timed trajectory is given. Do you have any more resources/papers on this? In my current project I am creating just a path and want my MPC to track it. Thank you very much in advance :)

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому +1

      Thank you for your comment. check out this paper; it shows you how to formulate a path following problem and solve it using MPC.
      www.sciencedirect.com/science/article/pii/S2405896317313733

    • @mojok97
      @mojok97 2 роки тому

      Awesome! Thank you very much.
      Kind regards from Germany

  • @peacefulmusic5426
    @peacefulmusic5426 3 роки тому +1

    That's a very useful lecture on multiple shooting. Thank you so much; it clarifies a lot of things. Just a question: in the shift function, when we update the states, what are the dimensions of that vector? Are the states updated by just one step or over the whole horizon?

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому +1

      Just one step. This is because the shift function replaces the physical model in the simulation.
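
      A sketch of such a shift function, in the spirit of the one provided with the code (f is the CasADi Function of the continuous model, T the sampling time):

        function [t0, x0, u0] = shift(T, t0, x0, u, f)
            st  = x0;  con = u(1,:)';           % apply only the first control of the plan
            st  = st + T*full(f(st, con));      % one Euler step: this stands in for the real plant
            x0  = full(st);
            t0  = t0 + T;
            u0  = [u(2:end,:); u(end,:)];       % shift the control plan for the next warm start
        end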

    • @peacefulmusic5426
      @peacefulmusic5426 3 роки тому +1

      @@MohamedWMehrez Ohh, thank you so much. I was completely misunderstanding that part.

  • @batuozmeteler8405
    @batuozmeteler8405 4 роки тому +1

    It is really difficult to find such well-explained lectures online, thank you very much. Also, I noticed that you answered questions on this comment section, which is quite beneficial. I would like to ask one if you don't mind:
    Currently, I try to use CasADi to transform an OCP (MPC Problem) into an NLP problem. I feel like I have succeeded in the coding part, however I don't know how to generate a C code to embed it into an ECU. I have seen your suggestion about Acados but the main problem is that my constraints ('g' vector) vector changes at every time step due to a parameter changing in the RHS equation. How can I generate code of an NLP solver, if my NLP problem formulation structure updates at every time-step due to g vector changing? I hope you and your loved ones are safe during this coronavirus outbreak. Your answer will be greatly appreciated professor, thank you in advance.

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому

      You are welcome! You can actually fix the number of the constraints to the maximum number you can possibly have. Then, you multiply each constraint by an extra parameter whose value is either 0 or 1 depending on whether or not this constraint will be active at a given MPC iteration. All the best!
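
      A sketch of that trick (obs_x, obs_y, obs_diam, rob_diam, and n_obs_max are illustrative names; the activation flags are appended to the parameter vector P):

        n_obs_max = 5;                               % fix the maximum possible number of constraints
        P = SX.sym('P', 6 + n_obs_max);              % last n_obs_max entries are 0/1 activation flags
        for k = 1:N+1
            for j = 1:n_obs_max
                c = -sqrt((X(1,k)-obs_x(j))^2 + (X(2,k)-obs_y(j))^2) + (rob_diam/2 + obs_diam(j)/2);
                g = [g; P(6+j)*c];                   % flag = 0 makes the row identically zero (inactive)
            end
        end
        % bound these rows by (-inf, 0]; at run time set args.p(7:end) to 0 or 1 as needed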

    • @batuozmeteler8405
      @batuozmeteler8405 4 роки тому +1

      ​@@MohamedWMehrez Thank you for your reply Dr. Mehrez. I apologize but I would like to verify my understanding of your suggestion. I am stuck trying to figure out how the code generation works, if you don't mind I want to ask some questions:
      I am using ipopt solver as you have shown in the examples to generate a code for the s-function like mentioned in this link: web.casadi.org/blog/mpc-simulink2/ but I get errors. Do you think its related to my default chosen compiler: Microsoft Windows SDK 7.1 (C)? Should I use MinGW for code generation?
      Also, is it better to use optistack in CasADi for MPC problems with osqp solver for code generation that will be used in S-functions?
      Lastly, in your explanation, do you mean pre-computing the g vector throughout all the steps of the simulation and using the appropriate ones when that exact time-step of the simulation arrives? My problem is a trajectory tracking problem where I get different reference values at each time step of the simulation. I believe CasADi only supports code generation of functions with symbolic inputs. Regarding this, assuming my code generation works, how can I make the S-function recognize the simulation time step so that it can choose the right g values (from the pre-computed vector) considering that it can't get any other inputs than symbols?
      Again, I apologize for asking these many questions. I hope you can provide me suggestions. Thank you for your attention, doctor.

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому

      @@batuozmeteler8405 Hello, no problem! Concerning the first part of your question, CasADi doesn't support code generation for IPOPT so far. You can see how code generation works and which solvers are supported here: ua-cam.com/video/JI-AyLv68Xs/v-deo.html
      For the second part of your question, did you check my tutorial on trajectory tracking? There I show how to pass a time-varying reference to CasADi.

    • @batuozmeteler8405
      @batuozmeteler8405 4 роки тому

      @@MohamedWMehrez Thank you again Dr. Mehrez, everything is much clearer now.

  • @TheCrowl33t
    @TheCrowl33t Рік тому +1

    Hi! I'm trying to implement a neural-network-based MPC (feedforward in this case) using the multiple shooting method, and as you may know such a network must be fed some number of previous states to predict the next state correctly. In my case, I'm using 4 previous state samples to predict the next state, as in x(k+1) = NN(x(k), x(k-1), x(k-2), x(k-3), u(k)), but then I encounter the following issue that I can't wrap my head around (I've already implemented both linear and nonlinear model MPCs in CasADi/Simulink and they worked like a charm). The code looks something like this:
    % "Lift" initial conditions
    X0 = MX.sym('X0', 4);
    w = {w{:}, X0};
    lbw = [lbw; 0; 0; 0; 0]; %
    ubw = [ubw; 0; 0; 0; 0]; %
    w0 = [w0; 0; 0; 0; 0]; %
    % I'm lifting 4 initial states for prediction: x0(k), x0(k-1), x0(k-2), x0(k-3)
    % Formulate the NLP
    Xk = X0;
    for k=0:N-1
    % New NLP variable for the control
    Uk = MX.sym(['U_' num2str(k)]);
    w = {w{:}, Uk};
    lbw = [lbw; -3];
    ubw = [ubw; 3];
    w0 = [w0; 0];

    % Integrate till the end of the interval
    Fk = F('x0', Xk, 'p', Uk);
    Do I then shift the previous states (k -> k-1, k-1 -> k-2, etc.) and add 3 more variables to the vector Xk so I could formulate the vector g? How would I put constraints on those states?
    I know that this code might be miles off of what I need to do but please help if you can.
    Thank you in advance. You're doing God's work with these tutorials!

    • @MohamedWMehrez
      @MohamedWMehrez  Рік тому +1

      Hi, You can extend the initial condition part of the P vector to the size of the 4 initial states combined (or reshaped into a single vector). Then, in your optimization problem setup, you can break P down to your 4 initial states and use them in your NN. You can start with the same value for the 4 initial state, then you update them as your simulation runs.

  • @ahmedwael3564
    @ahmedwael3564 2 роки тому +1

    Thank you, Prof. Mohamed. I have a question: how can I implement such code on a real robot?

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому

      Hi Ahmed, you just need to know how to interface with your robot to send the velocity commands and also get the robot's pose x0 which you will use as your feedback.

  • @hasinabrar3263
    @hasinabrar3263 Рік тому +1

    A fantastic tutorial. I am a bit confused about multiple shooting, though: why is it more accurate than single shooting if both methods include the dynamics of the system in the OCP?

    • @MohamedWMehrez
      @MohamedWMehrez  Рік тому +1

      Thank you for your comment. It is because of how the dynamics are represented in the OCP. In single shooting, x and u are connected via forward propagation of the system model inside the cost function (over the prediction horizon), while in multiple shooting x and u are connected via constraints. In the first case, nonlinearities accumulate over the forward propagation; that does not happen in the second method.

  •  4 роки тому +2

    Hello Professor Mehrez!
    First I would like to thank you for this amazing and well-explained video. I have to confess I'm not a student of yours; I found out about your work by searching the internet. I have to say, it's impressive.
    I've understood almost everything except the multiple-shooting system-model explanation at 1:26:30. I get the concept of how it works (2 sets of optimization variables); I just don't understand the model part. Why does the difference between the actual state and the predicted state have to be 0? Can you please explain it to me in another way?
    Thank you for your time.

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому

      Thank you, Adrian, for your comment. I am glad that the workshop was helpful. In multiple shooting, we treat both X and U as optimization variables, in contrast to only U in the single-shooting case. Remember that in the single-shooting case we linked the values of the prediction matrix X to the optimization variable U via recursive substitution using the system model, starting from the initial condition of x, which was embedded in the parameters vector P. Since in multiple shooting X is already an optimization variable, there is no need for such a recursive substitution; instead, we impose an equality constraint at each prediction (shooting) step which links a state in X, a control in U, and the immediate successor of that state in X. Therefore, we add the constraint
      st_next - st_next_euler = 0
      This constraint ensures that each state in the optimization variable X is correctly linked with its neighbours and the corresponding control action; and this relation is simply the system model.
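
      A compact sketch of that construction, using the variable names from the workshop code (X is 3 x (N+1), U is 2 x N, f is the CasADi Function of the model):

        g  = [];
        st = X(:,1);
        g  = [g; st - P(1:3)];                                     % initial condition constraint
        for k = 1:N
            st  = X(:,k);   con = U(:,k);
            obj = obj + (st-P(4:6))'*Q*(st-P(4:6)) + con'*R*con;   % running cost
            st_next       = X(:,k+1);                              % next state as a decision variable
            st_next_euler = st + T*f(st, con);                     % model prediction from the current node
            g = [g; st_next - st_next_euler];                      % equality: force them to coincide
        end
        % with lbg = ubg = 0 for all rows of g, the decision variables obey the system model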

    •  4 роки тому +1

      @@MohamedWMehrez Looks like my thank-you didn't get through two days ago, so thank you so much for the explanation. May I refer to your work in my BSc thesis?

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому

      @ Of course! Please also cite any of my related articles here: scholar.google.com/citations?user=aRNeH4QAAAAJ&hl=en

  • @niushatabrizi9473
    @niushatabrizi9473 Рік тому +1

    Hello Dr. Mehrez,
    Thank you for sharing this amazing workshop; it was so useful.
    I have a couple of questions about it.
    Can we use CasADi for controlling articulated vehicles too? If so, can we use your code by changing the number of states and the (rhs)?
    And the last question: what should I change in the shift function? (I just got an error in the shift function.)
    It would be really helpful if you could support me on this.

    • @MohamedWMehrez
      @MohamedWMehrez  Рік тому

      Hi, thank you for your comment. Yes, you can use the same code for a different system as you described. Make sure that you modify the code to consider the new system correctly at all parts of the code including the shift function.

  • @sokhengdin1657
    @sokhengdin1657 2 роки тому +1

    Your lecture videos are really great and useful for practicing NMPC with CasADi, but I am confused about how you close the loop using CasADi. I have tried to find a class for it, but I don't see where the closed loop is implemented. Can you tell me where it is, or how to implement the closed loop in CasADi? Anyway, thank you so much for your videos.
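
    The closed loop is not a CasADi class; it is the plain MATLAB simulation loop wrapped around the solver call. A rough skeleton, assuming the names used in the workshop code:

      while norm(x0 - xs) > 1e-2 && mpciter < sim_time/T
          args.p  = [x0; xs];                                          % current state + reference
          args.x0 = [reshape(X0', 3*(N+1), 1); reshape(u0', 2*N, 1)];  % warm start
          sol = solver('x0', args.x0, 'lbx', args.lbx, 'ubx', args.ubx, ...
                       'lbg', args.lbg, 'ubg', args.ubg, 'p', args.p);
          u  = reshape(full(sol.x(3*(N+1)+1:end))', 2, N)';            % optimal control plan
          X0 = reshape(full(sol.x(1:3*(N+1)))', 3, N+1)';              % predicted state trajectory
          [t0, x0, u0] = shift(T, t0, x0, u, f);                       % apply u(1,:), advance the plant
          X0 = [X0(2:end,:); X0(end,:)];                               % shift the prediction for the next guess
          mpciter = mpciter + 1;
      end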

  • @ilkertalatcankutlucan3257
    @ilkertalatcankutlucan3257 3 роки тому +1

    Hello Dr. Mehrez, I've had a single-semester graduate-level course on optimization. Can you recommend a book on optimization, particularly on NLP with nonlinear constraints? I'd also appreciate a good book on MPC.
    Great lecture, by the way. Have a nice weekend!

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому

      Hi, thank you for your comments. Check Prof. Moritz Diehl's website for courses on optimization and books on mpc.
      www.syscop.de/people/moritz-diehl

  • @abishekchandrasekhar7
    @abishekchandrasekhar7 4 роки тому +2

    Hello! Thank you for the tutorial! I just have a question!
    What if I want the sample time for numerical solution to be different from the control sample time?
    For instance, I want the system dynamics to be sampled at 1 sec, but I want the control to be a zero order hold for 5 seconds

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому

      You're welcome. You can set the simulation loop to 1 second while sampling the mpc formulation at 5 seconds.

  • @TheIRAmike
    @TheIRAmike 3 роки тому

    Hello Dr. Mehrez, this lecture playlist was very helpful, as I will probably use Casadi for my thesis due to lack of MPC Toolbox license.
    Do you know if Casadi supports the use of symbolics in the upper/lower bound definition of the constraints? Specifically altering those constraints in between solver iterations, based on parts of the predicted state.
    Additionally, is there a straightforward way for adding RCS-like thrust control into the model dynamics (which are usually continuous wrt the control inputs), so that their discrete firing times can be used as the decision variables (subject to some type of symbolic constraint [ t_2 > t_1 + t_dwell ])? Sadly most of the whitepapers I'm finding for this sort of problem revolve around switching algorithms in VHF electronics and Mixed Integer Programming, not vehicle dynamics.

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому

      Hi, Thank you for your comment. for your first question, if the limits can be expressed as a function of the predicted state, then you should be able to formulate it in CasADi.
      I am not sure about your second question, but it looks like you wanna represent your control inputs by another set of timing inputs. i guess if you have the right conversion model between the two, then you should be able to formulate that and have the timing inputs as the decision variables. All the best!

  • @Zakaryaomar-of9ng
    @Zakaryaomar-of9ng Рік тому +1

    Very wonderful; God bless you.

  • @gabrielbehrendt3924
    @gabrielbehrendt3924 2 роки тому +1

    I'm interested in implementing MPC where the optimization algorithm is only allowed to complete a predetermined number of iterations. Is there a way that I can guarantee all the iterations are feasible using Ipopt?

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому

      Yes, there is. Check the IPOPT manual for such options. In the provided code I already pass the option for the number of optimization iterations, which you can set to your desired value.
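
      A sketch of how such options are passed to the solver (these IPOPT option names exist; the values are only illustrative):

        opts = struct;
        opts.ipopt.max_iter    = 100;    % cap on the number of IPOPT iterations
        opts.ipopt.print_level = 0;      % silence the solver output
        opts.print_time        = 0;
        opts.ipopt.acceptable_tol            = 1e-8;
        opts.ipopt.acceptable_obj_change_tol = 1e-6;
        solver = nlpsol('solver', 'ipopt', nlp_prob, opts);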

  • @RobinHelsing
    @RobinHelsing 2 роки тому

    Awesome video, Mohamed! I've successfully implemented a nonlinear MPC for a PMSM in Simulink, but I have one little problem: I want to minimize "delta u(k) = u(k) - u(k+1)", meaning that I want to take previous control signals into account. Any suggestions on how to do it? I'm a bit lost.

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому

      Thanks, Robin. You can simply add delta, exactly the way you defined it, to the cost function. Just square it so it is always positive, and multiply it by a weight to tune its effect on the cost function.
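
      For instance, a sketch of that extra term in the workshop-style objective loop (W_du is a hypothetical weight matrix):

        for k = 1:N-1
            du  = U(:,k+1) - U(:,k);          % control increment over one sampling period
            obj = obj + du' * W_du * du;      % quadratic penalty on the increment
        end

      If the very first increment relative to the previously applied input is also needed, that previous input can be appended to the parameter vector P and used for k = 1.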

    • @RobinHelsing
      @RobinHelsing 2 роки тому

      @@MohamedWMehrez Hmm, okay. Another question: delta_u = u(k) - u(k-1), by the way; that was a typo on my part. In the for loop, when k = 1 I need the value of u(k-1), which I was trying to get with a unit delay in Simulink, but I get a lot of errors and I don't really know where to put it. In the SX "parameter vector"? Sent you an email, btw :D

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому

      @@RobinHelsing I would do delta_u = u(k+1) - u(k)

  • @thugonomicsstark7238
    @thugonomicsstark7238 2 роки тому +2

    Incredible videos, Professor Mohamed. My question is: what if I want to try this code on my Raspberry Pi/Arduino robot, what should I change in the code? I mean, what should I set as the v and omega of my robot, and, given that I have x, y, theta from my encoders and gyro, where should I put them in the code? I hope you understand my question.

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому

      Thank you for your comment. The way you set the v and omega of your robot depends on the software-hardware interface you have. Some interfaces allow you to set v and omega directly; others, however, only allow you to set the wheel speeds. In the latter case, you will need to convert v and omega to the right and left wheel speeds and then feed those values to your robot. There should be a slide in the lecture that shows the relationship between the wheel speeds and v and omega. The x, y, and theta you get from the robot sensors should be used as the state feedback in your control loop.

    • @thugonomicsstark7238
      @thugonomicsstark7238 2 роки тому

      @@MohamedWMehrez Thanks for the answer. However, I am struggling with x, y, theta. Let's say I have the measured x, y, and theta from the robot; where do they go in the code? In the for loop that fills the big X?
      Please answer me urgently.

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому

      they should be the first entry in the big X, but remember to do that in the simulation loop.

    • @thugonomicsstark7238
      @thugonomicsstark7238 2 роки тому

      @@MohamedWMehrez Very much appreciated professor, i'll be trying that first thing tomorrow

  • @anandjv9783
    @anandjv9783 5 років тому +1

    Dear Dr. Mohamed,
    Thanks for the video series. It's extremely well explained and easy to follow. I have a couple of questions, though. It would be really helpful if you could support me on this in your spare time.
    I am trying to implement a simple trajectory planning and tracking problem for an autonomous robot with a model similar to the one you explained.
    1. Is it possible to soften the state constraints? If so, where do I add them in this design?
    2. Is it possible to add other constraints: not state constraints, but other safety constraints that limit the dynamics of the vehicle/robot to Kamm's circle, or that limit the jerk to a certain bound?
    Thanks and Regards,
    Anand

    • @MohamedWMehrez
      @MohamedWMehrez  5 років тому

      Hi Anand, sorry for my late reply.
      1- I think you may soften the constraints by relaxing the upper and lower bounds on the state.
      2- Yes, you can. You just need to formulate your constraints mathematically first, and then transfer them to CasADi syntax. Look at the code for the obstacle avoidance, in the second video, and it will give you an idea of how to do that.
      All the best,
      Mohamed

  • @diegolocatelli119
    @diegolocatelli119 Місяць тому +1

    First of all, thank you for the lesson. I am only wondering whether the obj function formulation in the MATLAB example is correct:
    for k = 1:N
        st = X(:,k); con = U(:,k);
        obj = obj + (st-P(4:6))'*Q*(st-P(4:6)) + con'*R*con; % calculate obj
    end
    I expect that we should have N-1 control actions, or at least N+1 state variables, according to the equations in the slides.

    • @MohamedWMehrez
      @MohamedWMehrez  Місяць тому

      Thank you for your comment. Good question! With k = 1:N, we have N stage terms to accumulate in the obj function. Just keep in mind that indices in MATLAB start from 1 and not from 0 as in Python.

  • @amr.a-m8350
    @amr.a-m8350 3 роки тому +1

    Good tutorial. A basic theory script on MPC would also be appreciated, if available, please.

  • @40NoNameFound-100-years-ago
    @40NoNameFound-100-years-ago 5 років тому +1

    Very good video...thanks a lot, professor

    • @MohamedWMehrez
      @MohamedWMehrez  5 років тому

      You are welcome!

    • @40NoNameFound-100-years-ago
      @40NoNameFound-100-years-ago 5 років тому

      @@MohamedWMehrez I have a question regarding the implementation using CasADi: would it be different if I used the MPC Toolbox in MATLAB alone?

    • @MohamedWMehrez
      @MohamedWMehrez  5 років тому

      In theory you should get the same results if you use the same optimization solver in both implementations, for example the interior-point method in both. However, without CasADi's symbolic framework you lose the automatic differentiation feature, which is what makes the MPC run faster.

    • @40NoNameFound-100-years-ago
      @40NoNameFound-100-years-ago 5 років тому

      Thank you, sir, for the useful tip. Will there be more videos about MPC in the near future?

    • @MohamedWMehrez
      @MohamedWMehrez  5 років тому

      Not quite sure Mohamed. Please check the last few slides of the presentation of this workshop, where I tried to give pointers to things you might do after having this introduction about MPC and MHE. In any case, let me know if you have any question in the future.

  • @abdullaheraslan7905
    @abdullaheraslan7905 Рік тому +1

    Thanks for this perfect tutorial. I have a question. At the beginning of a simulation or real-time experiment, the solver needs more time to find the optimum. For simulation this is not a problem; however, it is a problem for real-time experiments. In my case I have a 20 ms sampling period for my project. At the beginning of the simulation the solver needs more than 20 ms (I measured this with the tic/toc commands in MATLAB); for the following samples this time drops to 4 ms. In real time this is a problem. Do you know any solution to it? I tried reducing the prediction horizon, but it did not work. Thanks in advance...

    • @MohamedWMehrez
      @MohamedWMehrez  Рік тому

      Hi, for the very first control action you can simply wait until the first control command is ready and then start the control loop. Also, for the first control action you can reduce the number of iterations or increase the optimization tolerance so that the first optimization step is less computationally demanding; however, this will give a relatively inaccurate first control command, which should improve as you continue to the following optimization steps.

    • @abdullaheraslan7905
      @abdullaheraslan7905 Рік тому

      @@MohamedWMehrez Thanks for your answer :)

  • @user-is7cw8us7b
    @user-is7cw8us7b 2 роки тому +1

    Hi, I've been watching your MPC videos; they are very nicely explained. However, I have one question:
    If one wanted to use a time-varying scalar weight in the objective function, measured in real time at every time step, instead of fixed weight matrices, would it still be possible to use a multiple-shooting approach similar to the one in your videos using CasADi?
    As an example of how I would implement this, based on the understanding I've gotten from the videos:
    1. Define a symbol for the scalar weights, w = SX.sym('W',N);
    2. Build the objective function as usual over the N time steps of the prediction horizon, with w now included in it.
    3. Then, in real time, besides sending the current state to the solver, also send the measured scalar w.

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому +1

      Hi, thank you for your comment. You can simply add parameters for the weights that vary at every optimization step, and you feed them to the solver the same way you do with the state feedback.
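
      A sketch of that, assuming the workshop variable names and one scalar weight per prediction step appended to P:

        P = SX.sym('P', 6 + N);                        % [x0; xs; w_1 ... w_N]
        for k = 1:N
            st = X(:,k);  con = U(:,k);
            obj = obj + P(6+k)*(st-P(4:6))'*Q*(st-P(4:6)) + con'*R*con;  % weight measured for step k
        end
        % at run time:  args.p = [x0; xs; w_measured(:)];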

  • @DanHe0077
    @DanHe0077 9 місяців тому +1

    MPC starts at 35:15
    MPC Single-Shooting Matlab implementation starts at 48:16
    MPC Multiple-Shooting theory starts at 1:21:53

  • @啸宇周
    @啸宇周 2 роки тому

    Thanks a lot, Dr. Mohamed!
    I have a question. I followed the lecture and tried the MPC technique and multiple shooting on my model of an omnidirectional car.
    I take the position (x, y, theta) as the states and the velocities (vx, vy, omega) as the controls.
    As you mentioned in the video, besides constraints on the bounds of the controls, we can also set constraints like g = [g; U(:,k)-U(:,k-1)] to constrain the acceleration. So, for example, I want to satisfy abs(ax)

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому +1

      Hi, thank you for your comment. I guess you need to change your g to
      g = [g; U(:,k)-U(:,k-1)]
      Keep in mind that you want to constrain the change in the control. Please note that bounding the acceleration has no effect on the bounds on the velocity; you can have both limits added.

  • @AdityaChaudhary-oo7pr
    @AdityaChaudhary-oo7pr 2 роки тому

    Really very exhaustive for beginner level

  • @mengli1392
    @mengli1392 3 роки тому +1

    Dear professor, thanks for your tutorial and code. It is indeed excellent work! Still, I have a question: IPOPT can only be applied to convex optimization problems, while the optimization problems in your video might not be convex (e.g., MHE, vehicle trajectory tracking). For example, the quadratic cost J = x'Qx + u'Ru is convex, while the related constraints in multiple shooting, st = st + T*f(st, con), might not be. Do you have any suggestions on how to prove the convexity of this problem?

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому

      Hi Meng, Thank you for your comments. Actually, IPOPT can be used for optimal control problems that have non-convex objective or constraints. Check this course if you would like to study more about numerical optimization www.syscop.de/teaching/ws2020/numerical-optimization

    • @mengli1392
      @mengli1392 3 роки тому

      @@MohamedWMehrez I got it. Thanks for your prompt reply and you solved my problem well!

  • @anggi1430
    @anggi1430 4 роки тому +1

    Thank you, prof, for sharing. I have 3 questions about this lesson:
    1. Can your program be changed in the time-step sequence to run MHE first and then MPC? I mean, like the reverse of your program. If it can be done like that, how do I change it?
    2. What if your program were made deterministic, without noise?
    3. Where do you get the measurement data?
    Thanks a lot, prof.

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому

      Thanks, Anggi.
      1- I presented the two methods separately and not in series. You will need to integrate the two methods, MHE and MPC, so that you do the estimation first via MHE and use it as the feedback for MPC.
      2- Normally, when you do state estimation, you need to show how good your estimator is against noise. Of course, you can remove the noise from the provided code and it will still work.
      3- Here, I am synthesizing the measurements by calculating the range and bearing and then adding noise to them. I provide more details on that in the MHE part.
      All the best,
      Mohamed

    • @anggi1430
      @anggi1430 4 роки тому +1

      @@MohamedWMehrez Thank you, prof; that helps my final project a lot.

    • @anggi1430
      @anggi1430 4 роки тому

      @@MohamedWMehrez Sorry, prof, I have another question.
      What needs to be changed in your source code if the model is linear?
      Thanks a lot

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому +1

      You can apply the same method if the system is linear. Remember that the nonlinear case is the general case 😃
      Nonetheless, you can use a simpler way to integrate your system, because linear systems have closed-form solutions for their integration.
      Finally, for both linear and nonlinear systems you can use a more accurate integration scheme like Runge-Kutta. I am planning to make a tutorial on this using CasADi. Stay tuned 😊
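
      For reference, a sketch of an RK4 prediction step in place of the Euler step, inside the workshop-style shooting loop (f is the same continuous-time model Function):

        k1 = f(st,          con);
        k2 = f(st + T/2*k1, con);
        k3 = f(st + T/2*k2, con);
        k4 = f(st + T*k3,   con);
        st_next_rk4 = st + T/6*(k1 + 2*k2 + 2*k3 + k4);   % 4th-order Runge-Kutta prediction
        g = [g; X(:,k+1) - st_next_rk4];                  % used in the shooting constraint as before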

    • @anggi1430
      @anggi1430 4 роки тому

      @@MohamedWMehrez Thanks for your answer, prof... I'm eagerly waiting for your next tutorial ^-^

  • @welidbenchouche
    @welidbenchouche 2 роки тому

    Hello professor, I wish to use the code on a real robot. Can you please point me to a book, a video, or anything? Where do I start?

  • @mauri1406
    @mauri1406 5 місяців тому

    Hi, can you suggest a way to implement time-varying disturbances in the system equations? (I don't want them to be variables to be optimized, just something like Gaussian noise.)

  • @jaydippokiya925
    @jaydippokiya925 4 роки тому +1

    Hello!! Thank you for the tutorial.
    I have a problem: if I run motivation example 1, at the execution of the code line
    x = SX.sym('w'); it shows an error saying the casadi MEX function does not exist in GenSX.

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому +1

      Hi Jaydip, just make sure that the path to the CasADi package is set up on your local computer.

  • @bashhau
    @bashhau 4 роки тому

    Thanks, Mohamed. I've adapted your code and have successfully implemented multiple-shooting MPC tracking, but I was just wondering: in single shooting, since we only use the control as the optimization variable, can we remove the states from the cost function, i.e., set the matrix Q = 0?

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому

      Hi Bashir, then what will be your objective? When you use single shooting, the state is explicitly a function of the control. So, if you put Q= 0, then your objective is to only minimize the control and in this case the optimal solution will be u = 0.

    • @bashhau
      @bashhau 4 роки тому

      @@MohamedWMehrez Ohh yah now I get the picture. Sorry for the silly question :)

  • @emalumiere6753
    @emalumiere6753 4 роки тому +1

    Hello!! Thank you for the tutorial.
    Can I use this code to control the position of a 2-DOF arm (using its dynamic equations) if I change the model of the system???

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому +1

      Thank you. Of course you can.

    • @emalumiere6753
      @emalumiere6753 4 роки тому

      I use a model of the arm from the ADAMS software (co-simulation ADAMS/MATLAB-Simulink), so I don't know what I should do to use an MPC controller with this model. Can you help me?

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому +1

      If you can import the model from ADAMS and include it in your MPC code, that would be how you adapt the provided code to your problem. I don't have much experience with CasADi and Simulink; nonetheless, you can find tutorials about that on CasADi's website here: web.casadi.org/blog/

  • @hari5357
    @hari5357 3 роки тому +1

    Hello Dr. Mehrez,
    Thank you so much for this series.
    I understand that higher weights reduce the corresponding error and vice versa, but how do I find the exact values of the weighting factors?

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому +1

      Hi Hari, thank you for your comment. Designing the weighting matrices depends on many factors, e.g. stability, performance, etc. If you look up questions like "tuning MPC parameters" on ResearchGate, you will find a lot of guidelines.

  • @ahmeddesoky971
    @ahmeddesoky971 5 років тому +1

    Thank you, Dr. Mohamed, for your valuable lecture.
    I have one question:
    the real control inputs to the robot are the rotational speeds of the left and right wheels.
    So how do we map these wheel rotational speeds to the v and omega that you used?

    • @MohamedWMehrez
      @MohamedWMehrez  5 років тому +1

      You're welcome, Ahmed. There is a transformation between the body speeds (v and omega) and the wheel speeds. This transformation is already mentioned in the slides; see slide 28.
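
      For completeness, a sketch using the standard differential-drive relations (r is the wheel radius and L the distance between the wheels; use your own robot's values):

        % body speeds from wheel speeds
        v     = r*(w_r + w_l)/2;          % linear speed
        omega = r*(w_r - w_l)/L;          % angular speed
        % inverse mapping, to send commands to the wheels
        w_r = (2*v + omega*L)/(2*r);
        w_l = (2*v - omega*L)/(2*r);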

    • @ahmeddesoky971
      @ahmeddesoky971 5 років тому

      @@MohamedWMehrez Thank you.
      I am proud of you, and proud also that you are Egyptian.
      I am asking how to put this mapping into the simulation code.
      It would be useful for implementing it on an experimental platform where the command is a voltage that directly controls the wheel speeds.

  • @nitinsharma1449
    @nitinsharma1449 5 років тому +1

    Thank you so much, professor, for this wonderful lecture on MPC. I would like to ask you about a sample MATLAB example provided in the mpctools folder. I am referring to the Van der Pol oscillator (titled vdposcillator) and struggling to understand what the problem statement is and what the code is trying to achieve in this example. Can you please explain or provide a brief overview of this example?
    Thanks,
    Nitin

    • @MohamedWMehrez
      @MohamedWMehrez  5 років тому

      Hi Nitin, Thank you! unfortunately, I am not so familiar with the mpctools package.

    • @nitinsharma1449
      @nitinsharma1449 5 років тому

      @@MohamedWMehrez No Problem Professor.

  • @alisayed2665
    @alisayed2665 4 роки тому

    Dear professor, thank you so much for this amazing workshop/tutorial. I am new to the fields of optimal control and optimization in general, so please excuse me if the answer to my question is considered obvious or simple. Is the optimal control problem formulated in this video a convex optimization problem? And, based on that, is the solution of this optimal control problem the globally optimal solution or just a local minimum?

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому +1

      Hi Ali, thank you for your comment. Convexity of the optimal control problem depends on that of the objective function as well as the constraints. For linear systems, MPC problem can be convex; however, for nonlinear systems, the resulting MPC problem are not necessarily convex especially when nonconvex set of constraints are used for states (e.g. the obstacle avoidance case) or controls.

  • @leonardoanicio
    @leonardoanicio 5 років тому +1

    Thank you Dr. Mohammed. I'd like to ask you one question, how could I make the robot from the example move to a desired point (x,y) and let the final orientation free?

    • @MohamedWMehrez
      @MohamedWMehrez  5 років тому +1

      You are welcome! In the given code, you can simply set the weight that penalizes the orientation deviation in the running cost to zero.

  • @lunazhang5853
    @lunazhang5853 5 років тому +1

    Impressive! Thank you, professor. I have a question: can the code you shared also be used for a vehicle dynamics model (with more complicated differential equations)?

    • @MohamedWMehrez
      @MohamedWMehrez  5 років тому

      You're welcome! Absolutely, you can adapt the code to your model.

    • @lunazhang5853
      @lunazhang5853 5 років тому

      @@MohamedWMehrez Thank you for replying again.

    • @lunazhang5853
      @lunazhang5853 5 років тому

      @@MohamedWMehrez Hello professor, one more question. I notice that the state vector of your state-space equation can be used directly in the objective function, but if it needs to be integrated before being used in the objective function, should I define another "state vector"? Thank you!

    • @MohamedWMehrez
      @MohamedWMehrez  5 років тому

      @@lunazhang5853 I am a bit confused of why you would need an integration of the state vector to be penalized in the cost function. In any case, you certainly can define new states and penalize them in the MPC cost function.

    • @lunazhang5853
      @lunazhang5853 5 років тому

      @@MohamedWMehrez Thank you for your response! For example, my state vector contains the longitudinal and lateral velocities, but I want to use the X, Y position in the cost function. I mean, if I define a new state vector (x, y, yaw_angle), then the state lower/upper bounds will relate to this new state vector, while my state-space vector is (Vx, Vy, yaw_rate, yaw_angle); won't it then lack the velocity lower/upper bounds?

  • @antonete125
    @antonete125 3 роки тому

    In order to get a larger prediction horizon: do you think it is possible for my MPC controller to run at, for example, 100 Hz while internally the predictions are performed at, for example, 10 Hz? Thanks in advance

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому +1

      That should work, I guess, but I believe the performance won't be as good. Just keep in mind that in that case the optimal control sequence will be computed for a mismatched frequency, and that would degrade the performance.

  • @alamshakil5414
    @alamshakil5414 8 місяців тому

    Hello Sir, I need help with MPC and obstacle avoidance. My master's thesis is on nonlinear MPC. How can I contact you, please?

  • @abdurrahimemin1371
    @abdurrahimemin1371 5 років тому

    Great video and excellent explanation! I have a question, though. What if the system we're trying to control has some nonlinear inequality constraints on the inputs, where should we define those?

    • @MohamedWMehrez
      @MohamedWMehrez  5 років тому +1

      Check the beginning of the second video in the lectures series, and you should get an idea on how to do that.

  • @sarasijbanerjee804
    @sarasijbanerjee804 4 роки тому

    Grateful for the tremendous lecture. I had a question: while discretizing the differential equations, what if the system has on the order of 100 equations? How can the RHS be defined in a loop? Indexing creates an issue every time I try defining the system of DEs as rhs(i) = f(x, y, t), where rhs refers to the vector of derivatives. Thank you!

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому

      Thanks, Sarasij; your question is quite interesting. What kind of issue do you have with the indexing? If you put as many equations as you want into rhs, as defined in the code, that shouldn't be an issue. By the way, since you have a large number of equations, I suggest that you use the Runge-Kutta method for discretization. There is a video in this playlist that shows you how. Sorry for my late reply.

    • @sarasijbanerjee804
      @sarasijbanerjee804 4 роки тому

      Dear Dr. Mehrez, I shall definitely follow the video on Runge-Kutta.
      Meanwhile, I have solved this issue. I was wondering what to do if we have multiple objective functions: can we define them in an array, like obj(1), obj(2), ..., obj(100)? When I defined obj as SX.sym('obj',100,1) the issue was resolved. But that was it; I had to define it explicitly. Anyway, thank you for your reply. The videos are exceptionally helpful.

  • @SbAlmagro
    @SbAlmagro 4 роки тому

    @Mohamed W. Mehrez I have a question regarding the ROS implementation.
    I read that it is not possible to generate code while using the Ipopt solver. groups.google.com/forum/#!searchin/casadi-users/simulink|sort:date/casadi-users/rnBC_a2WULk/slpyu-QGAgAJ
    Did you use Acados instead, or OpEn, or a different solver (SQPMethod + QRQP)?
    My final goal is to have the MPC running on an embedded system (ROS-based).
    Thank you very much for your kind help.

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому

      Hi Sebastian, I haven't used Acados before, but it's the new version of ACADO, which I used in my PhD work. I remember that I was using a solver called qpOASES.
      If you run ROS on an on-board computer, you don't need embedded programming. You can simply port the provided code to Python and integrate it with ROS. This is what we did in this paper:
      www.sciencedirect.com/science/article/abs/pii/S0921889019306232

  • @muhammadhd8407
    @muhammadhd8407 2 роки тому

    Greetings @Mohamed W. Mehrez. Assume we need to apply different constraint values on x and y; for example, for the coordinate x I need to set the map range to [-20, 20], and for y different values. How can I do that in args.lbg and args.ubg?

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому

      This can be done by adding those values to the bounds: just replace the numbers in the code with the numbers you want.
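
      For example, following the interleaved state ordering [x; y; theta] of the multiple-shooting code, per-state bounds could look roughly like the sketch below (in the single-shooting version the same pattern goes into args.lbg/args.ubg instead, since the states are part of g there):

        args.lbx(1:3:3*(N+1),1) = -20;   % x lower bound
        args.ubx(1:3:3*(N+1),1) =  20;   % x upper bound
        args.lbx(2:3:3*(N+1),1) = -5;    % y lower bound (a different range)
        args.ubx(2:3:3*(N+1),1) =  5;    % y upper bound
        args.lbx(3:3:3*(N+1),1) = -inf;  % theta unbounded
        args.ubx(3:3:3*(N+1),1) =  inf;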

  • @jeffb3761
    @jeffb3761 2 роки тому

    Hello! I have a perhaps arbitrary question: how does the solver deal with the model (function, states, rhs) of the system? I am working on some models of my own, but I noticed that when you take the control outputs of, for example, your multiple-shooting MPC code and iterate them with timestep T (like your shift function), you get quite different states compared to the states Ipopt itself outputs every timestep. Does CasADi/Ipopt calculate these states as part of a continuous ODE, and if so, is it possible to change how it handles that?

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому

      Hello, I believe that those differences are due to the numerical tolerances the solver uses. You can check the manual for that.
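
      If needed, the tolerances can be tightened through the solver options when creating the NLP solver. These are standard Ipopt options; the values below are only examples, and nlp_prob is assumed to be defined as in the workshop code.

        opts = struct;
        opts.ipopt.max_iter                  = 2000;
        opts.ipopt.print_level               = 0;
        opts.ipopt.tol                       = 1e-10;  % tighter overall tolerance
        opts.ipopt.acceptable_tol            = 1e-8;
        opts.ipopt.acceptable_obj_change_tol = 1e-6;
        solver = nlpsol('solver', 'ipopt', nlp_prob, opts);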

  • @khaliddandago3436
    @khaliddandago3436 2 роки тому

    Hello. Is the Draw_MPC_stabilization_v1 function a built-in CasADi function, or did you write it?

  • @prajwalthakur8462
    @prajwalthakur8462 4 роки тому

    @mohamed Sir, thank you for such a nice workshop.
    I am working on the longitudinal flight-control model, which is of course a nonlinear model, and I want to use MPC to control the altitude of the plane. I have a few questions regarding this:
    a. Can we use the method outlined here to control the plane (of course, by changing the design parameters)?
    b. I have the linearized state-space model at a "steady-state point", say for a forward speed of 40 m/s and no tilt.
    How could I use this state-space model in the MPC instead of explicitly defining the input-output relation as you did?
    I am assuming I have to compute the state-space matrices at each time step.
    Do you have any reference where I can learn how to solve such a nonlinear system?
    c. Is the method you have shown here a nonlinear MPC controller?
    Thank you for your help.

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому +2

      Hi Prajwal, thank you:
      a- Of course you can.
      b- If you have the linear model in the form xdot = Ax + Bu, remember that this is a special case of the nonlinear form xdot = f(x,u). You can make A and B parameters that change every time step.
      c- Yes, it is a nonlinear MPC.
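
      A rough sketch of that idea for a hypothetical 2-state, 1-input model: the entries of A and B are passed in as parameters and reshaped before use (here the parameter vector holds only the model matrices; in the workshop code they would be appended to P).

        import casadi.*
        nx = 2;  nu = 1;
        x  = SX.sym('x', nx);   u = SX.sym('u', nu);
        p  = SX.sym('p', nx*nx + nx*nu);              % [vec(A); vec(B)], updated each time step
        A  = reshape(p(1:nx*nx), nx, nx);
        B  = reshape(p(nx*nx+1 : nx*nx+nx*nu), nx, nu);
        rhs = A*x + B*u;                              % xdot = A x + B u as a special case of f(x,u)
        f   = Function('f', {x, u, p}, {rhs});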

    • @prajwalthakur8462
      @prajwalthakur8462 4 роки тому

      @@MohamedWMehrez Thank you for clearing my doubt. I have another question related to the above: suppose I have a model of the form
      Xdot = AX + BU,
      Y = CX,
      and I want to track the "Y" signal.
      I do not need to track all the states, only a few of them, or maybe a function of a few states.
      1. How do I define the model constraints now in CasADi? I mean, how do I define the constraints for
      a) X(k+1) = AX(k) + BU(k) and b) y(k+1) = C*X(k)?
      Thank you for your help.

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому +1

      You can define it the exact same way you define the constraints for X. You just need to implement the output equation y = Cx and add y to the constraints vector g. Then you specify proper upper and lower bounds for these new constraints.
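
      For instance, a sketch assuming the multiple-shooting code with predicted states X(:,k) and a hypothetical output matrix C for a four-state model:

        C = [1 0 0 0];                   % hypothetical output matrix, y = C*x
        for k = 1:N+1
            g = [g ; C*X(:,k)];          % append the output at every node
        end
        % then bound these new rows of g, e.g. y_min <= y <= y_max:
        % args.lbg = [args.lbg , y_min*ones(1,N+1)];
        % args.ubg = [args.ubg , y_max*ones(1,N+1)];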

  • @ahmedwael3564
    @ahmedwael3564 2 роки тому +1

    KING

  • @selvarajp1703
    @selvarajp1703 Рік тому

    Sir, your lecture is really amazing and very helpful for understanding new concepts easily. However, I have one question: in this case, the control signal is updated every T = 0.2 s, yet from the graph the states seem to be continuous. What step size can I choose for the state? If I choose h = 0.01, is the update x(k+1) = x(k) + h*f(x(k))? Please clarify my doubt.

    • @MohamedWMehrez
      @MohamedWMehrez  Рік тому

      Hi, thank you for your comment. Your observation is correct; the update model doesn't change when you change the step size.

    • @selvarajp1703
      @selvarajp1703 Рік тому

      @MohamedWMehrez Thank you very much, Sir. I have one more doubt: is u^ref = [0; 0], or something else? Also, how can I select the objective weighting matrices wisely? Is there any systematic way? If so, please suggest a reference book or material to understand it.

  • @abababab8193
    @abababab8193 2 роки тому

    Thank you, professor, for this lecture. I would like to ask you about an error that I get: "Conversion to logical from casadi.MX is not possible". Is there any solution to fix this?

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому

      Hi, to be honest I am not sure what the reason could be. Just make sure that you are following the types used in the code.

  • @subhankhan6180
    @subhankhan6180 5 років тому +1

    Thank you for this excellent session, Dr. Mohamed. I have a few questions: How can I convert this point-stabilization problem into a trajectory-tracking one? Secondly, is there any way to make the obstacles dynamic? Waiting for your kind reply. Thanks in advance :)

    • @MohamedWMehrez
      @MohamedWMehrez  5 років тому +1

      Hi Subhan, yes, you can modify the code to consider trajectory tracking and moving obstacles. For trajectory tracking, you would have more parameters in the parameters vector P, i.e., you would have as many reference states in this vector as your prediction horizon length. Note that point stabilization is a special case of trajectory tracking in which all the reference states simply have the same value. Then, when you construct your cost function later in the code, you subtract the corresponding reference state from the predicted state.
      For the moving-obstacle case, you would also have more parameters that tell the optimizer the predicted locations of the obstacle. Then you integrate these locations into the constraints vector g.
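
      As a sketch of the moving-obstacle part, assuming the predicted obstacle positions are appended to P as N+1 pairs (x_obs, y_obs) after the first six entries, with rob_diam and obs_diam denoting the robot and obstacle diameters:

        for k = 1:N+1
            x_obs = P(6 + 2*(k-1) + 1);
            y_obs = P(6 + 2*(k-1) + 2);
            % keep the robot outside the inflated obstacle at every predicted step
            g = [g ; -sqrt((X(1,k)-x_obs)^2 + (X(2,k)-y_obs)^2) + (rob_diam/2 + obs_diam/2)];
        end
        % bound these new entries with  -inf <= g <= 0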

    • @subhankhan6180
      @subhankhan6180 5 років тому

      @@MohamedWMehrez Thanks for your kind reply. Highly appreciated :)

    • @nitinsharma1449
      @nitinsharma1449 5 років тому

      Hey Subhan! Did you have any success converting the point-stabilization problem into trajectory tracking?

    • @nitinsharma1449
      @nitinsharma1449 5 років тому

      @@MohamedWMehrez Dear Professor, for trajectory tracking, suppose my prediction horizon is 3 and, as we know, there are also 3 states; then my P vector will be a 3x6 matrix, and the last 3 columns of the P matrix will be updated in each iteration to get the corresponding reference states. Am I correct? Please correct me if I am wrong.

    • @MohamedWMehrez
      @MohamedWMehrez  5 років тому

      @@nitinsharma1449 Hi Nitin, it depends on what your running costs look like. Normally, in a trajectory-tracking problem, you penalize (in the running costs) the deviation between the state prediction and its reference as well as the deviation between the control prediction and its reference. In the special case of point stabilization, these references (for states and controls) are stationary; in a tracking problem, they are dynamic. Therefore, you will have as many parameters as (N*nx + N*nu), where N is the prediction horizon length, nx is the number of states, and nu is the number of control inputs. These parameters then need to be included in the creation of the "obj" variable in the code.
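
      A sketch of that running cost for the three-state/two-input example, assuming P stacks the initial state followed by one state reference and one control reference per step (length 3 + N*(3+2)), and with Q, R, T, f as in the workshop code; see Part 3 of the workshop for the full version.

        obj = 0;
        g   = [X(:,1) - P(1:3)];                        % initial-condition constraint
        for k = 1:N
            st  = X(:,k);   con = U(:,k);
            x_ref = P(5*k-1 : 5*k+1);                   % reference state at step k
            u_ref = P(5*k+2 : 5*k+3);                   % reference control at step k
            obj = obj + (st - x_ref)'*Q*(st - x_ref) + (con - u_ref)'*R*(con - u_ref);
            g   = [g ; X(:,k+1) - (st + T*f(st,con))];  % Euler multiple-shooting constraint
        end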

  • @STWNoman
    @STWNoman 4 роки тому

    Sir, I am working on fuel-cell optimization. Will this be a good approach for my case, especially for high-impact-factor work?

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому

      If you can formulate your objective as a function, then why not?

  • @zamanyazbeck1136
    @zamanyazbeck1136 2 роки тому

      Dear Dr. Mohamed,
      if the r.h.s. equation includes some parameters
      (because I'm writing the dynamic, not the kinematic, model),
      how can I define those parameters?

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому

      Hi Zaman, you can simply add the values for those parameters directly in your formulation. If you want to update their values every sampling instant, you can define them as additional parameters in the parameters vector P.
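
      A minimal sketch of both options for a hypothetical second-order model: fixed numeric parameters baked into the rhs, versus parameters that can be appended to P and updated every sampling instant.

        import casadi.*
        x = SX.sym('x', 2);   u = SX.sym('u');
        % option 1: fixed numeric parameters
        m = 10;  b = 0.5;                          % hypothetical mass and damping
        rhs = [x(2) ; (u - b*x(2))/m];
        f   = Function('f', {x, u}, {rhs});
        % option 2: parameters updated every sampling instant (append them to P)
        p     = SX.sym('p', 2);                    % p = [m; b]
        rhs_p = [x(2) ; (u - p(2)*x(2))/p(1)];
        f_p   = Function('f_p', {x, u, p}, {rhs_p});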

  • @SbAlmagro
    @SbAlmagro 4 роки тому

    Dear Professor Mehrez,
    Thank you very much for such a good explanation of MPC. At minute 44:51, regarding trajectory tracking, you mention that, given a path-following problem, the controller has an extra DoF for choosing where on the path to start following it.
    Do you mind clarifying that approach?

    • @SbAlmagro
      @SbAlmagro 4 роки тому

      For example, a certain path is given by a list of poses [x, y, theta], and I would like to follow it. How could I extend the approach in *Part 3: MPC for trajectory tracking* for this? This paper is related to it: www.researchgate.net/publication/315678321_Predictive_Path_Following_of_Mobile_Robots_without_Terminal_Stabilizing_Constraints

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому +1

      Thanks Sebastian. In path following, the reference is not an explicit function of time but rather a function of an extra parameter (the path parameter) whose time evolution is not set a priori and is left for the controller to decide.
      The paper you mentioned explains exactly how the path-following problem works. You can understand the idea and the complete problem formulation by reading only Sections 1 and 2. You can also safely skip Equation (5) in that paper (it was used only for a theoretical purpose).
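
      A rough sketch of the state augmentation described there (not part of the workshop code): the path parameter s becomes an extra state driven by a virtual input v_s, and the reference pose is a hypothetical parametric path evaluated at s.

        import casadi.*
        x = SX.sym('x');  y = SX.sym('y');  theta = SX.sym('theta');
        s = SX.sym('s');                          % path parameter, an extra state
        v = SX.sym('v');  omega = SX.sym('omega');
        v_s = SX.sym('v_s');                      % virtual input for the path parameter
        states   = [x; y; theta; s];
        controls = [v; omega; v_s];
        rhs = [v*cos(theta); v*sin(theta); omega; v_s];  % s evolves as the optimizer decides
        % hypothetical path (unit circle); the running cost penalizes (x - xr) and (y - yr)
        xr = cos(s);   yr = sin(s);
        f  = Function('f', {states, controls}, {rhs});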

    • @SbAlmagro
      @SbAlmagro 4 роки тому

      Dear @@MohamedWMehrez thank you very much for your reply. I will go further into this topic in the following months.
      1. Regarding this *Part 1* example, how could I implement it in a closed loop (without MHE yet)? Should I send the solver one point at a time, or should I insert a list of poses (as a path)?
      2. Is the shift function in charge of applying one control step?
      Thank you for your kind help.

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому

      Hi Sebastian,
      1- You send a single reference if it's a point-stabilization problem, and you send a list of reference points if it is a trajectory-tracking problem. Check Part 3 of this workshop for more details on trajectory tracking.
      2- The shift function is always the same, but remember that in the case of path following you consider both the system dynamics and the path dynamics. Check the paper above for details.
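
      For reference, the shift step can be sketched as below: apply only the first control of the optimal sequence, propagate the state one sampling period, and warm-start the next solve by shifting the control plan (a sketch close in spirit to the workshop's shift function, here with an Euler update).

        function [t0, x0, u0] = shift(T, t0, x0, u, f)
        % u is the N x nu optimal control plan, f the CasADi dynamics function
        con = u(1,:)';                  % first control of the plan
        x0  = full(x0 + T*f(x0, con));  % simulate the system one step forward
        t0  = t0 + T;
        u0  = [u(2:end,:); u(end,:)];   % shift the plan, repeat the last control
        end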

  • @prajwalthakur8462
    @prajwalthakur8462 3 роки тому

    Does CasADi have the capability to use neural networks for constrained optimization? Actually, I want to use a neural network specifically for model prediction,
    i.e., x_(k+1) = f(x_k, u_k), where f() is a neural network.
    If yes, can you point me to a paper/blog/example demonstrating how we can do that? I couldn't find a good example on their official blog.
    And thanks once again for uploading such great videos!

    • @MohamedWMehrez
      @MohamedWMehrez  3 роки тому +1

      Thank you for your comment. I guess using a neural network as a prediction model is a little tricky. However, if the NN model is differentiable w.r.t. the optimization variables, then there should be no problem. I guess an NN that uses sigmoid activation functions should satisfy such a condition. I am not quite sure though. I haven't done that before, so I guess you'd better search for related resources yourself. All the best!
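
      One way that should keep the model differentiable is to write the (already trained) network directly in CasADi symbolics, e.g. a one-hidden-layer tanh network. The dimensions and the random weights below are placeholders you would replace with your trained values.

        import casadi.*
        nx = 3;  nu = 2;  nh = 16;                 % hypothetical dimensions
        x  = SX.sym('x', nx);   u = SX.sym('u', nu);
        W1 = rand(nh, nx+nu);   b1 = rand(nh, 1);  % placeholder trained weights
        W2 = rand(nx, nh);      b2 = rand(nx, 1);
        z      = [x; u];
        x_next = W2*tanh(W1*z + b1) + b2;          % x(k+1) = f_NN(x(k), u(k))
        f_nn   = Function('f_nn', {x, u}, {x_next});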

    • @prajwalthakur8462
      @prajwalthakur8462 3 роки тому

      @@MohamedWMehrez So I searched, and it is possible to include TensorFlow models!

  • @subhankhan6180
    @subhankhan6180 4 роки тому

    Hello Dr. Mohamed,
    Just a quick question regarding handling disturbances (such as vehicle slippage) using NMPC in CasADi. How would you handle random-noise or white-noise-based disturbances in the system? Any suggestions?

    • @MohamedWMehrez
      @MohamedWMehrez  4 роки тому +1

      Hi Subhan, I have no personal experience with such problems, but your lookup keywords are robust MPC and tube MPC.

  • @Qaidi_804
    @Qaidi_804 5 місяців тому

    When we don't have constraints in the MPC, does it solve a QP problem?

  • @dynamicbdg2123
    @dynamicbdg2123 3 роки тому +1

    thanks a lot

  • @muhammadhd8407
    @muhammadhd8407 2 роки тому

    What did you mean when you said at 1:13:49 "if I have a sampling rate of 10 Hz ..."? Suppose we implement the process on a microcontroller. The average computation time of one MPC iteration is 17 ms; therefore, the loop time of the whole process will be more than 17 ms. Say the microcontroller works at 10 Hz; then the loop time will be 100 ms, which is enough to compute one iteration of the MPC. So what is the relation between the sample rate of the whole system and increasing the speed of one MPC iteration? I think I am confused somewhere. Thanks in advance.

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому

      Hi, in general the sampling period (the interval at which you take a new feedback measurement) should be >= the MPC computation time. So if you compute your MPC control signal in 17 ms, you should apply it and then wait 83 ms in the loop before you take a new feedback measurement.
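
      In pseudo-MATLAB, the loop would look roughly like this (solve_mpc, apply_control and read_feedback are hypothetical placeholders for your own MPC call, actuator interface and sensor interface):

        T_s  = 0.1;                           % 10 Hz sampling period
        done = false;                         % set by your own stopping criterion
        while ~done
            t_start = tic;
            u = solve_mpc(x_measured);        % ~17 ms in the example above
            apply_control(u);
            t_comp = toc(t_start);
            pause(max(T_s - t_comp, 0));      % wait out the rest of the period
            x_measured = read_feedback();
        end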

    • @muhammadhd8407
      @muhammadhd8407 2 роки тому

      ​@@MohamedWMehrez Thank you.

  • @haoranzhang1575
    @haoranzhang1575 2 роки тому

    Hello Dr Mohamed,
    I'm trying to add terminal equality constraints to ensure stability using CasADi. However, this is not included in the example. Could you give me any recommendation on how to achieve this in the code?

    • @MohamedWMehrez
      @MohamedWMehrez  2 роки тому

      Hi, just add that constraint to the vector g for the last state in your prediction horizon.
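
      Concretely, in the multiple-shooting code that could look like the sketch below (P(4:6) being the reference state there), with zero lower and upper bounds on the three new entries of g:

        % terminal equality constraint x_N = x_ref
        g = [g ; X(:,N+1) - P(4:6)];
        % bounds for the new constraint rows (row-vector bounds assumed):
        args.lbg = [args.lbg , zeros(1,3)];
        args.ubg = [args.ubg , zeros(1,3)];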

    • @haoranzhang1575
      @haoranzhang1575 2 роки тому

      @@MohamedWMehrez Thanks for this. It is very helpful.