Linear Quadratic Regulator (LQR) Control for the Inverted Pendulum on a Cart [Control Bootcamp]

  • Published Jan 15, 2025

COMMENTS • 93

  • @osamaahmadieh4660
    @osamaahmadieh4660 4 years ago +29

    What is fascinating about this series of lectures is that you always link the abstract mathematical quantities to an actual physical interpretation. This makes understanding the concepts much easier and more familiar. Thank you for taking the time to make such helpful lectures! Looking forward to watching the data-driven control lectures. Thank you!

  • @gabrielh5105
    @gabrielh5105 4 years ago +56

    Watching this at 2 am before sleeping became a habit for me. And it has worked, I wake up in the morning with a clear mind of what you just taught me.

    • @Eigensteve
      @Eigensteve  4 years ago +2

      Very cool!

    • @klam77
      @klam77 3 years ago

      Watching this cleared my mind too. I don't remember ANYthing.

    • @georgechi877
      @georgechi877 3 years ago

      UR so powerful that the cow is on the sky.

  • @yahugh59
    @yahugh59 1 year ago +1

    I ostensibly learned all this in grad school...and then forgot it all in the intervening 15 years when I went off and worked on other things. But now that I'm back to designing control systems, this series of lectures has really helped me get the rust off my skills!

  • @chunchang7285
    @chunchang7285 3 years ago +3

    Insanely high-quality teaching!

  • @АндрейЧегуров-ц8з
    @АндрейЧегуров-ц8з 4 years ago +10

    Dear Professor, Great Job!

  • @ashegofd2080
    @ashegofd2080 3 years ago +1

    You are a legend.
    I was wondering what I was doing with control systems, poles, eigenvalues... I wasn't getting the practical examples.
    But hey, you helped me.
    Thanks a bunch.
    I hope every enthusiastic student finds a prof like you.
    Good job man 👍🏻

  • @alial-ghanimi8357
    @alial-ghanimi8357 2 years ago

    I like the way you presented the cost function. Using an intuitive explanation is the key to your unique strategy, and it amazed me the most. Well done.

  • @manhhoang1858
    @manhhoang1858 5 months ago

    Teaching a complicated problem in an easy way. Thank you, Professor!

  • @Drone.Robotics
    @Drone.Robotics 4 years ago +5

    Thank you, Sir... I have seen the whole series and it has cleared up a lot of my concepts about control theory. Your videos are just great, and your way of teaching complex things in a simple manner is admirable. Thanks again.

  • @vadimgerasimenko4904
    @vadimgerasimenko4904 3 years ago +1

    Thank you Mr. Brunton!

  • @augustogomez9372
    @augustogomez9372 3 years ago +1

    Writing words backwards on the board is quite a job! Great video, profe!

    • @sidosoft
      @sidosoft 3 years ago

      The video is mirrored ;)

  • @gayathrimenon8983
    @gayathrimenon8983 3 years ago +4

    Such a wonderful series!!! Thank you so much, professor. Just wanted to ask whether LQR's performance needs any improvement, and if so, whether combining it with another controller could bring that improvement.

  • @MrFiskur1
    @MrFiskur1 3 years ago +1

    Thank you for the video, sir! Really a good explanation of LQR. I have a question though. At 9:20 you compute the eigenvector of the most stable eigenvalue and mention that the most stabilizing directions are x_dot and theta_dot, so aggressive control on x_dot and theta_dot would really improve performance.
    So my question is: isn't it a good idea to have high values in the 2nd and 4th diagonal entries of the Q matrix, as these correspond to x_dot and theta_dot, and lower values in the 1st and 3rd entries?
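
The trade-off in this question can be sketched numerically. The snippet below is a minimal illustration, not the video's code: it uses SciPy's Riccati solver on a hypothetical double integrator (state = [position, velocity]) and shows that putting more weight on a state's Q entry produces a larger feedback gain acting on that state.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR gain K = R^{-1} B^T P via the algebraic Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Double integrator as a hypothetical stand-in: state = [position, velocity]
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
R = np.array([[1.0]])

K_light = lqr_gain(A, B, np.diag([1.0, 1.0]), R)    # equal weights
K_heavy = lqr_gain(A, B, np.diag([1.0, 100.0]), R)  # penalize velocity hard

# The velocity gain grows when its Q entry grows: the controller acts more
# aggressively on the heavily weighted state.
print(K_light, K_heavy)
```

For this plant the gains work out analytically to K = [sqrt(q1), sqrt(q2 + 2*sqrt(q1))], so the effect of raising a diagonal Q entry is directly visible in the corresponding gain.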

  • @hariranga2463
    @hariranga2463 4 years ago +5

    Hi Steve! That was a great introduction to LQR. Is there any chance you could share the MATLAB code for the inverted pendulum? I would be excited to see it work. Thanks!

    • @Eigensteve
      @Eigensteve  4 years ago +11

      Thanks! All code is available at databookuw.com under the CODE.zip link

  • @enjinia8307
    @enjinia8307 1 year ago

    Wonderful lesson!

  • @rohitn6910
    @rohitn6910 3 years ago +3

    The line where he said, "It's interesting (taking a small pause)... and it's complicated." That's the situation we are facing 😅😅😄👍🏾

  • @杨凯迪-l4o
    @杨凯迪-l4o 4 years ago +1

    Hi Steve, it's a super video on LQR, but there is one point I cannot understand: for the objective function J, why do we use x to minimize J rather than (x(t) − setpoint(t))? We want x to get close to our setpoint. @steve brunton

  • @thaboranamane4534
    @thaboranamane4534 4 years ago

    I can't help but wonder whether you're writing everything backwards behind that screen so we can see it normally. Is that the case?

  • @murat9480
    @murat9480 4 years ago +2

    How can we add disturbance and measurement noise to the Simulink model?

  • @abhinavsingh3091
    @abhinavsingh3091 3 years ago

    Thanks for the video. But I have one question totally unrelated to it: how are you writing on the board? Have you trained yourself to write mirrored?

  • @ajj7794
    @ajj7794 4 years ago +1

    Quick question: how do I change my matrices A, B, C, and D in my state-space block in Simulink so that K is taken into account? I am currently just replacing A with A−BK for my full state. It looks wrong.

  • @matthewjames7513
    @matthewjames7513 2 years ago

    Amazing video. Is there a value of R (the penalty on motor usage) for which the system can't find an ideal K (linear feedback controller)? Intuitively, I can imagine that if R were too large, the cart simply couldn't move fast enough to keep the pendulum up (in other words, lqr() couldn't make the real parts of all the eigenvalues negative).
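
For what it's worth, as long as (A, B) is controllable and Q is positive definite, LQR returns a stabilizing K for every R > 0; a huge R just makes the gain tiny and the response slow, and a real motor may then saturate even though the math still closes the loop. A small sketch of that trend, using SciPy on a hypothetical double integrator rather than the cart-pendulum:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double integrator as a stand-in plant, just to show the trend.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)

gains, slowest = [], []
for r in [0.01, 1.0, 100.0]:
    R = np.array([[r]])
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    eigs = np.linalg.eigvals(A - B @ K)
    gains.append(np.linalg.norm(K))        # gain magnitude
    slowest.append(eigs.real.max())        # slowest closed-loop mode

# The loop is stable for every R > 0 (all real parts negative), but the
# gain shrinks and the slowest pole creeps toward zero as R grows:
# gentler control, slower response.
print(gains, slowest)
```

The failure mode the comment imagines (motor physically too weak) lives outside the LQR math: it shows up as actuator saturation, which the linear design simply does not see.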

  • @comvnche
    @comvnche 2 years ago

    In my MPC lecture, eigenvalues were only said to be stable if they are

  • @cheriyanhomey4708
    @cheriyanhomey4708 4 years ago

    Thank you, sir; this video series was very helpful for getting an introductory idea of control theory. It would be very nice if you could kindly provide the sources where we can learn more about the mathematics behind these concepts.
    Thanks again for this wonderful series.

  • @mauritsrietveld732
    @mauritsrietveld732 4 years ago

    Great videos! I am wondering why the input vector is 4-dimensional. Isn't it the case that we can only control the acceleration in x? Is it just for this example to explain the LQR, or am I missing something?

  • @abcxyz4207
    @abcxyz4207 3 years ago +1

    How do I get a black background in MATLAB? :OOO

  • @kevingeraldsitanggang9736
    @kevingeraldsitanggang9736 2 years ago

    Excuse me, sir, I have some questions. In a control system using LQR, the output will be the gain matrix K with order 1x2. What I want to ask is:
    1. What are the meanings of the matrix values K11 and K12?
    2. How do I use these values of K for implementation on a DC motor?
    I have already learned how to design a DC motor control system using Simulink in MATLAB, but I couldn't implement it on the DC motor because I don't know the connection between the matrix K values and the DC motor.

  • @navsquid32
    @navsquid32 3 years ago

    If you don't have MATLAB (like me), the Python control library mimics most of the commands from MATLAB. Just be sure to do things like ensure your B matrices are shaped like (-1,1), etc.

    • @alpeshmahajan5474
      @alpeshmahajan5474 1 year ago

      Can you tell me what values I need to put in the matrices A, B, C, D?
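
A SciPy-only variant of the same workflow, for readers without MATLAB or the Python control package. The A and B below are the standard pendulum-up cart linearization with illustrative parameter values (m=1, M=5, L=2, g=-10, d=1) — assumptions for this sketch, not necessarily the exact numbers in the video — and B is deliberately kept as a (4, 1) column, per the shape advice above.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative cart-pendulum linearization about the pendulum-up fixed point.
# Parameter values are assumptions for this sketch: mass m, cart mass M,
# pendulum length L, gravity g (sign convention g = -10), cart damping d.
m, M, L, g, d = 1.0, 5.0, 2.0, -10.0, 1.0
A = np.array([[0, 1, 0, 0],
              [0, -d/M, m*g/M, 0],
              [0, 0, 0, 1],
              [0, -d/(M*L), -(m+M)*g/(M*L), 0]])
B = np.array([[0], [1/M], [0], [1/(M*L)]])  # shape (4, 1), not (4,)

Q = np.eye(4)            # state penalty
R = np.array([[0.001]])  # cheap actuation

# Solve the continuous-time algebraic Riccati equation, then K = R^{-1} B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Closed-loop eigenvalues of A - BK should all be in the left half-plane.
eigs = np.linalg.eigvals(A - B @ K)
print(K.shape, np.all(eigs.real < 0))
```

This is the same Riccati computation that MATLAB's `lqr` and python-control's `control.lqr` perform under the hood, with no dependency beyond NumPy and SciPy.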

  • @johnoche6510
    @johnoche6510 3 years ago

    This is awesome. Can you please help me with the complete MATLAB code?

  • @sakuranooka
    @sakuranooka 3 years ago

    Do the Q and R matrices correspond to the matrices of the same name in the Kalman filtering context, where Q is process noise and R is measurement noise?

  • @amr.a-m8350
    @amr.a-m8350 4 years ago

    Good video & effort. I hope you will demonstrate MPC with a real application supported by code & Simulink in MATLAB, like this one, if available.

  • @saravanana5419
    @saravanana5419 4 years ago

    Great job. Your videos help a lot. Please provide the link to the MATLAB code. Thanks.

  • @bhushankumar5317
    @bhushankumar5317 3 years ago +2

    🙏

  • @perauterion
    @perauterion 8 months ago

    amazing!

  • @francescomarata2641
    @francescomarata2641 4 years ago

    Thank you, Sir, for the clear explanation! I would like to ask just one question about the last simulation, made with R=10. I noticed that the cart does not reach the fixed point, at least as regards the first state x, which is supposed to get to x=1. My hypothesis is that, since we made actuation really expensive, the control variable is kept low and, as a consequence, the time needed to reach the fixed point is longer than 10, which is the time we set as "tspan" in the MATLAB code. Is this reasoning correct?

  • @hakansezen2019
    @hakansezen2019 4 years ago

    Sir, based on what do we change the Q matrix entries for theta and theta dot, which are 10 and 100? For aircraft stability, should I use the same Q matrix you used?

  • @wizardOfRobots
    @wizardOfRobots 4 years ago

    How would we know if our motor is so weak that it can't exert enough force to balance the cart with the given constraints? Is it given by checking the values of K in (A−BK)x after solving for it?

  • @kaierliang
    @kaierliang 4 years ago

    Hi professor, I saw that the code later uses K*(y - y_des) for a non-zero fixed point. The cost function for lqr is J = x'Sx. If we calculate K and S using lqr without any information about the fixed point, does that mean the cost J is the same regardless of which fixed point I want to stabilize?

  • @mirsadbucak2502
    @mirsadbucak2502 4 years ago +1

    Very, very good explanation. Thanks a lot. Could you please share the code? I didn't find any link on your website.

  • @HasnainRaza-yr7fl
    @HasnainRaza-yr7fl 4 years ago

    Thank you for helping us practically understand and visualize the concept.
    I had one question though: while implementing it, when I gave the system a step input, it turns out the cart doesn't go one unit, and in fact there is some steady-state error. Should it be there, and if so, can LQR alone not fix that?

    • @MonkeyGameProduction
      @MonkeyGameProduction 4 years ago

      If you want no steady-state error for a step input, I think you would need to additionally implement an integrator in the system.
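
The integrator suggestion can be sketched by augmenting the state with the integral of the tracking error and running LQR on the augmented system. The first-order plant below is a hypothetical stand-in chosen for brevity, not the cart-pendulum:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical first-order plant x_dot = -x + u; we want y = x to track a step.
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])

# Augment with z_dot = y - r (integral of the tracking error). For the LQR
# design we take r = 0; at run time the -K_z * z term keeps integrating the
# error until it is driven to zero.
A_aug = np.block([[A, np.zeros((1, 1))],
                  [C, np.zeros((1, 1))]])
B_aug = np.vstack([B, np.zeros((1, 1))])

Q = np.diag([1.0, 10.0])  # weight the error integral fairly heavily
R = np.array([[1.0]])
P = solve_continuous_are(A_aug, B_aug, Q, R)
K = np.linalg.solve(R, B_aug.T @ P)  # u = -K_x * x - K_z * z

eigs = np.linalg.eigvals(A_aug - B_aug @ K)
print(eigs.real)  # all negative: the augmented loop is stable
```

With the integral state in the loop, any constant offset accumulates in z until the controller cancels it, which is exactly why the steady-state error disappears.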

  • @ankitchatterjee5343
    @ankitchatterjee5343 4 years ago

    Sir, can you show how we can implement a data-driven LQR, i.e., when the model is not known? Which is the best method to go for?

  • @alexandermaverick9474
    @alexandermaverick9474 4 years ago +1

    How could you put in the limitations of a motor, for example? Say in a practical assignment you had a bunch of motor options; how would you approach calculating the ideal eigenvalues for each of the motors (with their speed limitations) so that you can compare price with time? I'm asking because motor prices are often not linear with their power, nor do they cover the entire spectrum of possibilities, unless you're willing to build one from scratch.

    • @Eigensteve
      @Eigensteve  4 years ago +2

      This is a really interesting question, and there are a lot of interesting offshoots. Including the real cost of hardware with their performance is more of a high-level multi-objective optimization problem. All big companies that design complex systems and control algorithms (think about a GE engine) need to perform these large optimizations to balance design and cost tradeoffs. More generally, in LQR, it is difficult to put in some types of limitations on the hardware. Model predictive control is a very flexible framework to incorporate some of these constraints directly. Video on MPC: ua-cam.com/video/YwodGM2eoy4/v-deo.html

    • @alexandermaverick9474
      @alexandermaverick9474 4 years ago +1

      @@Eigensteve That seems incredibly interesting! Model predictive control feels very close to a sliding-window algorithm in programming, where you re-evaluate your options every time you take a step. Is it right to say that both of these methods apply very well to cases where, ideally, there is a straight path between the initial position and the goal, but would break if there were a limitation stopping them from doing that?

    • @alexandermaverick9474
      @alexandermaverick9474 4 years ago +1

      @@Eigensteve Sadly, I have learned MATLAB from scratch and implemented an inverted pendulum in the last two days, so I am quite tired. But would you mind if I asked some more questions like this by email? I already have a few too many for a YouTube comment section.

    • @Eigensteve
      @Eigensteve  4 years ago +1

      @@alexandermaverick9474 Interesting question. I'm not an expert in path planning around obstacles, but I imagine that if your MPC horizon is long enough to see around the obstacle, it might still work.

    • @Eigensteve
      @Eigensteve  4 years ago +1

      @@alexandermaverick9474 No problem, although it might take me a while to respond.

  • @VictorPalamim
    @VictorPalamim 4 years ago

    Thanks, professor! I'm trying to use the MATLAB command [N,D]=ss2tf(A,B,C,D) to get the TF, but I don't know which C and D matrices to use. Maybe C = [1 0 0 0]; D = zeros(size(C,1),size(B,2))?

    • @alpeshmahajan5474
      @alpeshmahajan5474 1 year ago

      Did you get it right?
      If yes, can you help with something similar?

  • @95_Ends
    @95_Ends 2 years ago

    Best!

  • @alpeshmahajan5474
    @alpeshmahajan5474 1 year ago

    Can anybody tell me what values I need to put in the matrices A, B, C, D?

  • @Mr3kner
    @Mr3kner 4 years ago

    How do I get your MATLAB code?

  • @piotr780
    @piotr780 2 years ago

    Why is the second column of his A matrix not [1,0,0,0] but more complicated expressions?

    • @guzman8001
      @guzman8001 5 months ago

      Because he is considering drag between the cart and the floor.

  • @XuanJr.
    @XuanJr. 10 months ago

    Can you show me the code?

  • @priyankasinha5253
    @priyankasinha5253 3 years ago

    MATLAB code?

  • @pnachtwey
    @pnachtwey 2 years ago

    When in doubt, place the closed-loop poles on the negative real axis in the s-domain. If the response isn't fast enough, move the closed-loop poles so they are more negative on the negative real axis. I need to figure out how to do this in Python. LQR doesn't worry about keeping the closed-loop poles on the negative real axis because it places the closed-loop zeros close to the closed-loop poles, effectively canceling them out or reducing their effect. One thing that was mentioned is the resolution of the feedback. In simulation the feedback resolution is infinitely fine, whereas in reality it isn't. This limits how aggressive the gains can be.
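
In Python, SciPy's place_poles does exactly this kind of direct pole placement. A minimal sketch on a hypothetical double integrator (not the cart-pendulum), asking for both closed-loop poles on the negative real axis:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical double integrator; put both closed-loop poles on the
# negative real axis, further left for a faster response.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

res = place_poles(A, B, np.array([-2.0, -3.0]))
K = res.gain_matrix

# Closed-loop characteristic polynomial: s^2 + 5s + 6 = (s+2)(s+3),
# so the poles land exactly where requested.
eigs = np.sort(np.linalg.eigvals(A - B @ K).real)
print(K, eigs)  # K ~ [[6. 5.]], eigs ~ [-3. -2.]
```

To make the response faster, just pass poles further to the left (e.g. -5, -6) and recompute K.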

  • @thalpaf5069
    @thalpaf5069 3 years ago

    Is he writing backwards?

  • @LisaYoung-c2r
    @LisaYoung-c2r 4 місяці тому

    Annette Mountain

  • @3131ilyes
    @3131ilyes 4 years ago

    Sorry, I could not find the code; can you please help me?

    • @andrewsoong8817
      @andrewsoong8817 4 years ago +1

      databookuw.com/
      It's below the big picture of the book, above the authors' portraits. You have MATLAB and Python options for the code.

    • @prajonty
      @prajonty 4 years ago

      @@andrewsoong8817 Thanks man

  • @maxxxy910
    @maxxxy910 3 years ago

    MATLAB people thinking they're programmers:

  • @pesmasterji5900
    @pesmasterji5900 1 year ago

    You speak like "Sheldon Cooper".
