Converting Constrained Optimization to Unconstrained Optimization Using the Penalty Method

  • Published 14 Oct 2024

COMMENTS • 91

  • @ChristopherLum
    @ChristopherLum  4 years ago +19

    In case it is helpful, all my Optimization videos in a single playlist are located at ua-cam.com/play/PLxdnSsBqCrrHo2EYb_sMctU959D-iPybT.html. Please let me know what you think in the comments. Thanks for watching!

  • @timproby7624
    @timproby7624 5 months ago +1

    [AE 512] The clear distinction drawn between unconstrained and constrained optimization, and the purpose of each, is excellent

  • @edwardmau5877
    @edwardmau5877 5 months ago +1

    [AE 512] Thanks for going in depth and defining every variable; it makes the material much easier and clearer to follow. I also now understand the explicit differences between constrained and unconstrained optimization, and you showed how to use them together to exploit the efficiencies of both.

  • @Gholdoian
    @Gholdoian 4 months ago

    AE 512: Wow, such a powerful yet simple way to reframe optimization routines to use basic optimization schemes.

  • @darylfishback-duran3580
    @darylfishback-duran3580 4 years ago +5

    This was a fantastic video. I worked within MATLAB alongside the video and it was great to see all the ideas come together into the final plot showing the two constraints and the numerical minimum. The explanations are always clear and concise. Looking forward to the next ones!

    • @ChristopherLum
      @ChristopherLum  4 years ago +2

      Hi Daryl, I'm glad you liked it. Let me know what you think about the next few videos as well since they are going to build on this and use it to finally get our RCAM model flying the way we want it to.

  • @milesrobertroane955
    @milesrobertroane955 7 months ago

    AA516: All of the MATLAB visualizations were so helpful in understanding how the distance from a constraint affects how strongly the solution is pulled in that direction!

  • @Kumky605
    @Kumky605 7 months ago

    AA516: I have gone over optimization several times in my education and struggled through it at times. This video helped clear up a lot of confusion.

  • @koshiroyamaguchi9613
    @koshiroyamaguchi9613 2 years ago +2

    AA516: I had only vaguely understood constrained optimization ideas before, but this video improved my understanding so much. Thank you Prof. Lum!

  • @AlexandraSurprise
    @AlexandraSurprise 7 months ago +1

    AA516: Allie S, THIS IS SO COOL! This type of mathematical manipulation is exactly what enticed me to go into math and engineering in the first place. I'm so excited to see the following videos!!

    • @ChristopherLum
      @ChristopherLum  7 months ago +1

      Optimization is one of the coolest math topics. Feel free to check out the other videos if you are interested.

  • @rowellcastro2683
    @rowellcastro2683 7 months ago

    AA516: These penalty functions are very nice and simple to implement in MATLAB using fminsearch. Thanks for the lecture, Professor.
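
    A minimal MATLAB sketch of the approach this comment describes, for readers who want to try it; the objective f0, constraint f1, and weight alpha are hypothetical stand-ins, not the actual example from the video.

      % Quadratic penalty method solved with MATLAB's fminsearch.
      f0 = @(x) (x(1) - 3)^2 + (x(2) + 1)^2;   % stand-in objective to minimize
      f1 = @(x) x(1) + x(2) - 1;               % stand-in equality constraint f1(x) = 0

      alpha = 100;                             % penalty weight
      fhat  = @(x) f0(x) + alpha*f1(x)^2;      % penalized, unconstrained cost
      xstar = fminsearch(fhat, [0; 0]);        % unconstrained minimization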

  • @akshaymishra2918
    @akshaymishra2918 9 months ago

    One of the BEST videos to understand the topic.

  • @yaffetbedru6612
    @yaffetbedru6612 7 months ago

    AA516: The visuals helped tons in my understanding of the constraints and their solutions.

  • @chayweaver.2995
    @chayweaver.2995 4 months ago

    AE512: This is a very cool visualization and new way of looking at constrained vs. unconstrained optimization.

  • @mayfu6508
    @mayfu6508 2 years ago

    This is just amazing, I can't express how grateful I am.

    • @ChristopherLum
      @ChristopherLum  2 years ago

      Hi May,
      Thanks for the kind words, I'm glad you enjoyed the video. If you find these videos helpful, I hope you'll consider supporting the channel via Patreon at www.patreon.com/christopherwlum. Given your interest in this topic, I'd love to have you as a Patron as I'm able to talk/interact personally with all Patrons. Thanks for watching!
      -Chris

  • @justinhendrick3743
    @justinhendrick3743 4 years ago +6

    Thanks for the lecture, Professor! One bit of constructive feedback: the audio is much louder while you're at the board than at the computer. Some balancing of the loudness while editing the video together might help.

    • @ChristopherLum
      @ChristopherLum  4 years ago +2

      Hi Justin, thanks for the feedback. I'll look into this. I think the mic I use for the computer recordings is cleaner with less background noise which causes some of the issue in perceived loudness. How much of a volume difference do you perceive? Is it just this video or do others exhibit similar behavior?

  • @underlecht
    @underlecht 11 months ago

    You make solving an uninteresting problem the most interesting thing I've seen on UA-cam so far this evening.

    • @ChristopherLum
      @ChristopherLum  11 months ago +1

      Thanks for the kind words, I'm glad it was entertaining and thanks for watching!

  • @manitaregmi6932
    @manitaregmi6932 2 years ago +1

    AA 516 - Another great lecture. I like how you use both Mathematica and MATLAB together along with your lecture to explain the material.

  • @davidtelgen8114
    @davidtelgen8114 5 months ago

    AE 512: Great explanation, excited to use this on RCAM

  • @WalkingDeaDJ
    @WalkingDeaDJ 4 months ago

    Jason-AE512: This video appears to be a useful resource for understanding how to transform optimization problems, potentially valuable for students and professionals in fields like operations research or applied mathematics.

  • @zaneyosif
    @zaneyosif 5 months ago

    AE512: Interesting to think about the differences between unconstrained and constrained. Depending on the alpha that is chosen, it looks like you can get a solution quite similar to that of the constrained problem. I'm curious if there is a specific reason why we would choose to implement a penalty function/unconstrained optimization rather than solving the constrained problem directly? Is it simply easier to solve (numerically)? Great video!

  • @disturbed_singer2758
    @disturbed_singer2758 4 years ago +1

    Thank you for the lecture. The video was very helpful. Keep up the good work. Thanks again!

  • @petermay6090
    @petermay6090 7 months ago

    AA516: Useful and concise, thank you!

  • @esanayodelebenjamin6875
    @esanayodelebenjamin6875 3 years ago

    Thank you for this video sir. It was really helpful for me.

  • @jia-hueiju264
    @jia-hueiju264 4 years ago

    Great video!
    It's really, really clear!
    Thanks, hope to see more great optimization lectures.

  • @nikitatraynin1549
    @nikitatraynin1549 3 years ago

    Thank you! Great explanation and great video. Make sure to check your volume levels though, as when you are screen sharing with MATLAB or Mathematica the volume is much lower than when you are using the whiteboard.

  • @Mike-w6b4i
    @Mike-w6b4i 2 years ago

    Thanks for the lecture! It helps me a lot in my research on MOP!

  • @fanghsuanhsu7008
    @fanghsuanhsu7008 4 years ago +2

    Thanks for your lecture, it is really helpful~

    • @ChristopherLum
      @ChristopherLum  4 years ago

      You're very welcome, there are several other similar videos on the channel. Please feel free to check them out and let me know what you think. Thanks for watching!

  • @bingxinyan8103
    @bingxinyan8103 2 years ago +1

    It was beneficial for me to understand how to convert a constrained optimization problem into an unconstrained one, and the MATLAB implementations were very helpful. In applying cubic spline regression in engineering, I found a lot of papers using a penalty function to avoid overfitting, or using the integrated squared second derivative of the cubic spline as the penalty. I am confused about adding the "avoid overfitting" penalty and why that form of penalty is chosen. Would it be possible to give us a video about those? Also, would it be possible to provide a video about implementations using Python? Either way, I've learned a lot from this video. Again, thank you very much.

  • @alijudi5103
    @alijudi5103 3 years ago

    Great explanation. Many thanks for the effort.

  • @darksufer
    @darksufer 3 years ago

    This video was helpful for understanding more about optimization applications.

  • @aijazsiddiqui1721
    @aijazsiddiqui1721 2 years ago

    Thank you for the video. Could you please share the notes you refer to during the lecture? They would be quite helpful.

    • @ChristopherLum
      @ChristopherLum  2 years ago

      Hi Aijaz,
      Thanks for the kind words, I'm glad you enjoyed the video. If you find these videos helpful, I hope you'll consider supporting the channel via Patreon at www.patreon.com/christopherwlum or via the 'Thanks' button underneath the video. Given your interest in this topic, I'd love to have you as a Patron as I'm able to talk/interact personally with all Patrons. I can also answer any questions and provide code/downloads on Patreon. Thanks for watching!
      -Chris

  • @tharunsankar4926
    @tharunsankar4926 4 years ago +2

    Great vid, professor!

    • @ChristopherLum
      @ChristopherLum  4 years ago

      Thanks Tharun. Stand by for the last video that will actually get us trimming our aircraft model using this technique.

  • @milesbridges3547
    @milesbridges3547 1 year ago

    AA 516: This lecture really helped me better understand the power of optimization. What is the purpose of changing a constrained optimization problem to an approximate unconstrained problem using the penalty functions? Is the unconstrained problem just easier to solve numerically?

    • @ChristopherLum
      @ChristopherLum  1 year ago +1

      In general, yes, unconstrained is much easier than constrained. In particular, fminsearch is unconstrained.
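
      For comparison, a sketch of solving the same stand-in problem with fmincon (from the Optimization Toolbox), which handles the equality constraint directly, while fminsearch needs the penalty reformulation; f0 and f1 are the same hypothetical functions as in the fminsearch sketch earlier in the thread.

        f0 = @(x) (x(1) - 3)^2 + (x(2) + 1)^2;   % stand-in objective
        f1 = @(x) x(1) + x(2) - 1;               % stand-in equality constraint
        nonlcon = @(x) deal([], f1(x));          % fmincon expects [c, ceq]
        xc = fmincon(f0, [0; 0], [],[],[],[],[],[], nonlcon);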

  • @burningbush2009
    @burningbush2009 2 years ago +1

    AE512: Thanks for the video Professor! Is there ever an advantage to using a higher-order term for the unconstrained part of fhat, i.e. fhat = f0 + alpha*f1^4 or similar?

    • @ChristopherLum
      @ChristopherLum  2 years ago

      You could, if you want to penalize more aggressively, since the 4th-power term grows faster than the 2nd-power term.
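
      A minimal sketch of the two penalty choices side by side, with hypothetical stand-in functions: both penalties vanish when f1(x) = 0, but the quartic one grows much faster once the violation exceeds 1 (and is flatter for small violations).

        f0 = @(x) (x(1) - 3)^2 + (x(2) + 1)^2;   % stand-in objective
        f1 = @(x) x(1) + x(2) - 1;               % stand-in constraint
        alpha = 100;
        fhat2 = @(x) f0(x) + alpha*f1(x)^2;      % standard quadratic penalty
        fhat4 = @(x) f0(x) + alpha*f1(x)^4;      % more aggressive for |f1(x)| > 1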

  • @aimeepak717
    @aimeepak717 5 months ago +1

    AE512: I can see why having properly defined constraints is important to finding the approximately equivalent unconstrained optimization problem.

  • @anilcelik16
    @anilcelik16 4 years ago +2

    Thank you for the videos. Is it possible to share the MATLAB code?

  • @cupdhyaya
    @cupdhyaya 2 years ago

    Please discuss particle swarm for constrained optimization.

  • @sanjaykrkk
    @sanjaykrkk 3 years ago +1

    Thanks for the lecture, Professor Lum. For a large problem where we cannot compare the approximate solution with the actual solution, how do we decide the range of alpha values? As pointed out in one of the comments, it seems a larger alpha is better. Is there any issue with that?

    • @jarekwatroba2663
      @jarekwatroba2663 3 years ago +1

      The issue is that there is a trade-off. The larger you make alpha, the higher your optimal function value will be, which is undesirable since you are looking for a minimum. Your goal is to minimize the function given soft constraints: the closer you pull the solution to the constraint, the higher your end result, since the 1D parabola doesn't coincide with the local function minimum. Also, imagine the function has "sharp" turns, i.e. is highly non-linear, or you have many more variables. By imposing very high alpha, beta, etc. values, you potentially miss out on highly optimized solutions that exist if you are willing to relax your constraints just a little bit. That's why he's checking an entire range of alpha. It makes more sense in higher dimensions and/or more non-linear applications than in just a 2D quadratic.
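
      A short sketch of the alpha sweep described above, using the same hypothetical stand-in functions as the earlier fminsearch example; the printed minimizers show the solution migrating toward the constraint as alpha grows.

        f0 = @(x) (x(1) - 3)^2 + (x(2) + 1)^2;   % stand-in objective
        f1 = @(x) x(1) + x(2) - 1;               % stand-in constraint
        for alpha = [0.1 1 10 100 1000]
            fhat  = @(x) f0(x) + alpha*f1(x)^2;  % penalty weight captured per pass
            xstar = fminsearch(fhat, [0; 0]);
            fprintf('alpha = %6.1f -> x* = (%.4f, %.4f)\n', alpha, xstar(1), xstar(2));
        end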

  • @idea9423
    @idea9423 2 years ago +1

    Thank you ☺️

  • @alirtha2020
    @alirtha2020 3 years ago +1

    Thank you very much, and I wish you success.
    I have research concerning constrained and unconstrained optimization; could you possibly help?

  • @willpope3151
    @willpope3151 3 years ago +1

    [AA 516] One of my favorite lectures; I was surprised at how simple the penalty method is to implement. Is there a reason the original cost function is written using matrices and transposed vectors? I wasn't sure if that was something unique to optimization.

    • @ChristopherLum
      @ChristopherLum  3 years ago

      You don't have to use matrices and vectors; I (and other people in the optimization field) like to write it like this, so I stuck with the standard convention.

  • @zhikunzhang8210
    @zhikunzhang8210 4 years ago +3

    Hi, Professor Lum, I am wondering how to choose the penalty parameter alpha for the penalty functions in practice? In the example, it seems that a larger alpha is better.

    • @dboozer4
      @dboozer4 2 years ago

      Start with a small value to lessen the sharp edge created and then increase with each iteration.
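
      A sketch of that iterative scheme, warm-starting each solve from the previous answer (stand-in functions again; the schedule of multiplying alpha by 10 is illustrative, not prescriptive).

        f0 = @(x) (x(1) - 3)^2 + (x(2) + 1)^2;   % stand-in objective
        f1 = @(x) x(1) + x(2) - 1;               % stand-in constraint
        x = [0; 0];                              % initial guess
        alpha = 1;                               % start with a mild penalty
        for iter = 1:6
            fhat = @(z) f0(z) + alpha*f1(z)^2;
            x = fminsearch(fhat, x);             % warm start from previous solution
            alpha = 10*alpha;                    % stiffen the penalty each pass
        end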

  • @Colin_Baxter_UW
    @Colin_Baxter_UW 7 months ago

    AA516: I see how using the multiple cost function constraints will translate over into setting constraints for different variables in our RCAM model, like roll angle, pitch angle, etc.

  • @AJ-et3vf
    @AJ-et3vf 1 year ago

    Great video. Thank you

    • @ChristopherLum
      @ChristopherLum  1 year ago

      Hi AJ,
      Thanks for the kind words, I'm glad you enjoyed the video. If you find these videos helpful, I hope you'll consider supporting the channel via Patreon at www.patreon.com/christopherwlum or via the 'Thanks' button underneath the video. Given your interest in this topic, I'd love to have you as a Patron as I'm able to talk/interact personally with all Patrons. I can also answer any questions, provide code, notes, downloads, etc. on Patreon. Thanks for watching!
      -Chris

  • @arveanlabib5333
    @arveanlabib5333 2 years ago

    [AA 516] Great lecture! Would increasing the penalty parameter always improve the accuracy of the final converged value? If so, what is the point of using small penalty parameters?

    • @ChristopherLum
      @ChristopherLum  2 years ago

      Arvean, great question. Not always. You need to make the penalty parameters relative to the magnitude of the constraints. Let's chat more at office hours and I can more fully explain.

  • @bsgove
    @bsgove 4 months ago

    AE512: It's interesting that the teaching of optimization so often involves objective functions of 1 or 2 dimensions; I imagine this is because it's really hard to visualize optimization problems of higher dimensions... I wonder if people have tried visualizing beyond 3 dimensions somehow.

  • @paramjeetkaur9208
    @paramjeetkaur9208 3 years ago

    Great explanation, sir. Can you tell us the values of alpha1 and alpha2, and please explain the MATLAB coding for this?

  • @reesetaylor3506
    @reesetaylor3506 5 months ago

    AE 512: Interesting technique for optimization. How would the inequality penalty function at 37:30 change if instead the constraint expression was f_i(x) >= 0 or just f_i(x) > 0? Won't the implementation using the max function fail to mimic this constraint properly?
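
    A sketch of how the max-based penalty adapts to the sign convention this question raises (g is a hypothetical constraint function, not the one from the video): flipping the sign inside max handles f_i(x) >= 0, while a strict inequality f_i(x) > 0 cannot be distinguished from >= 0 by a continuous penalty, since both give zero penalty on the boundary.

      g = @(x) x(1) - 2;                 % hypothetical constraint function
      pen_le = @(x) max(0,  g(x))^2;     % penalizes g(x) > 0, enforcing g(x) <= 0
      pen_ge = @(x) max(0, -g(x))^2;     % penalizes g(x) < 0, enforcing g(x) >= 0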

  • @boeing797screamliner
    @boeing797screamliner 3 years ago +1

    AA516 - Great lecture as usual!

  • @mayfu6508
    @mayfu6508 2 years ago

    thank you so much!

  • @chadigaali7680
    @chadigaali7680 3 years ago

    thank you very much

  • @alexzhen179
    @alexzhen179 3 years ago

    AA516: Great lecture! I have a question about alpha. It seems like the solution converges to the optimized solution as alpha increases. Does that mean we can directly set alpha to infinity (analytically) or a very large number (numerically) to get the answer? How do we know that alpha is large enough to get an approximately optimized solution? Another question is about when there are multiple constraints. In the example, alpha1 and alpha2 have the same value. Is that always the case? If not, how do we weigh the different alphas for the different penalties?

    • @ChristopherLum
      @ChristopherLum  3 years ago

      Alex, all good questions, let's talk at office hours as this is probably easier to go over in person.

    • @mohamedelgamal6333
      @mohamedelgamal6333 3 years ago

      Would you brief us, Alex, on the answers you got to your above-mentioned questions?

  • @Js_vici
    @Js_vici 3 years ago

    AA 516 - Thank you for the video! I am wondering why we use x0 for all iterations? Can we instead use the xhatstar values after the first iteration?

    • @ChristopherLum
      @ChristopherLum  3 years ago +1

      Chris, let's chat at office hours, it might be easier to talk about over Zoom.

  • @princekeoki4603
    @princekeoki4603 7 months ago

    AA516: What's the penalty for the designer if they choose an exceedingly large value of alpha?

  • @PatrickGalvin519
    @PatrickGalvin519 7 months ago

    AA516: If it's known that the solution to the constrained problem exists, when it's converted into an unconstrained problem are there any drawbacks to just cranking alpha up to something like 1e6 to try to get very close to the exact solution?

  • @priyankadoiphode5
    @priyankadoiphode5 4 years ago

    Sir!! I just wanted to ask whether static optimization is unconstrained optimization?

  • @hasanhorata8381
    @hasanhorata8381 2 years ago

    AA 516 - Is there a reason why we square the penalty functions?

    • @ChristopherLum
      @ChristopherLum  2 years ago

      Hasan, great question. Yes, if we didn't square them, then negative values would actually decrease the cost function and the optimizer would be incentivized to choose large negative values. Squaring the values gets around this.
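
      A two-line numeric illustration of this point (the values are arbitrary): without the square, a large negative constraint value lowers the total cost and rewards violation.

        alpha = 10; f1val = -5;
        unsquared = alpha*f1val      % = -50: rewards violating the constraint
        squared   = alpha*f1val^2    % = 250: penalizes any violation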

  • @tilio9380
    @tilio9380 3 years ago

    AA 516 This is a minor issue, but the last time stamp is incorrectly labeled.

    • @ChristopherLum
      @ChristopherLum  3 years ago

      Tim, thanks for catching this, I've updated it, does this look correct now? Please let me know if you find any other inconsistencies, thanks!

  • @knighttime19
    @knighttime19 3 years ago

    I have tried the same principle for my problem, but it didn't work unless one of the alphas was negative. Any comment is appreciated.

  • @ojasvikamboj6083
    @ojasvikamboj6083 1 year ago

    AA 516: Ojasvi Kamboj

  • @ravinpech5220
    @ravinpech5220 2 years ago

    Can I ask for the code, teacher?

    • @ChristopherLum
      @ChristopherLum  2 years ago

      Hi,
      Thanks for reaching out. This is a benefit I provide to supporters on Patreon at www.patreon.com/christopherwlum. I'd love to have you as a Patron as I'm able to talk/interact personally with Patrons. Thanks for watching!
      -Chris

  • @anilcelik16
    @anilcelik16 4 years ago

    Gradient descent and stochastic gradient descent algorithms with real-life applications would be very helpful, I guess.

    • @mohamedelgamal6333
      @mohamedelgamal6333 3 years ago

      I would appreciate it if you could share a link to read more about gradient descent and stochastic gradient descent algorithms and how to apply them in real-life applications. Many thanks.

  • @kisitujohn6817
    @kisitujohn6817 16 days ago

    Kisitu john

  • @rowellcastro2683
    @rowellcastro2683 7 months ago

    AA516: 12:11 Is that Veritasium lol

  • @aaroncapozella5365
    @aaroncapozella5365 7 months ago

    AA516

  • @shavykashyap
    @shavykashyap 7 months ago

    AA 516

  • @Po-ChihHuang
    @Po-ChihHuang 7 months ago

    AA516: Po
