Deriving the KKT conditions for Inequality-Constrained Optimization | Introduction to Duality

  • Published 21 Oct 2024

COMMENTS • 17

  • @MachineLearningSimulation · 3 years ago · +4

    Error: In the intro and also later in the video, I call the Karush-Kuhn-Tucker conditions "Karuhn-Kush-Tucker conditions" (the names of the first two researchers somehow got mixed up). My head must have been confused.
    Anyhow, I corrected this in the pdf version on GitHub: raw.githubusercontent.com/Ceyron/numeric-notes/main/english/math_basics/kkt_conditions_derivation.pdf

  • @themathguy3149 · 3 years ago · +1

    I sincerely looked at around 10 videos, all of which were about 30 minutes long, and yours was the only one that made intuitive sense to me, thanks!

  • @domenicoscarpino3715 · 9 months ago · +1

    This is the best explanation I have seen of this topic. Thank you so much! Do you also have a full lesson on the support vector machine?

    • @MachineLearningSimulation · 9 months ago · +1

      You're very welcome. Thanks for the kind words :)
      Unfortunately, there is nothing on SVMs yet. I have wanted to come back to it for a long time, but at the moment I focus mostly on topics relevant to my PhD; maybe in the far future there will finally be a video 😅

    • @domenicoscarpino3715 · 9 months ago · +1

      @@MachineLearningSimulation Ah, ok, sure. It would be awesome if you did that. Thanks again!

  • @sfdv1147 · 1 year ago · +1

    Thanks for the awesome video. However, how do we find the minimizer x* after deriving the KKT conditions (we derived them at 25:16)?

    • @MachineLearningSimulation · 1 year ago · +1

      Thanks for the comment 😊
      The KKT conditions are just conditions that x* has to fulfill; they do not give you x* directly. There are many algorithms that build on them to approximately find the optimum, for example interior point methods. Usually, these methods use the KKT conditions to set up a system of (non-)linear equations that is then solved iteratively.
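
      A minimal sketch of that idea on a made-up toy problem (not from the video; the problem, the choice of SciPy's SLSQP solver, and the tolerances are purely illustrative): a small inequality-constrained quadratic is solved numerically, and the KKT conditions are then checked at the returned point.

        # Toy problem: minimize f(x) = (x1 - 2)^2 + (x2 - 2)^2
        #              subject to g(x) = x1 + x2 - 2 <= 0
        import numpy as np
        from scipy.optimize import minimize

        f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
        grad_f = lambda x: np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 2.0)])
        g = lambda x: x[0] + x[1] - 2.0          # constraint written as g(x) <= 0
        grad_g = np.array([1.0, 1.0])

        # SLSQP expects inequality constraints as "fun(x) >= 0", hence the sign flip.
        res = minimize(f, np.zeros(2), method="SLSQP",
                       constraints=[{"type": "ineq", "fun": lambda x: -g(x)}])
        x_star = res.x                           # should end up close to (1, 1)

        # Recover the multiplier u* from stationarity: grad_f(x*) + u* * grad_g = 0.
        u_star = -grad_f(x_star)[0] / grad_g[0]

        print("dual feasibility   u* >= 0  :", u_star >= 0.0)
        print("primal feasibility g(x*)<=0 :", g(x_star) <= 1e-8)
        print("complementary slackness     :", abs(u_star * g(x_star)) < 1e-6)
        print("stationarity                :",
              np.allclose(grad_f(x_star) + u_star * grad_g, 0.0, atol=1e-6))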

  • @lidarrsrs9532 · 2 years ago · +2

    Thanks for this great video. I got everything except at 21:31: to me, it is not apparent that x* is the optimal x minimizing f(x) + u*·g(x). Can you help explain a little bit? Thanks.

    • @MachineLearningSimulation · 2 years ago · +2

      Hey, thanks for the feedback and the great question. :)
      Great that you included the timestamp; that helped me navigate the video, since it has been some time since I uploaded it.
      Regarding your question: I can understand the confusion. I also had to watch the video again to remember what I meant. The fact that x* is the optimal argument there can be deduced from the chain of equalities. Since we already have the optimal u*, we know it has to fulfill the constraint of the dual problem, which makes u* >= 0. Then, since g(x*) <= 0 at the feasible optimum, the term u*·g(x*) is non-positive, so f(x*) + u*·g(x*) <= f(x*). Because strong duality gives f(x*) = φ(u*) = min over x of f(x) + u*·g(x), which is <= f(x*) + u*·g(x*), the whole chain must hold with equality, and in particular x* attains the minimum of f(x) + u*·g(x).
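
      Written out (using the p*, q*, φ(u*) notation from this thread and the convention g(x) <= 0, u >= 0), the chain the reply above refers to is:

        \begin{aligned}
        f(x^*) = p^* = q^* = \varphi(u^*)
          &= \min_x \big[\, f(x) + (u^*)^\top g(x) \,\big] \\
          &\le f(x^*) + (u^*)^\top g(x^*) \\
          &\le f(x^*)
        \end{aligned}
        % The last step uses dual feasibility u^* \ge 0 and primal feasibility g(x^*) \le 0.
        % The chain starts and ends with f(x^*), so every step holds with equality;
        % in particular, x^* attains \min_x [ f(x) + (u^*)^\top g(x) ].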

    • @lidarrsrs9532 · 2 years ago · +2

      @@MachineLearningSimulation Wow, what a great response. I can't say anything but a big thanks for your timely and patient explanation. I got it now.
      Here is what I jotted down from what you said:
      f(x*) = p* = q* = φ(u*) = min_x [f(x) + u*·g(x)]

    • @MachineLearningSimulation · 2 years ago

      @@lidarrsrs9532 Amazing! :) I am super glad I could help.
      Also, nice that you wrote it down. I can imagine this being of great help to others who might be struggling with the same point.

    • @Funda1215 · 2 years ago

      @@lidarrsrs9532 This is an awesome summary, thanks a lot

  • @user-or7ji5hv8y · 3 years ago · +1

    Is there an example where this is used in machine learning?

    • @MachineLearningSimulation · 3 years ago · +1

      @C Indeed, there are; you could even go as far as saying that most of Machine Learning is actually (inequality-)constrained optimization.
      The most prominent example here is the Support Vector Machine. For this ML technique, we heavily rely on transforming the optimization problem into its dual form in order to be able to solve it.
      Videos on Support Vector Machines (SVM) will probably start in about a month, depending on how fast I make progress on my current topics :D Stay tuned, I'm excited!
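
      As a concrete (and entirely made-up) illustration of that duality in code, the sketch below solves the hard-margin SVM dual on four hand-picked, linearly separable points with a generic SciPy solver and then recovers the primal weights from the dual variables; all names and data are only for illustration, not from the channel.

        # Hard-margin SVM dual:
        #   maximize  sum_i a_i - 1/2 * sum_ij a_i a_j y_i y_j <x_i, x_j>
        #   subject to  a_i >= 0  and  sum_i a_i y_i = 0
        import numpy as np
        from scipy.optimize import minimize

        X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
        y = np.array([1.0, 1.0, -1.0, -1.0])
        Q = (y[:, None] * X) @ (y[:, None] * X).T   # Q_ij = y_i * y_j * <x_i, x_j>

        def neg_dual(a):                            # minimize the negated dual objective
            return 0.5 * a @ Q @ a - a.sum()

        res = minimize(neg_dual, np.zeros(len(y)), method="SLSQP",
                       bounds=[(0.0, None)] * len(y),               # a_i >= 0
                       constraints=[{"type": "eq", "fun": lambda a: a @ y}])
        alpha = res.x

        # Recover the primal solution from the dual variables.
        w = (alpha * y) @ X
        support = alpha > 1e-6                      # support vectors have a_i > 0
        b = float(np.mean(y[support] - X[support] @ w))
        print("alpha =", np.round(alpha, 4))        # nonzero only for support vectors
        print("w =", w, " b =", round(b, 4))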

  • @tr233 · 1 year ago

    This is so hard to understand; only a math graduate can grasp what's going on. We computer scientists just want the algorithm shown to us, and we'd be done in a second...

    • @MachineLearningSimulation · 9 months ago

      I can understand; optimization theory is a hard subject to wrap your head around. Still, I can recommend digging into it: it is very helpful and forms the basis for many modern applications, most prominently, of course, Machine Learning.