Dual Ascent, Dual Decomposition, and Method of Multipliers

  • Published Sep 28, 2024
  • A very short introduction to dual ascent, dual decomposition, and the method of multipliers for optimization. I followed Chapter 2 of Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers by Boyd et al. I planned the screencast in three segments but eventually pasted them together, so whenever I say something was "mentioned last time," it was actually just mentioned earlier in this video.

COMMENTS • 7

  • @BSplitt · 1 year ago

    Great video, really appreciate it. But I do have a question: how is alpha^k chosen at each iteration? Or are there various ways to determine the step size?
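    A minimal sketch of what dual ascent with a constant step size might look like, in the notation of Boyd et al., Ch. 2 (the quadratic objective, the data, and the value of alpha are illustrative choices, not from the video; diminishing rules such as alpha_k = 1/k are another common option):

    ```python
    import numpy as np

    # Dual ascent for: minimize 0.5*||x - c||^2  subject to  a @ x = b.
    # (Toy problem chosen so the x-update has a closed form.)
    c = np.array([3.0, 1.0])
    a = np.array([1.0, 1.0])
    b = 1.0
    alpha = 0.1  # constant step size; small enough here for convergence
    y = 0.0      # dual variable (Lagrange multiplier)

    for k in range(200):
        # x-update: minimize L(x, y) = 0.5*||x - c||^2 + y*(a @ x - b) over x
        x = c - y * a
        # y-update: gradient ascent on the dual, using the residual a @ x - b
        y += alpha * (a @ x - b)

    # x converges to the projection of c onto {x : a @ x = b}, here [1.5, -0.5]
    ```

    For this toy problem any constant alpha below 2/||a||^2 converges; in general the admissible step sizes depend on the curvature of the dual function, which is one reason the method of multipliers (with its penalty term) is more robust.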

  • @chenzhao3686 · 7 years ago

    Thank you for this video, I have learned a lot.

  • @k-bala-vignesh · 1 year ago

    Why do we need to find a y that maximizes the inner term? I.e., why should we maximize the Lagrange-multiplier term?

    • @ousam2010 · 1 year ago

      Do you mean the beginning? This is a trick to absorb the constraint. When we rewrite the original constrained optimization problem as a min-max problem, the inner maximization over y ensures that the value is infinite (and hence can never be chosen by the outer minimization) whenever the constraint is violated.
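      In symbols, for the equality-constrained form used in Boyd et al., Ch. 2, the trick described above reads:

      ```latex
      \min_{x} \; f(x) \quad \text{s.t.} \quad Ax = b
      \qquad\Longleftrightarrow\qquad
      \min_{x} \; \max_{y} \; f(x) + y^{\top}(Ax - b),
      ```

      since if $Ax \neq b$ the inner maximum over $y$ is $+\infty$ (take $y$ along $Ax - b$ with arbitrarily large norm), while if $Ax = b$ the multiplier term vanishes and the inner maximum is just $f(x)$.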

    • @k-bala-vignesh · 1 year ago

      @ousam2010 Yes, makes sense. I did a bit of reading about the dual problem. Thank you.

  • @shahriarshahabuddin5908 · 9 years ago

    Do you have lecture slides of this video?

  • @victor61377 · 2 years ago

    dope vid