Supervised Learning and Support Vector Machines

  • Published 2 Nov 2024

COMMENTS • 7

  • @Enem_Verse · 3 years ago · +4

    You are one of the greatest teachers of mankind.

  • @anantchopra1663 · 4 years ago · +3

    What does the constraint imply geometrically in the linear SVM optimization problem?

  • @billykotsos4642 · 4 years ago · +2

    These videos are a treasure.

  • @siranguru · 4 months ago

    Thank you for the lecture.
    Can anyone explain the reason behind the terms other than the loss function in the optimization equation?
    Why do we need to reduce the distance of the line/hyperplane from the origin, i.e. ||w||^2?
    And what is the 'subject to' condition? Where does it come from, and what is its purpose? Why should the points be parallel to the SVM line (assuming a dot product)? Correct me on this if it is wrong.

    • @JiaheWang-f4d · 1 month ago · +1

      ||w||^2 is effectively the inverse of the margin (the distance between the two boundary lines is 2/||w||, so minimizing ||w||^2 maximizes that distance), which is not mentioned in the lecture. You can find other explanations on the internet. (A worked sketch of this geometry is added after the comments below.)

  • @JiaheWang-f4d · 1 month ago · +1

    Can anyone explain the 'subject to' condition min |x_j.w| = 1? Why does it take this form? 16:03

    • @siranguru · 1 month ago

      We have the main line W.X + b = 0, and the margin lines are W.X + b = ±k. Normalizing by k gives W.X + b = ±1. With y = +1 for the green dots and y = -1 for the magenta dots, requiring every point to lie on the correct side of its margin line gives the constraint y(W.X + b) >= 1.
      To give some lenience rather than a hard bound, we write W.X + b = 1 - ζ, where ζ is a slack variable (zeta). If ζ = b, then W.X = 1, which makes the constraint y(W.X) >= 1, i.e. the minimum of W.X is 1 (since y = 1); that is where the min |x_j.w| = 1 in the formula comes from.
      Correct me if I am wrong. (A small numerical check of the y(W.X + b) >= 1 constraint is sketched after the comments below.)
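
A worked sketch of the margin geometry mentioned in the ||w||^2 reply above (the standard textbook derivation, not taken from the lecture itself). The margin lines are w.x + b = +1 and w.x + b = -1; start from a point x0 on the -1 line and move a distance d along the unit normal w/||w|| until the +1 line is reached:

\[
  w \cdot \left( x_0 + d \, \frac{w}{\lVert w \rVert} \right) + b
  \;=\; -1 + d \, \lVert w \rVert \;=\; +1
  \quad\Longrightarrow\quad
  d \;=\; \frac{2}{\lVert w \rVert}.
\]

So the margin width is 2/||w||, and minimizing (1/2)||w||^2 in the objective is equivalent to maximizing the margin.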
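
A minimal numerical check of the y(w.x + b) >= 1 constraint discussed in the last reply, using scikit-learn's linear SVM. The toy data, the large C value, and the variable names are illustrative assumptions, not taken from the lecture:

    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical toy data: two well-separated clusters (illustrative assumption)
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal([2.0, 2.0], 0.3, size=(20, 2)),
                   rng.normal([-2.0, -2.0], 0.3, size=(20, 2))])
    y = np.array([1] * 20 + [-1] * 20)

    # A large C approximates the hard-margin problem:
    #   minimize (1/2) ||w||^2  subject to  y_i (w . x_i + b) >= 1 for all i
    clf = SVC(kernel="linear", C=1e6).fit(X, y)
    w, b = clf.coef_[0], clf.intercept_[0]

    margins = y * (X @ w + b)                    # y_i (w . x_i + b) for every point
    print("min of y(w.x + b):", margins.min())   # close to 1: nearest points sit on w.x + b = +-1
    print("margin width 2/||w||:", 2 / np.linalg.norm(w))

The points where y(w.x + b) is approximately 1 are the support vectors (clf.support_vectors_); they lie on the two margin lines that the reply above normalizes to ±1.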