Understanding Perceptron Update Rule

  • Published 15 Nov 2024

COMMENTS • 11

  • @e555t66 · 1 year ago · +3

    I am doing the SDS program offered by MIT on edX, and this is very helpful.

  • @rashmi727 · 1 year ago · +2

    Awesome lecture!

  • @GyanPrakash-c7c · 18 days ago

    Great lecture 🙌🙌

  • @rituparnodhar6265 · 2 months ago

    Why are we adjusting (and assuming) so much for this Perceptron thing? Can't we just reject it?

  • @triton62674 · 1 year ago · +2

    Fantastic video, thanks for the help!

  • @Shrikant_Anand · 1 year ago

    Why was the corner-case example regarded as not allowed under the assumption of linear separability with a gamma margin? We can clearly see that the corner-case data set is linearly separable with some gamma margin, but the way we chose the initial weight vector led to points lying on the decision boundary, which caused the perceptron algorithm not to converge. Are you saying that for the perceptron to converge, no data point may lie on the decision boundary of any weight vector we encounter during the update process, including the initial one?
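One way to see why the on-boundary corner case matters: in the standard perceptron mistake condition, a point lying exactly on the decision boundary gives y · (wᵀx) = 0, which counts as a mistake and triggers an update. A minimal sketch (variable names and data are illustrative, not from the lecture):

```python
import numpy as np

def perceptron_step(w, x, y):
    """One pass of the standard perceptron update for a single example."""
    if y * (w @ x) <= 0:   # misclassified, or exactly on the boundary
        w = w + y * x      # the classic update: w <- w + y x
    return w

# With w0 = 0, every point starts on the boundary, so the first
# example always fires an update regardless of its label.
w0 = np.zeros(2)
x1, y1 = np.array([1.0, 0.0]), 1
w1 = perceptron_step(w0, x1, y1)   # y1 * (w0 @ x1) == 0 -> update fires
```

The strict-margin assumption (|wᵀx| ≥ γ > 0 for the separating w) rules out the degenerate y · (wᵀx) = 0 case, which is what the convergence proof relies on.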

  • @vaibhavsingh7804 · 2 years ago · +1

    At 30:06, {x : wᵀx = −γ}: so does it mean that γ and −γ are equidistant from the center (the decision boundary)?
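Yes, assuming the "center" means the decision boundary wᵀx = 0: the distance from the hyperplane wᵀx = 0 to the hyperplane wᵀx = c is |c| / ‖w‖, so the two margin hyperplanes sit symmetrically about it:

```latex
d_{+} = \frac{|\gamma|}{\lVert w \rVert}, \qquad
d_{-} = \frac{|-\gamma|}{\lVert w \rVert} = \frac{\gamma}{\lVert w \rVert}
```

Both distances equal γ/‖w‖, so the hyperplanes wᵀx = γ and wᵀx = −γ are equidistant from the boundary.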

  • @aryastark5852 · 1 year ago

    While solving for W1, shouldn't the label y3 be equal to +1? Why did he take y3 = −1?

    • @ashirvadpawar168 · 3 months ago

      Because the actual label for data point x3 is −1.
      The prediction ŷ3 = +1 is the mistake made by w0.
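The distinction in that answer can be sketched in code: the perceptron update always uses the true label y, never the (wrong) prediction ŷ. The specific values of x3 and w0 below are illustrative assumptions, not the lecture's numbers:

```python
import numpy as np

w0 = np.array([1.0, 1.0])           # initial weights (assumed)
x3, y3 = np.array([2.0, 0.0]), -1   # true label of x3 is -1

# w0 predicts +1 here, which disagrees with the true label: a mistake.
y_hat = 1 if w0 @ x3 > 0 else -1
if y_hat != y3:
    w1 = w0 + y3 * x3               # update uses y3 = -1, pushing w away from x3
```

Updating with ŷ instead of y would reinforce the error rather than correct it, which is why the rule is written with the true label.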