Lec-16 Perceptron Convergence Theorem

  • Published 15 Nov 2024

COMMENTS • 9

  • @digu35
    @digu35 3 years ago +2

    What an amazing lecture! Flawless! Thank you, professor.

  • @nitishchaudhary4680
    @nitishchaudhary4680 2 years ago

    Best lecture on the perceptron convergence theorem.

  • @chingkui
    @chingkui 12 years ago +3

    At 47:00, an upper bound beta is defined. There is a problem, though: beta might not be finite, since H1 might be unbounded. I don't see how the proof can actually work given this.

    • @matejp3135
      @matejp3135 8 years ago

      I think a finite training set H is silently assumed. In that case, the answer to your problem is the following: if H = {h0, h1, ..., hA}, then x(0) = h0, ..., x(A) = hA, x(A+1) = h0, x(A+2) = h1, ..., i.e. you cycle through the training examples, as in the sketch below. However, he did not fully prove the theorem, because he ignored the wrongly classified negative examples ...
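
      A minimal sketch of that cycling scheme (an illustration, not code from the lecture): it assumes a finite, linearly separable training set with desired responses in {+1, -1}, a zero initial weight vector, and a unit learning rate, and it applies the same signed update to wrongly classified negative examples.

        import numpy as np

        # Hypothetical toy training set; the first component of each input is the bias term.
        X = np.array([[1.0,  2.0,  1.0],
                      [1.0,  1.5,  2.0],
                      [1.0, -1.0, -0.5],
                      [1.0, -2.0, -1.5]])
        d = np.array([+1, +1, -1, -1])        # desired responses

        w = np.zeros(X.shape[1])              # w(0) = 0, as in the proof
        corrections = 0

        # Cycle the finite training set (x(A+1) = h0, x(A+2) = h1, ...) until one
        # full pass makes no correction.  Wrongly classified negatives are covered
        # by the signed update w <- w + d(n) * x(n), with learning rate 1.
        changed = True
        while changed:
            changed = False
            for x_n, d_n in zip(X, d):
                if d_n * np.dot(w, x_n) <= 0:   # misclassified (or on the boundary)
                    w = w + d_n * x_n
                    corrections += 1
                    changed = True

        print("final weights:", w, "corrections made:", corrections)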

  • @JosueHuaman-oz4fk
    @JosueHuaman-oz4fk 6 years ago +1

    But how do you actually know that for a very large value of n, n times beta is going to decrease relative to the other term, which keeps increasing?

    • @RedPillDS
      @RedPillDS 4 years ago

      A very large value of n means the number of corrections is large enough for the two bounds to meet. The lower bound on the squared weight norm grows like n^2, while the upper bound n*beta grows only like n, so the gap between them keeps closing and the bounds become equal at n = n_max; beyond that point no further corrections are possible. The restatement below makes this explicit.
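
      For concreteness, here are the two bounds in the notation Haykin uses (alpha, beta, and the solution vector w_o as defined in the lecture; a sketch of the standard argument, not a transcript of the board work):

        \|w(n+1)\|^2 \;\ge\; \frac{n^2 \alpha^2}{\|w_o\|^2}
          \qquad \text{(Cauchy--Schwarz, with } \alpha = \min_n w_o^{\mathsf T} x(n) > 0\text{)}

        \|w(n+1)\|^2 \;\le\; n\,\beta
          \qquad \text{(with } \beta = \max_n \|x(n)\|^2 \text{ over the finite subset } H_1\text{)}

      The lower bound grows like n^2 and the upper bound only like n, so both can hold simultaneously only up to the value of n at which they cross.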

  • @koradakarthik3553
    @koradakarthik3553 6 years ago +1

    I think the last formula he derived is wrong. Shouldn't it be
    n0 >= nmax,
    and only then w(n0) = w(n0+1) = w(n0+2) = ...?
    Am I correct?

    • @MrParthab
      @MrParthab 4 years ago

      If n0 >= nmax, the perceptron would diverge. I think the derivation shown here is correct, i.e. n0 <= nmax.
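
      For reference, combining the two bounds from the proof (same notation as above; a sketch under the assumption of a finite, linearly separable training set) gives

        \frac{n^2 \alpha^2}{\|w_o\|^2} \;\le\; \|w(n+1)\|^2 \;\le\; n\,\beta
        \quad\Longrightarrow\quad
        n \;\le\; n_{\max} = \frac{\beta\,\|w_o\|^2}{\alpha^2},

      so at most n_max corrections can occur, and the weights stop changing after some n0 <= nmax, i.e. w(n0) = w(n0+1) = w(n0+2) = ...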

  • @amitabhachakraborty497
    @amitabhachakraborty497 2 years ago

    The lecture is good but not very intuitive; it just covers the Haykin book word for word.