Perceptrons: The Building Blocks of Neural Networks

  • Published 28 Nov 2024

COMMENTS • 49

  • @yuckymoose6745 · 3 years ago · +5

    I was so confused about the math and was looking for a solution all morning, and now I found it and clearly understand how it works. Thanks a lot!

  • @chevalharrichunder8953 · 4 years ago · +4

    Thank you Jacob, no fancy presentation etc. Just a brilliant explanation of the concept that I need for Machine Learning.

  • @devinvenable4587 · 5 years ago · +4

    One of the best YouTube videos on this topic. Nicely done.

  • @prvizpirizad4336 · 1 year ago

    The video I have been looking for! Thank you very much!

  • @SMJ-Majidi · 3 years ago

    Absolutely astonishing! This is the first time I understand without skipping!

  • @matattz · 6 years ago · +10

    THIS IS JUST PHENOMENAL :) Thank you so much, that's what I have been searching for the whole day. Now I get it!!

  • @kadeeraziz · 4 years ago · +3

    Now I understand how a perceptron works. Thank you!!!!

  • @subramaniamsrivatsa2719 · 4 years ago

    Fluent explanation of complex mathematical concepts, without missing out on the details.

  • @mohamedelkayal8871 · 5 years ago · +2

    Your videos have helped me on more than one occasion and for that I humbly thank you for your effort.

  • @RLDacademyGATEeceAndAdvanced · 2 years ago

    Nice presentation.

  • @OriginalJoseyWales · 5 years ago

    You are very smart and knowledgeable.

  • @cr4zyg3n36 · 4 years ago

    Please make more!!! Great Videos!

  • @Harish-ou4dy · 4 years ago

    Is there a theorem which says the weights and biases will eventually make correct predictions for small alpha and linearly separable data?
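
There is such a result: the classical perceptron convergence theorem (a standard result, not something proved in the video) states that if the data are linearly separable with margin gamma, and every input satisfies ||x|| <= R, then the perceptron rule makes at most

    (R / gamma)^2

weight updates before it classifies every training point correctly, for any positive learning rate alpha (assuming the weights start at zero).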

  • @128mtd128 · 3 years ago

    Can you make a math course on vectors, from basic calculations up to this stuff? I don't understand vectors and the e3.

  • @PrakashSingh-bs2qv · 3 years ago

    Great explanation.

  • @VikasSingh-tc2pe · 4 years ago

    How do we update the bias in the last example?
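
In the standard perceptron rule the bias is adjusted exactly like a weight whose input is always 1:

    b <- b + alpha*(t - p(i))

Equivalently, prepend a constant 1 to every input vector and treat the bias as an ordinary weight, which appears to be what the i = (1,1,1) example further down does.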

  • @paristonhill2752 · 4 years ago

    When you were cycling through the inputs to update the weights, only the third input was predicted correctly. Will the algorithm come back to the inputs that it couldn't predict correctly? If yes, then at what stage?
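
A sketch of the usual training loop in Python (the toy data here is made up, not the video's example): the algorithm keeps sweeping over the whole input set, so points that were misclassified on one pass are seen again on every later pass, until an entire pass produces no errors (or a maximum number of epochs is reached).

    import numpy as np

    # Toy, linearly separable data: each row is (bias input, x, y); targets are 0/1.
    X = np.array([[1.0,  2.0,  1.0],
                  [1.0,  1.0,  3.0],
                  [1.0, -1.0, -2.0],
                  [1.0, -2.0, -1.0]])
    t = np.array([1, 1, 0, 0])

    w = np.zeros(3)   # weights (including the bias weight)
    alpha = 0.1       # learning rate

    for epoch in range(100):                      # every epoch revisits ALL inputs
        errors = 0
        for xi, ti in zip(X, t):
            p = 1 if np.dot(w, xi) > 0 else 0     # step activation
            if p != ti:                           # update only on mistakes
                w = w + alpha * (ti - p) * xi
                errors += 1
        if errors == 0:                           # stop once every input is predicted correctly
            break

    print(epoch, w)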

  • @aslaydnlar9663 · 2 years ago

    amazing

  • @pareshb6810 · 4 years ago

    Great work!

  • @diptanshude2525 · 3 years ago

    Great Videoooo!!!!!

  • @sriramswaminathan1502 · 5 years ago

    excellent explanation

  • @ohmakademi · 5 years ago

    Thank you very much. This is a very useful tutorial.

  • @adeelahmad9875 · 4 years ago

    How do we determine the target, the omegas, and the learning rate?

  • @cr4zyg3n36 · 4 years ago

    Thanks for this clear explanation

  • @ahmedelsabagh6990 · 4 years ago

    Simple and helpful

  • @OriginalJoseyWales · 4 years ago

    Why do we introduce a bias unit in the first place?
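
One standard reason (not specific to this video): without a bias the decision boundary w1*x + w2*y = 0 must pass through the origin, so the perceptron could never separate two classes whose dividing line misses (0, 0). With the bias the boundary is w1*x + w2*y + b = 0, which can be shifted to any position in the plane.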

  • @lanfeima5167 · 2 years ago

    Best!

  • @nevzylka2589 · 5 years ago · +1

    Thank you so much. This is extremely helpful!

  • @ayoublaouarem3454 · 3 years ago

    In the case of a multi-layer perceptron, we use the same formula: alpha*(t - p(i)).
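
For a single perceptron with a known target t, the full rule is w <- w + alpha*(t - p(i))*i. For a multi-layer perceptron, though, the usual approach is gradient descent with backpropagation and a differentiable activation, because hidden units have no directly given target to plug into (t - p(i)).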

  • @anikethdas1 · 5 years ago

    Hey Jacob, I'm sorry if I got this wrong, but shouldn't the group of points on the top be getting the value 0 instead of 1, and the group below get 1 instead of 0 (at about 12:40)? But I guess you corrected it later.

    • @JacobSchrum · 5 years ago

      Consider the point (0,1000). This is clearly above the line. What value would it have? 0*wx + 1000*wy + b = 1000*0.5 = 500. a(500) = 1 because 500 is positive, so 1 is the correct classification for points on top. It is possible to set the weights and biases in such a way that flips where 0 and 1 are, but this example is correct.

    • @Bridgelessalex · 5 years ago

      Why?
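
A quick check of the arithmetic in Jacob's reply, in Python (the weights wx = 0, wy = 0.5 and bias b = 0 are hypothetical values consistent with the 1000*0.5 = 500 step above):

    wx, wy, b = 0.0, 0.5, 0.0          # hypothetical weights and bias
    x, y = 0, 1000                     # a point clearly above the line

    net = wx * x + wy * y + b          # 500.0
    prediction = 1 if net > 0 else 0   # step activation: positive net -> class 1
    print(net, prediction)             # 500.0 1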

  • @Onevideo378 · 5 years ago

    Very well explained. Thanks a lot!

  • @Pmarmagne · 4 years ago · +1

    Can someone explain to me what the x and y axes represent concretely?

    • @JacobSchrum · 4 years ago

      In this particular example, one of the perceptron inputs is x, and the other is y. The reason we are trying to draw a line (hyperplane) in this space is that we want to have a way of categorizing all possible inputs. The perceptron assigns a class to each possible set of inputs based on which side of the line you end up on. This can be a little bit confusing, but it is even worse in the kinds of high-dimensional spaces where neural networks are typically applied.
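
A small illustration of that idea in Python (the line y = x, i.e. weights (-1, 1) with bias 0, is a made-up example rather than the one from the video): each input is a point (x, y) in the plane, and the class it receives is simply which side of the line it lands on.

    import numpy as np

    w = np.array([-1.0, 1.0])   # hypothetical weights: the boundary -x + y = 0 is the line y = x
    b = 0.0

    def classify(x, y):
        # Positive side of the line -> class 1, the other side -> class 0.
        return 1 if np.dot(w, [x, y]) + b > 0 else 0

    for point in [(0, 1000), (2, 5), (5, 2), (3, -1)]:
        print(point, classify(*point))   # points above y = x print 1, points below print 0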

  • @ahmadadil1576 · 4 years ago

    Thanks, very helpful.

  • @devenjainn · 5 years ago · +2

    Here by watching @sakho kun

  • @kaushikraghupathrunitechie · 4 years ago

    Loved it!

  • @miche2105 · 5 years ago

    very helpful, thanks

  • @hackein9435 · 3 years ago

    Good one ;)

  • @rebeccawalker839 · 4 years ago

    Thank you a lot.

  • @cliffmathew · 5 years ago

    Thanks. Helpful.

  • @magnuswootton6181 · 1 year ago

    You can't use a pure step function because you can't propagate the error backward through it!!!
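
A quick numerical illustration of that point (not from the video; NumPy, with a central-difference derivative):

    import numpy as np

    def step(x):
        # Heaviside step: outputs 0 or 1, flat everywhere except at 0
        return (x >= 0).astype(float)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    x = np.array([-2.0, -0.5, 0.5, 2.0])
    h = 1e-4
    print((step(x + h) - step(x - h)) / (2 * h))        # all zeros: no gradient to propagate
    print((sigmoid(x + h) - sigmoid(x - h)) / (2 * h))  # strictly positive: gradient flows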

  • @yodarocco · 4 years ago · +2

    The volume of the voice is damned low

  • @izetassky · 5 years ago

    I think it's not clear what you did from @22:00.

    • @JacobSchrum · 5 years ago · +1

      alpha*(t - p(i)) = 0.1*1. w = (0,0,0) and i = (1,1,1), so w + alpha*(t - p(i))*i = (0,0,0) + 0.1*(1,1,1) = (0,0,0) + (0.1,0.1,0.1) = (0.1,0.1,0.1)
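
A minimal Python sketch of that single update (the "net input > 0" threshold, and reading the leading 1 in i as the constant bias input, are assumptions about the video's conventions):

    import numpy as np

    alpha = 0.1                       # learning rate from the example
    w = np.array([0.0, 0.0, 0.0])     # current weights, all zero at this point
    i = np.array([1.0, 1.0, 1.0])     # input vector from the reply above
    t = 1                             # target output for this input

    p = 1 if np.dot(w, i) > 0 else 0  # step activation: the net input is 0 here, so p = 0
    w = w + alpha * (t - p) * i       # perceptron update rule
    print(w)                          # [0.1 0.1 0.1], matching the reply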

  • @pmtk2055 · 5 years ago

    Great video. BTW you sound like Mark Zuckerberg.

    • @JacobSchrum · 4 years ago

      I don't think that's a compliment.

  • @olatunjifelix2102 · 4 years ago

    After 21 minutes, everything becomes confusing