Neural Networks: Multi-Layer Perceptrons: Building a Brain From Layers of Neurons

  • Published 28 Nov 2024

COMMENTS • 28

  • @ajith.studyingmtech.atbits1512 • 3 years ago +1

    Very crisp, simple explanation of neural networks - you laid a great foundation for me to build the skyscraper on. Thanks a lot.

  • @Carlosdanielpuerto • 3 years ago +2

    Incredibly clear explanation of the concepts! Thanks a lot

  • @vihaankadiyan9996 • 4 years ago +3

    Best content that I can find on the internet regarding MLPs.

  • @robthorn3910 • 6 years ago +15

    I think it's worth mentioning that the magic of backpropagation is the "chain rule".

    • @sahajshukla • 6 years ago

      True, I totally agree. The gradient descent approach as a whole is indeed very fascinating :)
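
A minimal sketch of the chain rule mentioned in this thread (the input, weight, and target below are made-up values, not the video's): for a single sigmoid unit trained on squared error, the gradient of the error with respect to a weight is a product of three local derivatives, and chaining such products together is all backpropagation does.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Made-up values for illustration only.
x, w, b, target = 0.5, 0.8, 0.1, 1.0

# Forward pass: weighted sum -> activation -> error.
z = w * x + b
a = sigmoid(z)
E = 0.5 * (a - target) ** 2

# Chain rule: dE/dw = dE/da * da/dz * dz/dw.
dE_da = a - target
da_dz = a * (1.0 - a)        # derivative of the sigmoid at z
dz_dw = x
dE_dw = dE_da * da_dz * dz_dw

# Sanity check against a finite-difference estimate; the two numbers should agree closely.
eps = 1e-6
E_plus = 0.5 * (sigmoid((w + eps) * x + b) - target) ** 2
print(dE_dw, (E_plus - E) / eps)
```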

  • @jayjayDrm • 5 years ago +3

    21:59 This is the intuitive explanation of backprop I was looking for! If you know the weight change that would reduce the error, you can simply apply that change to the output of the predecessor instead of to the weight; the result is the same. And because you know the predecessor's calculation from the forward propagation, you can pass the change through to the predecessor's inputs. It's hard to explain, but I hope I got it.
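
A small numeric companion to the comment above (again with made-up numbers): for a connection from a predecessor's output h to an output unit through weight w, the gradient with respect to the weight and the gradient with respect to the predecessor's output share the same downstream error signal delta, which is why the "change" can be handed back to the previous layer instead of being applied only to the weight.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Made-up values: h is the predecessor's output, w the connecting weight.
h, w, target = 0.6, 0.4, 1.0

y = sigmoid(w * h)                     # output unit
delta = (y - target) * y * (1.0 - y)   # error signal at the output

dE_dw = delta * h   # how to change the weight ...
dE_dh = delta * w   # ... or, equivalently, how the predecessor's output would need
                    # to change; this is the quantity that gets propagated backward
print(dE_dw, dE_dh)
```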

  • @tusharsingh7438 • 4 years ago +3

    This is the best video on MLP

  • @retskcirt69 • 3 years ago

    Excellent video, very well explained

  • @hackein9435 • 3 years ago

    Finally I found it after 2 days of searching.

  • @abdullhaseeb4157 • 3 years ago +1

    How did you solve for h1 and h2? I couldn't get my head around that math. For the x in the sigmoid, which value of x did you use, 0 or 1? And also, what are the values of x and y? (A worked sketch follows this thread.)

    • @JacobSchrum • 11 months ago

      Refer to my video on simple perceptrons: ua-cam.com/video/aiDv1NPdXvU/v-deo.html
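
To make the question above concrete: each hidden unit applies the sigmoid to its own weighted sum of the inputs, so the "x" inside the sigmoid is that weighted sum, not the raw 0 or 1. A minimal sketch with made-up weights (the video's actual values are not reproduced here):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Made-up example: two inputs feeding two hidden units.
x1, x2 = 0.0, 1.0                # the input pattern being fed forward
w11, w12, b1 = 0.5, -0.3, 0.1    # weights and bias into hidden unit h1
w21, w22, b2 = -0.4, 0.8, 0.0    # weights and bias into hidden unit h2

# Each hidden unit squashes its own weighted sum.
h1 = sigmoid(w11 * x1 + w12 * x2 + b1)
h2 = sigmoid(w21 * x1 + w22 * x2 + b2)
print(h1, h2)
```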

  • @debasismohanty7552 • 5 years ago +1

    Can neural networks be used for stock data analysis?

  • @SEOMEDIABOT • 4 years ago

    Well explained. Simple English.

  • @saurabh1chhabra • 4 years ago +2

    I'm happy to be the 10kth subscriber

  • @yasincoskun4400 • 2 years ago

    Thanks a lot

  • @sgrimm7346 • 2 years ago

    Self-Organizing Multi Linear networks... no backprop, no calculus, no bias weight, and in most networks no special activation function. Most of the networks that I build use this method.

  • @funlearninge-tech4392 • 5 years ago +1

    Come on, backpropagation is not that complex; just try to add it.

  • @lisali6205 • 2 years ago

    genius

  • @adhirajmajumder • 4 years ago +1

    You're just like Andrew Ng lite...

  • @turbolader6734 • 4 years ago

    Best

  • @FerMJy • 6 years ago +3

    22:42 Do you not know, or do you truly believe what you are saying?
    Because that's the most important part of neural networks... if you don't know how to backprop, you are doomed...
    And I'm still looking for one explanation of how it works when the previous layer has more than 1 neuron... (a sketch for that case follows this thread)

    • @sahajshukla • 6 years ago

      Hi Fernando, I understand your anger. But the thing is, you already have a set of labels available to you, since this is a supervised learning algorithm. So you can update the weights as Wnew = Wold + a(label - y)x, where y is the obtained output and a is the learning rate. This holds for all of the weights on a certain node. I believe he didn't mention it because it's quite a universal rule: you always use gradient descent, or the delta learning rule. I hope this helps, cheers :)

    • @sahajshukla • 6 years ago

      If the previous layer has more than one node, you take one output node and work on it like a separate Adaline node. This can then be performed on the previous layer's nodes too. Since the weight updates follow a backward path, it is called backpropagation :)

    • @FerMJy • 6 years ago

      @@sahajshukla No you don't... you have to differentiate the calculation of the weighted sum...

    • @sahajshukla • 6 years ago

      True, that's the calculation part. You actually differentiate the error to update the weights and biases. That's exactly what I just said. It's called delta learning: the weight updates go from the nth layer to the (n-1)th layer and so on, up to the weights between the input layer and the first hidden layer.

    • @sahajshukla • 6 years ago

      @@FerMJy The error actually follows a parabolic curve for gradient descent. The squared error is Error = (y - x_i)^2, which is a parabolic equation. So, in order to minimise it, you take the slope (the tangent) at that point and move against it. This process itself is called gradient descent. You do it for all weights separately, with i running from 1 to n.
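
Since the thread above keeps returning to the case of a previous layer with more than one neuron, here is a compact, generic sketch of one gradient-descent (delta-rule) update for a small 2-2-1 MLP with sigmoid units and squared error. The network size, weights, and learning rate are made up, and this is a textbook formulation rather than the video's exact derivation: each output delta is sent backward through every incoming weight, and each hidden unit sums the deltas arriving from all the units it feeds before scaling by its own sigmoid derivative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Made-up 2-2-1 network: 2 inputs, 2 sigmoid hidden units, 1 sigmoid output.
W1 = rng.normal(size=(2, 2))   # input -> hidden weights
b1 = np.zeros(2)
W2 = rng.normal(size=(1, 2))   # hidden -> output weights
b2 = np.zeros(1)

x = np.array([0.0, 1.0])       # one training input
t = np.array([1.0])            # its target label
lr = 0.5                       # learning rate

# Forward pass.
h = sigmoid(W1 @ x + b1)       # hidden activations (more than one neuron)
y = sigmoid(W2 @ h + b2)       # network output

# Backward pass: the delta rule applied layer by layer (the chain rule again).
delta_out = (y - t) * y * (1.0 - y)             # error signal at the output layer
delta_hid = (W2.T @ delta_out) * h * (1.0 - h)  # each hidden unit sums the deltas
                                                # flowing back through its outgoing
                                                # weights, then scales by its own
                                                # sigmoid derivative

# Gradient-descent update for every weight and bias.
W2 -= lr * np.outer(delta_out, h)
b2 -= lr * delta_out
W1 -= lr * np.outer(delta_hid, x)
b1 -= lr * delta_hid
```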