Backpropagation in Convolutional Neural Networks (CNNs)

  • Published 31 Jul 2024
  • In this video we look at backpropagation in a convolutional neural network (CNN). We use a simple CNN with zero padding (padding = 0) and a stride of two (stride = 2).
    ► SUPPORT THE CHANNEL
    ➡ Paypal: www.paypal.com/donate/?hosted...
    These videos can take several weeks to make. Any donations towards the channel will be highly appreciated! 😄
    ► SOCIALS
    X: x.com/far1din_
    Github: github.com/far1din
    Manim code: github.com/far1din/manim#back...
    ---------- Content ----------
    00:00 - Introduction
    00:51 - The Forward Propagation
    02:23 - The Backpropagation
    03:31 - (Intuition) Setting up Formula for Partial Derivatives
    06:07 - Simplifying Formula for Partial Derivatives
    07:05 - Finding Similarities
    08:55 - Putting it All together
    ---------- Contributions ----------
    Background music: pixabay.com/users/balancebay-...
    #computervision #convolutionalneuralnetwork #ai #neuralnetwork #deeplearning #neuralnetworksformachinelearning #neuralnetworksexplained #neuralnetworkstutorial #neuralnetworksdemystified #computervisionandai #backpropagation

COMMENTS • 94

  • @louissimon2463
    @louissimon2463 1 year ago +9

    Great video, but I don't understand how we can find the value of the dL/dzi terms. At 7:20 you make it seem like dL/dzi = zi; is that correct?

    • @far1din
      @far1din  9 months ago +4

      No, they come from the loss function. I explain this at 4:17.
      It might be a bit unclear, so I highly recommend you watch the video from 3blue1brown: ua-cam.com/video/tIeHLnjs5U8/v-deo.htmlsi=Z6asTm87XWcW1bVn 😃

    • @rtpubtube
      @rtpubtube 7 months ago +4

      I'm with @louissimion: you show how dL/dw1 is related to dz1/dw1 + ... (etc.), but you never show/explain where dL/dz1 (etc.) comes from. Poof - miracle occurs here. Having a numerical example would help a lot. This "theory/symbology"-only post is therefore incomplete/useless from a learning/understanding standpoint.

    • @mandy11254
      @mandy11254 2 months ago +1

      @@rtpubtube It's quite literally what he wrote. He hasn't defined a loss function, so that's just what it is from the chain rule. If you're asking how the actual value of dL/dz1 is computed: the last layer has its own set of weights besides the ones shown in the video, in addition to an activation function. You use those and a defined loss function to compute dL/dzi. It's similar to what you see in standard NNs. If you studied neural networks, you should know this. This is a video about CNNs, not an intro to NNs. Go study that before this. It's not his job to point out every little thing.

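For readers asking where the dL/dz_i values actually come from: here is a minimal numerical sketch in Python. The single dense output layer with weights v and the squared-error loss are assumptions for illustration only; the video does not define them.

    import numpy as np

    # Assumed setup (not from the video): the flattened 2x2 conv output z feeds
    # one dense output layer with weights v, scored with a squared-error loss.
    z = np.array([0.5, -1.0, 2.0, 0.3])   # z1..z4 from the convolution
    v = np.array([0.2, 0.7, -0.4, 1.1])   # dense-layer weights (hypothetical)
    y = 1.0                               # target

    y_hat = v @ z                         # forward through the dense layer
    loss = (y_hat - y) ** 2               # squared-error loss

    # Chain rule: dL/dz_i = dL/dy_hat * dy_hat/dz_i = 2 * (y_hat - y) * v_i
    dL_dz = 2 * (y_hat - y) * v           # these are the dL/dz_i terms the video plugs in
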
  • @khayyamnaeem5601
    @khayyamnaeem5601 1 year ago +9

    Why is this channel so underrated? You deserve more subscribers and views.

    • @eneadriancatalin
      @eneadriancatalin 1 year ago +1

      Perhaps developers use ad blockers, and as a result YouTube needs to ensure revenue by not promoting these types of videos (that's my opinion).

  • @abhimanyugupta532
    @abhimanyugupta532 2 months ago +1

    Been trying to understand backpropagation in CNNs for years, until today! Thanks a ton mate!

    • @yosukesharp
      @yosukesharp 2 months ago

      It was an obvious, primitive algo, dude... people like you are being called "data scientists" now, which is really sad...

  • @JessieJussMessy
    @JessieJussMessy 1 year ago +2

    This channel is a hidden gem. Thank you for your content

  • @DVSS77
    @DVSS77 1 year ago

    Really clear explanation and good pacing. I felt I understood the math behind backpropagation for the first time after watching this video!

  • @zemariamm
    @zemariamm 9 months ago +3

    Fantastic explanation!! Very clear and detailed, thumbs up!

  • @sourabhverma9034
    @sourabhverma9034 2 months ago

    Really intuitive and great animations.

  • @RAHUL1181995
    @RAHUL1181995 1 year ago

    This was really helpful... Thank you so much for the visualization... Keep up the good work... Looking forward to your future uploads.

  • @nizamuddinkhan9443
    @nizamuddinkhan9443 1 year ago +2

    Very well explained. I searched many videos, but nobody explained the change in the filter's weights. Thank you so much for this simple animated explanation.

  • @guoguowg1443
    @guoguowg1443 3 months ago

    great stuff man, crystal clear!

  • @bambusleitung1947
    @bambusleitung1947 3 months ago

    great job. this explanation is really intuitive

  • @saikoushik4064
    @saikoushik4064 5 months ago

    Great explanation, helped me understand what's happening in the background.

  • @DSLDataScienceLearn
    @DSLDataScienceLearn 6 months ago

    Great explanation, clear, direct, and understandable. Sub!

  • @haideralix
    @haideralix 9 months ago +2

    I have seen a few videos before; this one is by far the best. It breaks down each concept and answers all the questions that come to mind. The progression and the explanation are the best.

    • @far1din
      @far1din  9 months ago

      Thank you! 🔥

  • @ramazanyel5979
    @ramazanyel5979 2 months ago

    excellent. the exact video i was looking for.

  • @giacomorotta6356
    @giacomorotta6356 1 year ago

    Great video, underrated channel, please keep it up with the CNN videos!

  • @pedroviniciuspereirajunho7244
    @pedroviniciuspereirajunho7244 7 months ago

    Amazing!
    I was looking for some material like this a long time ago and only found it here, beautiful :D

    • @far1din
      @far1din  7 months ago +1

      Thank you my brother 🔥

  • @MarcosDanteGellar
    @MarcosDanteGellar 1 year ago

    the animations were super useful, thanks!

  • @paedrufernando2351
    @paedrufernando2351 1 year ago

    Your channel is a hidden gem. My suggestion is to start a Discord and get some crowdfunding and one-on-ones for people who want to learn from you. You are gifted at teaching.

  • @boramin3077
    @boramin3077 22 days ago

    Best video to understand what is going on under the hood of a CNN.

  • @shazzadhasan4067
    @shazzadhasan4067 8 months ago

    Great explanation with cool visual. Thanks a lot.

    • @far1din
      @far1din  8 months ago

      Thank you my friend 😃

  • @aliewayz
    @aliewayz 2 months ago

    really beautiful, thanks.

  • @markuskofler2553
    @markuskofler2553 1 year ago +5

    Couldn’t explain it better myself … absolutely amazing and comprehensible presentation!

  • @Peterpeter-hr8gg
    @Peterpeter-hr8gg 9 months ago +1

    What I was looking for. Well explained.

  • @osamamohamedos2033
    @osamamohamedos2033 3 months ago

    Masterpiece 💕💕

  • @elgs1980
    @elgs1980 1 year ago +1

    Thank you so much!!! This video is so so so well done!

    • @far1din
      @far1din  1 year ago

      Thank you. Hope you got some value out of this! 💯

  • @heyman620
    @heyman620 10 months ago +2

    What a masterpiece.

  • @user-gg2ov3up5k
    @user-gg2ov3up5k 11 months ago

    Nicely put, thank you so much.

  • @akshchaudhary5444
    @akshchaudhary5444 6 months ago

    amazing video thanks!

  • @LeoMarchyok-od5by
    @LeoMarchyok-od5by 3 months ago

    Best explanation

  • @Joker-ez2fm
    @Joker-ez2fm 7 months ago

    Please do not stop making these videos!!!

    • @far1din
      @far1din  7 months ago +1

      I won’t let you down Joker 🔥🤝

  • @aikenkazin4096
    @aikenkazin4096 8 months ago

    Great explanation and visualization

    • @far1din
      @far1din  8 months ago

      Thank you my friend 🔥🚀

  • @jayeshkurdekar126
    @jayeshkurdekar126 1 year ago +2

    You are a great example of fluidity of thought and words. Great explanation!

  • @AsilKhalifa
    @AsilKhalifa 26 days ago

    Thanks a lot!

  • @PlabonTheSadEngineer
    @PlabonTheSadEngineer 6 months ago

    Please continue your videos!!

  • @gregorioosorio16687
    @gregorioosorio16687 10 months ago +1

    Thanks for sharing!

  • @objectobjectobject4707
    @objectobjectobject4707 1 year ago

    Great example thanks a lot

  • @harshitbhandi5005
    @harshitbhandi5005 8 months ago

    great explanation

  • @ManishKumar-pb9gu
    @ManishKumar-pb9gu 7 months ago

    Thank you so much for this.

  • @samiswilf
    @samiswilf 1 year ago

    Well done.

  • @yuqianglin4514
    @yuqianglin4514 8 months ago

    Fab video! Helped me a lot.

    • @far1din
      @far1din  8 months ago

      Glad to hear that you got some value out of this video! :D

  • @ziligao7594
    @ziligao7594 2 months ago

    Amazing

  • @farrugiamarc0
    @farrugiamarc0 4 months ago

    This is a topic which is rarely explained online, but it was very clearly explained here. Well done.

  • @SolathPrime
    @SolathPrime 1 year ago +7

    Well explained, now I need to code it myself.

    • @far1din
      @far1din  1 year ago +4

      Haha, that’s the hard part

    • @SolathPrime
      @SolathPrime 1 year ago +5

      @@far1din I think I came up with a solution. Here:

      import numpy as np
      from scipy.signal import convolve2d

      def backward(self, output_gradient, learning_rate):
          kernels_gradient = np.zeros(self.kernels_shape)
          input_gradient = np.zeros(self.input_shape)
          for i in range(self.depth):
              for j in range(self.input_depth):
                  kernels_gradient[i, j] = convolve2d(self.input[j], output_gradient[i], "valid")
                  input_gradient[j] += convolve2d(output_gradient[i], self.kernels[i, j], "same")
          self.kernels -= learning_rate * kernels_gradient
          self.biases -= learning_rate * output_gradient
          return input_gradient

      First I initialized the kernel gradient as an array of zeros with the kernel shape, then I iterated through the depth of the kernels and then the depth of the input, computing each gradient with respect to the kernel. I did the same to compute the input gradients.
      Your vid helped me understand the backward method better, so I have to say thank you sooo much for it.

    • @SolathPrime
      @SolathPrime 1 year ago

      @@far1din I'll document the solution and put it here. When I do, please pin the comment.

    • @far1din
      @far1din  1 year ago +1

      @@SolathPrime That’s great my friend. Will pin 💯
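
For anyone trying to run the backward method posted in this thread: it follows the common scipy-based from-scratch layout, in which the forward pass is a "valid" cross-correlation per channel pair (and the input gradient is usually computed with a "full" convolution). A minimal sketch of a matching forward method, under those assumptions and reusing the attribute names from the snippet above, could look like this:

    import numpy as np
    from scipy.signal import correlate2d

    def forward(self, input):
        # Cache the input; the backward method above needs it for kernels_gradient.
        self.input = input
        # Start from the biases and accumulate one "valid" cross-correlation
        # per (output channel i, input channel j) pair.
        self.output = np.copy(self.biases)
        for i in range(self.depth):
            for j in range(self.input_depth):
                self.output[i] += correlate2d(self.input[j], self.kernels[i, j], "valid")
        return self.output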

  • @OmidDavoudnia
    @OmidDavoudnia 3 months ago

    Thanks.

  • @simbol5638
    @simbol5638 8 months ago

    +1 sub, excellent video

    • @far1din
      @far1din  8 months ago

      Thank you! 😃

  • @r0cketRacoon
    @r0cketRacoon 10 days ago

    Thank you very much for this video, but it would probably be more helpful if you also added a max pooling layer.

  • @rodrigoroman4886
    @rodrigoroman4886 9 months ago +2

    Great video!! Your explanation is the best I have found.
    Could you please tell me what software you use for the animations?

    • @far1din
      @far1din  9 months ago

      I use manim 😃
      www.manim.community

  • @PeakyBlinder-lz2gh
    @PeakyBlinder-lz2gh 6 months ago

    thx

  • @govindnair5407
    @govindnair5407 4 months ago

    What is the loss function here, and how are the values in the flattened z matrix used to compute yhat?

  • @im-Anarchy
    @im-Anarchy 8 months ago +2

    Perfect. One suggestion: make the videos a little longer, 20-30 minutes is a good number.

    • @far1din
      @far1din  8 months ago +1

      Haha, most people don't like these kinds of videos being too long. The average watch time for this video is about 3 minutes :P

    • @im-Anarchy
      @im-Anarchy 8 months ago +1

      @@far1din Oh shii! 3 minutes, that was very unexpected. Maybe it's because people revisit the video to revise a specific topic.

    • @far1din
      @far1din  8 months ago

      Must be 💯

  • @piyushkumar-wg8cv
    @piyushkumar-wg8cv 9 months ago +1

    Great explanation. Can you please tell me which tool you use for making these videos?

    • @far1din
      @far1din  9 months ago

      Thank you my friend! I use manim 😃
      www.manim.community

  • @arektllama3767
    @arektllama3767 1 year ago +2

    1:15 why do you iterate in steps of 2? If you iterated by 1 then you could generate a 3x3 layer image. Is that just to save on computation time/complexity, or is there some other reason for it?

    • @far1din
      @far1din  1 year ago +3

      The reason why I used a stride of two (iterations in steps of two) in this video is partially random and partially because I wanted to highlight that the stride when performing backpropagation should be the same as when performing the forward propagation. In most learning materials I have seen, they usually use a stride of one, hence a stride of one for the backpropagation. This could lead to confusion when operating with larger strides.
      The stride could technically be whatever you like (as long as you keep it within the dimensions of the image/matrix). I could have chosen another number for the stride as you suggested. In that case, with a stride of one, the output would be a 3 x 3 matrix/image. Some will say that a shorter stride will encapsulate more information than a larger one, but this becomes “less true” as the size of the kernel increases. As far as I know there are no “rules” for when to use larger strides and not. Please let me know if this notion has changed as everything changes so quickly in this field! 🙂

    • @arektllama3767
      @arektllama3767 1 year ago +3

      @@far1din I never considered how stride length could change depending on kernel size. I guess that makes sense: the larger kernel could cover the same data as a small kernel, just in fewer steps/iterations. I also figured you intentionally generated a 2x2 image since that's a lot simpler than a 3x3 and this is an educational video. Thanks for the feedback, that was really insightful!
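
Following up on the output sizes discussed in this thread: with zero padding, each spatial dimension of the output has floor((n - k) / stride) + 1 positions. A small sketch below; the 5x5 input and 3x3 kernel are only an assumption consistent with the 2x2 and 3x3 outputs mentioned above, not sizes confirmed by the video.

    def conv_output_size(n, k, stride, padding=0):
        # Number of output positions per spatial dimension of a convolution.
        return (n + 2 * padding - k) // stride + 1

    print(conv_output_size(5, 3, stride=2))  # 2 -> the 2x2 map with a stride of two
    print(conv_output_size(5, 3, stride=1))  # 3 -> the 3x3 map the comment describes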

  • @SiddhantSharma181
    @SiddhantSharma181 2 months ago

    Is the stride only along the rows, and not along the columns? Is that common or just a simplification?

  • @user-ki3jf6gu6l
    @user-ki3jf6gu6l 4 months ago

    I've had no trouble learning about 'vanilla' neural networks. Although your videos are great, I can't seem to find resources that delve a little deeper into the explanations of how CNNs work. Are there any resources you would recommend?

  • @ItIsJan
    @ItIsJan 10 months ago

    5:24
    Does this just mean we divide z1 by w1 and multiply by L divided by z1, and do that for all z's to get the partial derivative of L with respect to w1?

    • @far1din
      @far1din  9 months ago

      It’s not that simple. Doing the actual calculations is a bit more tricky. Given no activation function, Z1 = w1*pixel1 + w2*pixel2 + w3*pixel3… you now have to take the derivative of this with respect to w1, then y = z1*w21 + z2*w22… take the derivative of y with respect to z1 etc. The calculus can be a bit too heavy for a comment like this.
      I highly recommend you watch the video by 3blue1brown: ua-cam.com/video/tIeHLnjs5U8/v-deo.htmlsi=Z6asTm87XWcW1bVn 😃
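
To make the reply above concrete: dL/dw1 is a sum of products from the chain rule, not a division. Below is a tiny numerical sketch with purely hypothetical values; dz_i/dw1 is the input pixel that w1 was multiplied with when z_i was computed, and the dL/dz_i terms flow back from the layers after the convolution.

    import numpy as np

    # Hypothetical values only.
    dL_dz  = np.array([0.4, -0.1, 0.3, 0.2])    # dL/dz_i from the later layers
    dz_dw1 = np.array([1.0,  0.0, 2.0, -1.0])   # dz_i/dw1 = input pixel under w1 at each position

    # Chain rule: dL/dw1 = sum_i dL/dz_i * dz_i/dw1
    dL_dw1 = np.sum(dL_dz * dz_dw1)             # = 0.8 for these numbers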

  • @bnnbrabnn9142
    @bnnbrabnn9142 4 months ago

    What about the weights of the fully connected layer?

    • @mandy11254
      @mandy11254 2 months ago

      No point in adding it to this video since that's something you should know from neural networks. That's why he just leaves it as dL/dzi.

  • @MoeQ_
    @MoeQ_ 1 year ago +2

    dL/dzi = ??

    • @far1din
      @far1din  9 months ago +1

      I explain the term at 4:17.
      It might be a bit unclear, so I highly recommend you watch the video from 3blue1brown: ua-cam.com/video/tIeHLnjs5U8/v-deo.htmlsi=Z6asTm87XWcW1bVn 😃

  • @user-oq7ju6vp7j
    @user-oq7ju6vp7j 9 months ago

    You have nice videos that helped me better understand the concept of CNNs. But from this video it is not really obvious that the matrix dL/dw is a convolution of the image matrix and the dL/dz matrix, as shown here: ua-cam.com/video/Pn7RK7tofPg/v-deo.html. The stride of two is also a little bit confusing.

    • @far1din
      @far1din  9 months ago

      Thank you for the comment! I believe he is doing the exact same thing (?)
      I chose to have a stride of two in order to highlight that the stride should be similar to the stride used during the forward propagation. Most examples stick with a stride of one. I now realize it might have caused some confusion :p

  • @burerabiya7866
    @burerabiya7866 1 year ago

    Hello, well explained. I need your presentation.

    • @far1din
      @far1din  11 months ago

      Just download it 😂

  • @int16_t
    @int16_t 10 months ago

    w^* is an abuse of math notation, but it's convenient.

  • @CorruptMem
    @CorruptMem 1 year ago

    I think it's spelled "Convolution"

    • @far1din
      @far1din  1 year ago +1

      Haha thank you! 🚀