Forward Propagation and Backward Propagation | Neural Networks | How to train Neural Networks

  • Published 1 Dec 2024

COMMENTS • 106

  • @MachineLearningWithJay
    @MachineLearningWithJay  3 years ago +10

    If you found this video helpful, then hit the *_like_* button👍, and don't forget to *_subscribe_* ▶ to my channel as I upload a new Machine Learning Tutorial every week.

    • @arpit743
      @arpit743 3 years ago

      Excellent video! Bro, why do we have multiple neurons in every hidden layer? Is it from the point of view of introducing non-linearity?

    • @MachineLearningWithJay
      @MachineLearningWithJay  3 years ago

      @@arpit743 Yes, but not entirely. Multiple neurons allow us to capture complicated patterns. A single neuron won’t be able to capture complicated patterns from the dataset.

    • @arpit743
      @arpit743 3 years ago

      @@MachineLearningWithJay Thanks a lot! But why is it that multiple neurons allow for complicated boundaries?

    • @Sigma_Hub_01
      @Sigma_Hub_01 2 years ago +1

      @@arpit743 More refined outputs let you see the limitations of your network's boundaries, so you can pinpoint the exact location and correct it as per your needs. It doesn't allow for complicated boundaries; you are ALLOWED to see your complicated boundaries, and hence work through them.
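
An editor's aside on this thread: a single neuron can only draw one linear boundary, while even two hidden neurons plus a non-linear activation can represent XOR. A minimal sketch with hand-picked (not trained, purely illustrative) weights:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

# Hand-picked weights: a 2-neuron hidden layer computes XOR,
# which no single neuron (one linear boundary) can represent.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
w2 = np.array([1.0, -2.0])

def xor_net(x):
    h = relu(W1 @ x + b1)   # hidden layer introduces the non-linearity
    return w2 @ h           # linear readout

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, xor_net(np.array(x, dtype=float)))   # 0, 1, 1, 0
```

Dropping either hidden neuron breaks the example, which is one concrete sense in which width buys expressiveness.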

    • @debanjannanda2081
      @debanjannanda2081 2 months ago

      Sir,
      there is a mistake at timestamp 0:41:
      a2[1] is wrong; it should be A2[2], because the values of a that you multiply with the weights in the weighted sum belong to the first hidden layer and are used to compute the second hidden layer's value, A2[2], not a2[1].
      But you really teach great.
      Thank you...

  • @coverquick490
    @coverquick490 2 years ago +7

    I've always felt as if I was on the cusp of understanding neural nets but this video brought me past the hump and explained it perfectly! Thank you so much!

    • @MachineLearningWithJay
      @MachineLearningWithJay  2 years ago

      I am really elated hearing this. Glad it helped you out. Thank you so much for your appreciation. 🙂

  • @kheireddine7889
    @kheireddine7889 2 years ago +7

    This video should be titled "Explain Forward and Backward Propagation to Me Like I'm Five". Thanks man, you saved me a lot of time.

    • @MachineLearningWithJay
      @MachineLearningWithJay  2 years ago +1

      One of the Best Comments I have seen. Thank you so much! And thanks for the title idea 😂😄

  • @hajiclub
    @hajiclub 13 days ago +2

    Boy you have a wonderful way of explaining things. Thanks loadsss :))

  • @Amyx11
    @Amyx11 1 year ago +1

    Literally best. Crisp and clear!! Thank you

  • @farabiislam2418
    @farabiislam2418 1 year ago +2

    You explain better than popular course instructor on deep learning

    • @MachineLearningWithJay
      @MachineLearningWithJay  1 year ago

      Thanks for the compliment 😇

    • @sajan2980
      @sajan2980 1 year ago

      I am sure he is talking about Andrew Ng lol. His explanation in that video is too detailed and the notations are too confusing. But the same explanation in his Machine Learning Specialization course is much better.

  • @PrithaMajumder
    @PrithaMajumder 4 months ago +2

    Thanks a lot for This Amazing Introductory Lecture 😊
    Lecture - 2 Completed from This Neural Network Playlist

  • @whoooare20
    @whoooare20 3 years ago +3

    You explained it in a very clear and easy way. Thank you, this is so helpful!

  • @sushantregmi2126
    @sushantregmi2126 2 years ago +1

    so glad I found this channel!!

  • @kunalbahirat7795
    @kunalbahirat7795 2 years ago +1

    Best video on YouTube for this topic

  • @saumyaagrawal7781
    @saumyaagrawal7781 3 months ago +1

    This was more helpful than my lectures!

  • @social.2184
    @social.2184 7 months ago

    Very informative video. Explained all the terms in a simple manner. Thanks a lot.

  • @nishigandhasatav3559
    @nishigandhasatav3559 2 years ago +1

    Absolutely loved the way you explain. So easy to understand. Thank you

  • @nooreldali7432
    @nooreldali7432 1 year ago

    Best explanation I've seen so far

  • @venompubgmobile7218
    @venompubgmobile7218 3 years ago +8

    I'm a bit confused by the exponent notations, since some of them don't correspond to the others

  • @petchiammala1430
    @petchiammala1430 2 years ago +1

    Super, sir. I have learned a lot from this, including the calculation method. It's very useful for our studies. Thank you, sir.

  • @maximillian7310
    @maximillian7310 2 years ago +1

    Thanks man. The slides were amazingly put together.

  • @rawanmohammed5552
    @rawanmohammed5552 3 years ago +1

    You are great. It will be very good if you continue.

  • @harshwardhankurale310
    @harshwardhankurale310 4 months ago +1

    Top Class Explanation!

  • @VC-dm7jp
    @VC-dm7jp 2 years ago +1

    Such a simple and neat explanation.

  • @AryanSingh-eq2jv
    @AryanSingh-eq2jv 1 year ago

    Best explanation, best playlists.
    I don't usually interact with the algorithm by giving likes and dropping comments, but you beat me into submission with this. Hopefully I understand the rest of it too lol.

  • @chandanpramanik4399
    @chandanpramanik4399 1 year ago

    Nicely explained. Keep up the good job!

  • @johnalvinm
    @johnalvinm 1 year ago

    Very helpful and to the point and correct!

  • @omarsheetan4417
    @omarsheetan4417 3 years ago +1

    Great video, and great explanation thanks dude!

  • @blackswann9555
    @blackswann9555 4 days ago +1

    Excellent video sir

  • @ahmeterdonmez9195
    @ahmeterdonmez9195 2 months ago +5

    At 0:58, in a1[1] = activation(....), the last term should be W13[1]*a3[0], not W13[1]*a3[1]

    • @blackswann9555
      @blackswann9555 4 days ago

      I was just trying to figure this out. I agree; the previous slides have the wrong notation as well.

  • @bincybincy1
    @bincybincy1 6 months ago

    This is so well explained... thank you

  • @mdtufajjalhossain1246
    @mdtufajjalhossain1246 3 years ago +1

    you are really awesome. love your teaching ability

    • @MachineLearningWithJay
      @MachineLearningWithJay  3 years ago +1

      Thank you so much !

    • @mdtufajjalhossain1246
      @mdtufajjalhossain1246 3 years ago +1

      @@MachineLearningWithJay, you are most welcome, bro. Please make a video implementing multiclass logistic regression using the one-vs-all/one-vs-one method

    • @MachineLearningWithJay
      @MachineLearningWithJay  3 years ago

      @@mdtufajjalhossain1246 Okay! Thanks for suggesting!

  • @kenjopac4247
    @kenjopac4247 2 years ago +1

    This was actually pretty straightforward

  • @michaelzheng951
    @michaelzheng951 1 year ago

    Fantastic explanation. Thank you

  • @sabeehamehtab6954
    @sabeehamehtab6954 3 years ago +1

    Awesome, really helpful! Thank you

  • @muhammadrabbanizainalabidi2409
    @muhammadrabbanizainalabidi2409 3 years ago +1

    Good Explanation !!

  • @DAYYAN294
    @DAYYAN294 2 years ago

    Excellent explanation, jazakallah bro

  • @waleedrafi1509
    @waleedrafi1509 3 years ago +1

    Great video!
    Please also make a video on SVM as soon as possible

    • @MachineLearningWithJay
      @MachineLearningWithJay  3 years ago

      Okay, sure! Thank you so much for your suggestion. I have been asked a lot to make a video on SVM, so I will try to make it just after finishing this Neural Network playlist.

  • @iZeyad95
    @iZeyad95 2 years ago +1

    Amazing work, keep it going :)

  • @babaabba9348
    @babaabba9348 3 years ago +1

    great video as always

  • @alpstech
    @alpstech 4 months ago +1

    You dropped something ... 👑

  • @Swarnajit_Saha
    @Swarnajit_Saha 1 year ago

    Your videos are very helpful. It would be great if you sorted the videos. Thank you 😇😇😇

  • @PrinceKumar-el7ob
    @PrinceKumar-el7ob 3 years ago +1

    Thank you sir, it was really helpful

  • @agrimgupta3221
    @agrimgupta3221 3 years ago +2

    Your videos on neural networks are really good. Can you please also upload videos on generalized neural networks? That would really be helpful. P.S. Keep up the good work!!!

    • @MachineLearningWithJay
      @MachineLearningWithJay  3 years ago +1

      Thank you so much for your feedback. I will surely consider making videos on generalized neural networks.

  • @ibrahimahmethan586
    @ibrahimahmethan586 2 years ago +1

    Good job. But in gradient descent, W2 and W1 must be updated simultaneously.
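
On simultaneous updates, a minimal sketch (a toy two-weight model with made-up values, not the video's network): every gradient is computed from the current weights first, and only then are the weights changed.

```python
# Toy model y = w2 * (w1 * x) with squared loss (y - t)^2.
x, t = 2.0, 1.0          # input and target (made up)
w1, w2 = 0.5, 0.5        # initial weights
lr = 0.01                # learning rate

for _ in range(500):
    y = w2 * (w1 * x)    # forward pass
    dy = 2.0 * (y - t)   # dLoss/dy
    # both gradients are taken at the CURRENT (w1, w2)...
    dw2 = dy * (w1 * x)
    dw1 = dy * (w2 * x)
    # ...and only then are the weights updated, simultaneously
    w2 -= lr * dw2
    w1 -= lr * dw1

print(w2 * w1 * x)       # close to the target t = 1.0
```

Updating w2 before computing dw1 would make dw1 a gradient of a model that no longer exists, which is exactly the bug this comment warns about.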

  • @ishayatfardin7
    @ishayatfardin7 1 year ago

    Brother, your explanation was great, but there are some mistakes I have pointed out.

  • @kewtomrao
    @kewtomrao 3 years ago +2

    Isn't the equation Z = W.X + B supposed to be Z = transpose(W).X + B? Hence the weight matrix you have given is wrong, right?

    • @MachineLearningWithJay
      @MachineLearningWithJay  3 years ago +2

      Hi... I have taken the shape of W as (n_h, n_x), so the equation is Z = W.X + B. But if you take W as (n_x, n_h), then the equation is Z = transpose(W).X + B.
      Both represent the same thing. Hope it helps you.
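
A quick NumPy check of the two conventions described in this reply (the sizes n_x, n_h, m are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_h, m = 3, 4, 5                 # input size, hidden size, batch size
X = rng.standard_normal((n_x, m))     # columns are examples
B = rng.standard_normal((n_h, 1))     # bias, broadcast across the batch

W = rng.standard_normal((n_h, n_x))   # convention 1: W is (n_h, n_x)
Z = W @ X + B                         # Z = W.X + B, shape (n_h, m)

W_alt = W.T                           # convention 2: W is (n_x, n_h)
Z_alt = W_alt.T @ X + B               # Z = transpose(W).X + B

print(np.allclose(Z, Z_alt))          # same result either way
```

The transpose in convention 2 simply undoes the different storage layout, so the two formulas compute the same Z.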

    • @kewtomrao
      @kewtomrao 3 years ago

      @@MachineLearningWithJay Thanks for the quick clarification. Makes sense now. Keep up the great work!!

  • @taranerafati9730
    @taranerafati9730 1 year ago

    great video

  • @vipingautam9501
    @vipingautam9501 2 years ago +1

    Small doubt: what is f(z1)? I am assuming these are just different types of activation functions, whose input is just the current layer's weights times the input from the previous layer... is that correct?

    • @MachineLearningWithJay
      @MachineLearningWithJay  2 years ago

      Yes, correct… but do check the equations properly. They include a bias term as well.
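
To make this reply concrete, a minimal sketch of one layer's computation (sigmoid is chosen arbitrarily as f; the weights and inputs are made up):

```python
import numpy as np

def sigmoid(z):                      # one possible choice for f
    return 1.0 / (1.0 + np.exp(-z))

a0 = np.array([[0.5], [-0.2]])       # activations from the previous layer
W1 = np.array([[0.1, 0.4],
               [-0.3, 0.2]])         # current layer's weights
b1 = np.array([[0.1], [-0.1]])       # the bias term the reply mentions

z1 = W1 @ a0 + b1                    # weighted sum PLUS bias
a1 = sigmoid(z1)                     # f(z1), applied elementwise
print(a1.shape)                      # (2, 1)
```

So the input to f is not just weights times previous activations: the bias b1 is added before the activation is applied.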

    • @vipingautam9501
      @vipingautam9501 2 years ago

      @@MachineLearningWithJay Thanks for your prompt response.

  • @premkumarsr4021
    @premkumarsr4021 8 months ago

    Super Bro❤❤❤❤

  • @marcoss2ful
    @marcoss2ful 1 year ago

    Where did the algorithm that calculates the next W at 5:30 come from? I know it is intuitive, but does it have something to do with Euler's method? Or another one?
    Thank you so much for these incredible videos
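
On the Euler question: assuming the step at 5:30 is the standard gradient-descent update, it can be written as

```latex
W^{[l]} \leftarrow W^{[l]} - \alpha \, \frac{\partial J}{\partial W^{[l]}}
```

and the Euler intuition is apt: this is exactly the forward-Euler discretization, with step size equal to the learning rate $\alpha$, of the gradient-flow ODE $\frac{dW}{dt} = -\frac{\partial J}{\partial W}$.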

  • @gautamthulasiraman18
    @gautamthulasiraman18 2 years ago +1

    Sir, it's W¹¹[¹] * a⁰[1], right? You've written it as W¹¹[¹] * a¹[1] in the matrix multiplication; can you just verify whether I'm wrong?

  • @testyourluck3914
    @testyourluck3914 1 year ago

    Are B1 and B2 initialized randomly too?

  • @nothing5987
    @nothing5987 3 years ago +1

    Hi, can you add a caption option?

    • @MachineLearningWithJay
      @MachineLearningWithJay  3 years ago

      Hi.. somehow captions were not generated for this video. All my other videos do have captions. I will change the settings to bring captions to this video as well. Thanks for bringing this to my attention.

  • @xinli3642
    @xinli3642 2 years ago +1

    Can A* actually be Z*, e.g. A1 = Z1?

    • @MachineLearningWithJay
      @MachineLearningWithJay  2 years ago

      No, we need to apply a non-linear activation function. So A1 must be some_non_linear_function(Z1)
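
A small check of why the identity choice A1 = Z1 gains nothing: with no non-linearity, two layers collapse into one linear map (sizes here are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 5))    # 5 example columns of size 3
W1 = rng.standard_normal((4, 3))   # "layer 1" weights
W2 = rng.standard_normal((2, 4))   # "layer 2" weights

no_activation = W2 @ (W1 @ X)      # A1 = Z1, i.e. identity activation
collapsed = (W2 @ W1) @ X          # a single equivalent linear layer
print(np.allclose(no_activation, collapsed))   # True
```

Because matrix multiplication is associative, the stacked linear layers equal one layer with weights W2 @ W1; only a non-linear activation between them breaks this collapse.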

  • @fadhliana
    @fadhliana 3 years ago +1

    Hi, how do you calculate the cost?

    • @MachineLearningWithJay
      @MachineLearningWithJay  3 years ago

      You will get all the information in the upcoming videos that I have already uploaded in this series.
      If you still have questions, you can email me at: codeboosterjp@gmail.com

  • @faisaljan3884
    @faisaljan3884 2 years ago +1

    What is this B1?

  • @gitasaheru2386
    @gitasaheru2386 2 years ago

    Please share the code for the backpropagation algorithm

  • @abdallahlakkis449
    @abdallahlakkis449 1 year ago

    Why no subtitles?

  • @priyanshshankhdhar1910
    @priyanshshankhdhar1910 1 year ago

    Wait, you haven't explained backpropagation at all

  • @UmerMehmood-n3f
    @UmerMehmood-n3f 3 months ago

    Extremely confusing tutorial, and there's a mistake. This should be:
    A[3]⁰, not A[3]¹

  • @beypazariofficial
    @beypazariofficial 1 year ago

    let bro cook

  • @mythillian
    @mythillian 8 months ago

    5:04

  • @gauravshinde8767
    @gauravshinde8767 7 months ago

    Lord Jay Patel

  • @abukhandaker7558
    @abukhandaker7558 7 days ago +1

    You don't understand it yourself