Tutorial 4: How to Train a Neural Network with Backpropagation

  • Published Dec 2, 2024

COMMENTS • 218

  • @2die4mario
    @2die4mario 3 years ago +56

    This is really simplified. Greatly appreciated. Much better than those university professors who get obsessed with the math without showing the audience the big picture!!

    • @anketsonawane6651
      @anketsonawane6651 3 years ago +1

      Yup! They use a lot of symbols... Sometimes it's even hard to remember those symbols

    • @study_with_thor
      @study_with_thor 3 years ago

      can't agree more!

    • @Ranjithbhat444
      @Ranjithbhat444 2 years ago

      Beautiful. Being a professor myself, I feel people often fail to focus on the big picture; it is like mistaking a local minimum for the global minimum 😅

    • @priyanshisingh9329
      @priyanshisingh9329 9 days ago

      I can't agree more!! I was trying to do an expensive course, but came back here. It feels like a senior is teaching you.

  • @devgak7367
    @devgak7367 4 years ago +11

    You just nailed it. The simplicity of his explanation is unmatched anywhere. Thank you, sir.

  • @varunss9057
    @varunss9057 4 years ago +8

    I am new to deep learning. The content you provide helps me understand the concepts from the very basics, and I could not find this clarity in any other videos. Keep up the good work!!!!! You are doing a great job.

  • @anikethdeshpande8336
    @anikethdeshpande8336 4 years ago +5

    The best explanation of NNs I've seen so far. Thanks for going step by step and explaining in simple words.

  • @priyanath2754
    @priyanath2754 4 years ago +20

    I have taken courses on other platforms, but I must say, the simplicity of your explanation helps me grasp the topic much more easily 🙏🙏🙏

  • @isiomalisaclement6254
    @isiomalisaclement6254 1 year ago

    This channel is a discovery...you have been able to cut through all the jargon and make these theories less abstract. Thank you very much

  • @arpittiwari6590
    @arpittiwari6590 4 years ago +1

    Sir, I had watched more than 10 videos but didn't understand; now in 9 minutes I have understood very well. Simply awesome!!!!!!!!!!!

  • @mohitrock100
    @mohitrock100 4 years ago +4

    Great video, sir. I haven't seen such a simple explanation of such a mind-bending concept. Many people take days to understand backpropagation, and here you have cleared my concepts within 10 minutes.

  • @indirasakthivel716
    @indirasakthivel716 2 years ago +1

    You are a genius!!! No other instructor can teach this complicated concept in 10 minutes 👏👏👏 Thank you 🙏

  • @sanchitsawant4450
    @sanchitsawant4450 3 years ago +1

    Sir, I don't know why, but whenever I watch your videos I get motivated to go deeper into that particular topic. Thank u, sir, for sharing such valuable content for free.

  • @harshaddeshmukh2935
    @harshaddeshmukh2935 1 year ago +2

    Krish sir you are really amazing! You have made difficult concepts simpler! Our teachers are not able to do this, but you have done it very well! Thanks for being with us in this learning journey!

  • @souravdey1227
    @souravdey1227 3 years ago +1

    Finally the stuff I had been looking for. Simple and to the point.

  • @ShravaniDeshpande-xw5bu
    @ShravaniDeshpande-xw5bu 3 months ago +1

    Finally found a video that helped me understand this topic. It's relief + satisfaction.

  • @rakshitraushan1650
    @rakshitraushan1650 4 years ago

    Clearly the best DL and ML teacher!

  • @sarkandawale
    @sarkandawale 4 years ago

    Sir, one thing is for sure: your way of teaching is so relatable and hence easy to understand, and also very effective. I've gone through so many tutorials and lectures, but it is all making sense now in your video. I'm very thankful to u, sir; keep teaching. U r a great teacher.

  • @mhdomarbahra1678
    @mhdomarbahra1678 5 years ago +7

    Magnificent explanation. The simplicity is perfect; the hard is made easy with you. Thanks!

  • @cybergame.
    @cybergame. 3 years ago

    Sir, your videos are much better than Coursera courses........... Thank you.

  • @RAVINDRABACHATE
    @RAVINDRABACHATE 4 years ago +1

    You are great.
    The way you explain, anyone can understand.
    Thank you.

  • @chandnisoni5108
    @chandnisoni5108 5 years ago +5

    Awesome playlist. Thank you for sharing your knowledge 😊🤟

  • @pareshb6810
    @pareshb6810 3 years ago +2

    Underrated content!
    Keep up the good work! 💯

  • @nagrajwellness9720
    @nagrajwellness9720 2 years ago

    Mind-blowing video, sir. This is the main difference between you and others; other institutions just run after the money.

  • @hari.prasad_
    @hari.prasad_ 4 years ago

    This course is simpler and clearer than most of the courses out there.

  • @commonboy1116
    @commonboy1116 4 years ago

    The way you have explained a complex topic in such a simple way ......

  • @kosk1997
    @kosk1997 4 years ago

    The best video so far. So clear and crisp. Hats off, sir!!

  • @_al00sk
    @_al00sk 1 year ago

    Your style of explanation is helping me a lot - thanks for these videos!

  • @naresh8198
    @naresh8198 1 year ago

    You have explained it in the simplest way. Thank you!

  • @maralazizi
    @maralazizi 3 years ago

    Best tutorial about deep learning ever!! Thank you so much for making it easy to understand! You are very much appreciated!

  • @codderrrr606
    @codderrrr606 2 years ago

    Bro, all these videos simply show how deep your knowledge of deep learning is.

  • @harshalibhoyar6768
    @harshalibhoyar6768 1 year ago

    This guy teaches brilliantly...... watch it once with full attention, and then even if someone asks you in your sleep, no issue.... it gets imprinted in your brain......

  • @sapnilpatel1645
    @sapnilpatel1645 1 year ago

    so far you are the best teacher.

  • @mbmathematicsacademic7038
    @mbmathematicsacademic7038 5 months ago

    This guy explained optimization, backpropagation and learning rates all in a few minutes.

  • @suryakirannvs
    @suryakirannvs 3 years ago

    Thank you, Sir. Amazing!!! Derivative: rate of change of the dependent variable w.r.t. the independent variable.
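
    In symbols, that definition, together with the chain rule that backpropagation applies layer by layer (standard identities, not specific to this video):

    ```latex
    % Derivative: rate of change of the dependent variable w.r.t. the independent one
    f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
    % Chain rule, applied repeatedly during backpropagation
    \frac{\partial L}{\partial w} = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial w}
    ```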

  • @liass6354
    @liass6354 6 months ago

    Thank you for making this video! Hope I can get a good grade in next week's exam on AI.

  • @adityashewale7983
    @adityashewale7983 1 year ago

    Hats off to you, sir. Your explanation is top level. Thank you so much for guiding us...

  • @iamdare
    @iamdare 3 years ago

    Wow man, you’re a blessing. Thank you for this great teaching, you simplified everything.

  • @laxya6779
    @laxya6779 4 years ago +17

    I came here after watching a Coursera course, and I think this is clearer and magnificent 🤤

  • @manukhurana97
    @manukhurana97 4 years ago +1

    Hi Krish, at 3:30 you said we can square the error to make it +ve; we can also use |mod| to make it +ve.
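
    A small illustration of both options; the arrays are made-up values. Squaring is usually preferred because |x| has no derivative at 0, which complicates gradient-based training:

    ```python
    import numpy as np

    y_true = np.array([1.0, 0.0, 1.0])   # actual outputs (illustrative)
    y_pred = np.array([0.8, 0.3, 0.6])   # predicted outputs (illustrative)

    mse = np.mean((y_true - y_pred) ** 2)    # squared error: smooth everywhere
    mae = np.mean(np.abs(y_true - y_pred))   # absolute ("mod") error: kink at 0

    print(mse, mae)   # both are non-negative, as required
    ```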

  • @danilzubarev2952
    @danilzubarev2952 2 years ago

    How does everything make sense? Wow, so inspiring! Amazing.

  • @shikhajangra2760
    @shikhajangra2760 1 year ago

    Thank you so much, sir; your way of simplifying this problem is very nice. 👍👍👍👍👍👍 Thanks again, sir.

  • @aneeshkrishna4375
    @aneeshkrishna4375 8 months ago +3

    I have paid a fortune to do a master's in data science and ended up watching your videos. You deserve a Nobel Prize for all these videos.

  • @islamuddin6021
    @islamuddin6021 2 years ago

    Amazing work, sir ji, love it. And thank you for such a concise and understandable explanation.

  • @vishwaskabbur4367
    @vishwaskabbur4367 4 years ago

    Perfectly explained!!! Simple and to the point!... Kudos....

  • @mdsaif831
    @mdsaif831 4 years ago

    Best trainer I have ever seen....

  • @AgaGrusz
    @AgaGrusz 4 years ago +2

    You are what I needed. Thank You soooo much :)

  • @dushyantsingh4278
    @dushyantsingh4278 4 years ago

    Thank you, sir, for clearing all the doubts with this video.

  • @fet1612
    @fet1612 5 years ago

    @Krish Naik
    How methodical your work is! Brilliant! Keep it up. Your videos clarify things lucidly.

  • @Jane-ce2dq
    @Jane-ce2dq 4 years ago +3

    Krish, thanks for the work.
    Tutorial 4 was supposed to be Activation Functions part 2.

  • @RanjithKumar-jo7xf
    @RanjithKumar-jo7xf 2 years ago

    Nice Explanation, I like the way you teach.

  • @almassaba9377
    @almassaba9377 2 years ago

    This was amazing. Thank you, Krish. Thank you for existing.

  • @indranisen5877
    @indranisen5877 2 years ago

    Excellent, you are our confidence!

  • @sriramswar
    @sriramswar 5 years ago +2

    Hi Krish, what are the ideal learning rates to use? How do we decide which learning rate value is ideal for a neural network?

    • @krishnaik06
      @krishnaik06 5 years ago

      Yes, it will be covered in the upcoming videos.
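
      Until then, a minimal sketch of where the learning rate enters; the values here are illustrative, and 0.1, 0.01, and 0.001 are common starting points in practice:

      ```python
      def update_weight(w, grad, lr=0.01):
          # Gradient-descent step: the learning rate scales how far we move
          # against the gradient. Too large can overshoot the minimum;
          # too small makes training very slow.
          return w - lr * grad

      w = 0.5      # current weight (illustrative)
      grad = 0.2   # dLoss/dw at this weight (illustrative)
      print(update_weight(w, grad))   # 0.498
      ```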

  • @w.a.imadhusanka1578
    @w.a.imadhusanka1578 1 year ago

    The things taught are well understood. Thank you, sir 🥰🥰

  • @yashdeshpande9202
    @yashdeshpande9202 1 year ago

    I think that for the pass/fail problem discussed, i.e. binary classification, the loss function should not be MSE. It should be the cross-entropy loss, i.e. -[y log(ŷ) + (1-y) log(1-ŷ)].
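
    For a single sample, written out (a standard formula, with the minus sign applying to both terms):

    ```latex
    L(y, \hat{y}) = -\left[\, y \log(\hat{y}) + (1 - y)\log(1 - \hat{y}) \,\right]
    ```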

  • @mohdazam1404
    @mohdazam1404 4 years ago +3

    Damn good explanation.... One question: how do we choose the learning rate??

  • @priyasanthakumara6520
    @priyasanthakumara6520 1 year ago

    Amazing.... great job, with lots of thanks....

  • @UmamahBintKhalid
    @UmamahBintKhalid 1 year ago

    Well done bro, simple and precise

  • @mashroorsakib4006
    @mashroorsakib4006 4 years ago

    Awesome explanation sir. Thank you for sharing your knowledge

  • @bilalghauri6516
    @bilalghauri6516 3 years ago

    (1) Why do we use a loss function?
    (2) How do we know whether the loss value has decreased or increased?
    (3) Where do we find the learning rate?

  • @fthialbkosh1632
    @fthialbkosh1632 4 years ago

    Thank you, Sir, for sharing, with perfect explanations.

  • @seedcardsstore882
    @seedcardsstore882 4 years ago

    you make this topic seem so easy!!!!! Thank you!

  • @bivasbisht1244
    @bivasbisht1244 2 years ago

    What an explanation!!!! Amazing.

  • @vasachisenjubean5944
    @vasachisenjubean5944 3 years ago

    the man, the myth, the legend

  • @rohitjagdale4648
    @rohitjagdale4648 3 years ago

    Excellent explanation!!! One question: what happens to the biases in backward propagation?
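
    Short answer: biases are updated exactly like weights. Since d(w*x + b)/db = 1, the bias gradient is just the layer's error signal. A sketch with illustrative shapes and values:

    ```python
    import numpy as np

    delta = np.array([[0.1], [-0.3]])   # error signal at a layer (illustrative)
    b = np.zeros((2, 1))                # biases of that layer
    lr = 0.01

    db = delta            # dLoss/db equals delta, because dz/db = 1
    b = b - lr * db       # same gradient-descent rule as for the weights
    ```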

  • @citoyennumero4434
    @citoyennumero4434 3 years ago

    Simple and concise, thank you for the explanation :) :)

  • @Qutybar
    @Qutybar 5 years ago +1

    You deserve 1M likes.

  • @girish2555
    @girish2555 3 years ago

    Simply great 🙏

  • @abdulqadar9580
    @abdulqadar9580 2 years ago

    Thank you Sir for your great efforts

  • @sandipansarkar9211
    @sandipansarkar9211 4 years ago

    That was a superb video. But now things are getting tougher and tougher. Need to cope with it.

  • @manishsharma2211
    @manishsharma2211 5 years ago +1

    Mahn. You are too good

  • @ochanabondhu
    @ochanabondhu 2 years ago

    Wonderful video. Just one question: why are we not taking the mod value for the loss function instead of going for squared values?

  • @sagardesai1253
    @sagardesai1253 3 years ago

    Awesome explanation 👌

  • @subhadipghosh8194
    @subhadipghosh8194 3 years ago

    @Krish Naik As this is a classification problem, how can it use squared error as a loss function?

  • @AmitYadav-ig8yt
    @AmitYadav-ig8yt 5 years ago +1

    Thank you, Sir. Could you please make videos on unsupervised ML algorithms?

  • @manikantamamidipaka6876
    @manikantamamidipaka6876 4 years ago

    Thank you, Mr. Krish, you just nailed it.
    I have a question here: how do we find the loss at any hidden neuron for backpropagation purposes, since we will not know the actual value at any hidden neuron, only at the output node, right? Then how?
    Thanks in advance.

    • @bhargavpotluri5147
      @bhargavpotluri5147 4 years ago

      We adjust the weights feeding the hidden neurons gradually so that this indirectly reduces the loss/cost function at the output node. We don't calculate a loss at the hidden neurons.
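
      Exactly: hidden neurons get an error signal derived from the output error via the chain rule, rather than a directly measured loss. A minimal two-layer sketch with sigmoid activations and squared-error loss (shapes and values are illustrative):

      ```python
      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      x = np.array([[0.5], [0.2]])       # input
      W1 = np.random.randn(3, 2) * 0.1   # input -> hidden weights
      W2 = np.random.randn(1, 3) * 0.1   # hidden -> output weights

      a1 = sigmoid(W1 @ x)               # hidden activations
      y_hat = sigmoid(W2 @ a1)           # network output
      y = np.array([[1.0]])              # true label

      delta2 = (y_hat - y) * y_hat * (1 - y_hat)   # output error (squared-error loss)
      delta1 = (W2.T @ delta2) * a1 * (1 - a1)     # hidden error via the chain rule

      dW2 = delta2 @ a1.T                # gradient for output weights
      dW1 = delta1 @ x.T                 # gradient for hidden weights
      ```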

  • @satishkundanagar3237
    @satishkundanagar3237 3 years ago +1

    Could you please clarify the following doubts? Thanks in advance.
    1. Are you using an affine function and/or activation function in the output-layer node to calculate y_hat? I ask because weight W4 is passed as input to the output-layer node, and no details are mentioned about the usual two-step process that takes place in every node of a neural network, i.e. an affine function (where the weights are actually used) and an activation function.
    2. Isn't the cost function the average of the loss function over the individual training samples? In this video the cost function is defined as the summation of the loss, not the average.
    3. I'm not clear on why backpropagation helps in tuning the model parameters. Backpropagation and gradient descent work together in tuning the weights. Mathematically and geometrically, I'm not convinced by the statement "backpropagation is used to train the weights of a neural network".

    • @zindankurt1289
      @zindankurt1289 3 years ago

      Actually, I think the signal passes through every neuron, so the final output is calculated as y_hat = w4*(y_prev). When we include the biases in this calculation, the output of a neuron in one layer is y_prev = (W1*X + bias), and the next neuron computes y_nextneuron = (W2*y_prev + bias).
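
      On point 1: in the usual formulation every node, including the output node, applies two steps, an affine function (where the weight and bias enter) followed by an activation. A sketch with made-up values, reusing the video's w4 naming:

      ```python
      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def node(inputs, weights, bias):
          z = np.dot(weights, inputs) + bias   # step 1: affine function
          return sigmoid(z)                    # step 2: activation

      y_prev = np.array([0.7])   # previous neuron's output (illustrative)
      w4 = np.array([0.4])       # weight into the output node
      y_hat = node(y_prev, w4, bias=0.1)
      ```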

  • @praneethaluru4801
    @praneethaluru4801 4 years ago

    Literally great explanation brother.

  • @taoufiksouidi6060
    @taoufiksouidi6060 4 years ago

    Thank you very much, sir!! U gained a subscriber.

  • @sandy926
    @sandy926 3 months ago

    Thank you for this, sir. Tmrw I am having an interview.

  • @ftt5721
    @ftt5721 3 years ago

    Thoroughly enjoyed it... awesome

  • @vishaljhaveri6176
    @vishaljhaveri6176 3 years ago

    Nice video. Thank you sir.

  • @satirthapaulshyam7769
    @satirthapaulshyam7769 3 years ago

    Btw, how is the loss function defined? Like, let's suppose u first pick a random weight, and depending on that u get an average loss value over all the training samples. So for a specific set of weights u r getting a specific loss value; then how are u getting a whole function? Cz u have just got a specific output for a specific input. The cost function doesn't make any sense to me.

  • @burhangarari8164
    @burhangarari8164 4 years ago

    Does the bias associated with the neuron in the hidden layer also need to be updated during backpropagation?

  • @sriramanramalingam9892
    @sriramanramalingam9892 4 years ago

    Sir, thanks for providing wonderful videos. If possible, could you provide a basic explanation of the construction of neural networks in equation form (as first-order differential equations), including the state vector and activation functions?
    Thank you, sir.

  • @mehakrajput1649
    @mehakrajput1649 4 years ago +1

    Thanks, sir. I find your tutorials interesting. I have a question ... what are the global minimum and gradient descent, the terms you used in your lecture? Can you please elaborate? If any video related to this is available, kindly share the link. Thank you.

    • @Syedzee
      @Syedzee 4 years ago

      Watch the next 2 videos and you will get to know.

  • @nuamanjaleel5430
    @nuamanjaleel5430 1 year ago

    Sir, could you please explain backpropagation with an example? It would be of great help, as that's the part where most of us make mistakes.

  • @skipjack02
    @skipjack02 4 years ago +2

    Thanks for turning off the fan! :D

  • @anshi6205
    @anshi6205 3 years ago

    Your lectures are amazing and very helpful, but you look so serious in every video.

  • @akiya2112
    @akiya2112 4 years ago +1

    Thanks, sir, but I have one question: how can I get w1, w2, w3, ... at the very beginning?

  • @DANstudiosable
    @DANstudiosable 5 years ago

    Well explained... So back prop and fwd prop both happen in 1 epoch at the same time?
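
    They happen in the same epoch, but in sequence: forward pass first, then the backward pass and weight update. A runnable single-weight sketch (data and learning rate are made up):

    ```python
    import numpy as np

    X = np.array([1.0, 2.0, 3.0])
    y = np.array([2.0, 4.0, 6.0])   # true relation: y = 2x
    w, lr = 0.0, 0.05

    for epoch in range(100):
        y_hat = w * X                         # 1. forward propagation
        grad = np.mean(2 * (y_hat - y) * X)   # 2. backpropagation: dLoss/dw of MSE
        w = w - lr * grad                     # 3. gradient-descent update

    print(w)   # converges toward 2.0
    ```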

  • @travel_for_life9727
    @travel_for_life9727 1 year ago

    Sir, do we need to update the bias when we do backward propagation?

  • @hashimhafeez21
    @hashimhafeez21 3 years ago

    pretty great explanation

  • @foxfinance9362
    @foxfinance9362 3 years ago

    simply the best

  • @aajaykapoor
    @aajaykapoor 4 years ago

    Hi Krish.... I am a regular learner from your videos; this is great.
    I have a question: in forward propagation, for the very first iteration, where and how do we get the values of the weights?

    • @krishnaik06
      @krishnaik06 4 years ago +3

      There are some weight initialization techniques. Just go ahead with the videos and u will see the video on it (a quick sketch follows below).

    • @aajaykapoor
      @aajaykapoor 4 years ago

      @@krishnaik06 thanks for the prompt response... I will follow the videos.
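
      For readers with the same question, a sketch of two common initialization schemes (covered later in the series; the layer sizes are illustrative):

      ```python
      import numpy as np

      n_in, n_out = 4, 3   # fan-in and fan-out of a layer (illustrative)

      W_small = np.random.randn(n_out, n_in) * 0.01                  # small random values
      W_xavier = np.random.randn(n_out, n_in) * np.sqrt(1.0 / n_in)  # Xavier/Glorot
      b = np.zeros((n_out, 1))   # biases are typically initialized to zero
      ```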

  • @RishikeshGangaDarshan
    @RishikeshGangaDarshan 3 years ago

    In classification the loss function should be different, like log loss. Here, sir, you used a regression loss function; please correct me if I am wrong.

  • @RAZZKIRAN
    @RAZZKIRAN 4 years ago +1

    GD vs SGD vs PSO vs GA?
    Please compare the efficiency of these optimizers.

  • @bibhupadhy4155
    @bibhupadhy4155 3 years ago

    Krish, I think it would have been better if you had taken a regression example and explained it, coz the loss function you are showing here is squared error, which is not the correct loss function for a classification problem; binary cross-entropy is. Maybe you can rectify it. :)
    Even in the cost function, you forgot to divide by the number of samples.

  • @hokapokas
    @hokapokas 5 years ago +1

    Krish, can you explain bias as well? Because I believe we can adjust the bias too as part of backpropagation. Pls guide me on it.

    • @MegaAntimason
      @MegaAntimason 5 years ago

      Yes, bias is also a parameter that undergoes backpropagation.

    • @hokapokas
      @hokapokas 5 years ago

      @@MegaAntimason do you have any references for it?

    • @hokapokas
      @hokapokas 5 years ago

      Sorry... forgot to say hi...

    • @MegaAntimason
      @MegaAntimason 5 years ago +1

      @@hokapokas datascience.stackexchange.com/questions/20139/gradients-for-bias-terms-in-backpropagation

    • @hokapokas
      @hokapokas 5 years ago

      @@MegaAntimason thanks 🙏🙇🙏💕

  • @shruthikeerthi6231
    @shruthikeerthi6231 2 years ago

    Excellent, sir. Sir, what is the learning rate?

  • @shawnsingh9605
    @shawnsingh9605 4 years ago +1

    How do I get access to Tutorial 2 of Tutorial 1 - Introduction to Neural Network and Deep Learning?

  • @venkateshmorishetty5489
    @venkateshmorishetty5489 3 years ago

    Hi sir, what is the reason behind differentiating the loss with respect to the weights? What do we get from that?
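
    The derivative measures how the loss changes when a weight changes, which is exactly what the update rule needs: move each weight against its gradient, scaled by the learning rate η.

    ```latex
    w_{\text{new}} = w_{\text{old}} - \eta \, \frac{\partial L}{\partial w}
    ```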