Chain Rule | Deep Learning Tutorial 15 (Tensorflow2.0, Keras & Python)

  • Published 23 Aug 2020
  • This video gives a very simple explanation of the chain rule that is used while training a neural network. The chain rule is something that is covered when you study differential calculus. Don't worry about not knowing calculus; the chain rule is a rather simple mathematical concept. As a prerequisite for this video, please watch my video on derivatives in this same deep learning series (the link to the entire series is below). It is even better if you watch the entire series step by step so that your foundations are clear when you watch this video.
    🔖 Hashtags 🔖
    #chainruleneuralnetwork #chainrule #chainrulepython #neuralnetworkpythontutorial #chainrulederivatives
    Do you want to learn technology from me? Check codebasics.io/ for my affordable video courses.
    Next Video: • Tensorboard Introducti...
    Previous video: • Stochastic Gradient De...
    Deep learning playlist: • Deep Learning With Ten...
    Machine learning playlist: ua-cam.com/users/playlist?list...
    Khan academy video on chain rule: • Chain rule | Derivativ...
    Prerequisites for this series:
    1: Python tutorials (first 16 videos): ua-cam.com/users/playlist?list...
    2: Pandas tutorials (first 8 videos): • Pandas Tutorial (Data ...
    3: Machine learning playlist (first 16 videos): ua-cam.com/users/playlist?list...
    Website: codebasics.io/
    Facebook: / codebasicshub
    Twitter: / codebasicshub

COMMENTS • 105

  • @codebasics
    @codebasics  2 years ago

    Do you want to learn technology from me? Check codebasics.io/ for my affordable video courses.

  • @SatyaBhambhani
    @SatyaBhambhani 2 years ago +1

    Man, this series! I love it. As an Econometrician, I can see how well he connects even the basics like derivatives to the depths of gradient descent.

  • @satishsangole9102
    @satishsangole9102 2 years ago +1

    Precise, Concrete, Complete, and Simple. You are simply amazing!

  • @sejalanand23
    @sejalanand23 1 year ago +2

    As simple as it could get. Thanks a lot!!! You have a gift, sir.

  • @Ankurkumar14680
    @Ankurkumar14680 3 years ago +24

    Thanks for maintaining a great pace of releasing new videos in this series... you are really a big help to lots of students and professionals around the globe. :)

    • @codebasics
      @codebasics  3 years ago +2

      Thanks Ankur for your kind words :)

  • @RichardBronosky
    @RichardBronosky 2 years ago +4

    When I was in school, I was never able to maintain interest in this long enough to grasp it. But now that I am doing robotics and need to reconcile the error between the computer-vision prediction and physical coordinates, this is extremely interesting, and therefore easy to learn.
    Put another way, my steps are these:
    * Pull an image from a camera
    * Select a point (x,y) on the image to move finger to
    * Convert that to a point in space (x,y,z)
    * Move finger
    * Pull image from camera
    * Locate the point (x,y) on the image where the finger appears
    * Correct the error
    Well, that simple-sounding last step is where the process in your video applies (a rough sketch of such a loop follows below).
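
    That correction step is, at heart, iterative error minimization, the same idea the video's gradient machinery formalizes. A rough sketch of such a loop; "camera" and "arm" are hypothetical placeholders, not a real robotics API:

        # Hypothetical visual-servoing correction loop: `camera` and `arm`
        # stand in for whatever robotics APIs are actually in use.
        def correct_error(camera, arm, target_xy, lr=0.5, tol=2.0, max_steps=20):
            for _ in range(max_steps):
                seen_x, seen_y = camera.locate_finger()   # where the finger appears
                err_x = target_xy[0] - seen_x
                err_y = target_xy[1] - seen_y
                if (err_x ** 2 + err_y ** 2) ** 0.5 < tol:
                    break                                 # close enough: stop
                # Proportional update, analogous to a gradient-descent step:
                # move a fraction of the observed error each iteration.
                arm.nudge(dx=lr * err_x, dy=lr * err_y)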

  • @chankit4392
    @chankit4392 2 years ago

    You are such a good teacher! I never understood those complex concepts before watching your videos.

  • @hugomartinez9370
    @hugomartinez9370 3 years ago +2

    Thanks a lot for the explanation, now I finally understand this :)

  • @vivekmodi3165
    @vivekmodi3165 3 years ago

    Your explanation is very easy to understand. Your tutorials make complex topics simple. Thank you. You're a great teacher for AI and machine learning.

  • @rndtest8691
    @rndtest8691 1 year ago

    Code Basics: your explanation is THE BEST... You make very complex things very simple and easy to understand.

  • @aakuthotaharibabu8244
    @aakuthotaharibabu8244 1 year ago

    I feel thankful to have found a teacher like you, sir!

  • @chandralekhadhanasekaran3530

    Each time I watch your videos, I learn new things and get excited. Keep exciting us :)

  • @ramsunil4317
    @ramsunil4317 3 years ago

    Thanks for explaining it in a simple way. Now the difficult puzzle has been solved for me.

  • @harshsawant4946
    @harshsawant4946 1 year ago +1

    You are a great teacher ... Hats off to you 🙌🙌

  • @karthikc8992
    @karthikc8992 3 years ago +4

    Sir
    Looking great today
    Hope this continues 😊

  • @vinaybhujbal4347
    @vinaybhujbal4347 3 months ago

    Greatly explained 🙌🏾

  • @sooryaprakash6390
    @sooryaprakash6390 3 years ago +1

    Best video with a simple and clear explanation I have ever seen! Will share it with all my ML peeps!

  • @sandiproy330
    @sandiproy330 1 year ago

    Great content. Nice teaching with a lucid example. Thanks a lot for the effort.

  • @osho2810
    @osho2810 2 years ago

    Very easy... and free from lots of confusion. Thank you, sir.

  • @srinivasamaddukuri282
    @srinivasamaddukuri282 2 months ago

    You nailed it, man

  • @shreyasb.s3819
    @shreyasb.s3819 3 years ago

    You cleared all my doubts. Thanks a lot

  • @trend_dindia6714
    @trend_dindia6714 3 years ago

    Cleanest explanation of backpropagation 👌

  • @muhammednihas2218
    @muhammednihas2218 2 days ago

    Thank you so much, sir.

  • @yashmaheshwari8420
    @yashmaheshwari8420 2 years ago +2

    For all those who are asking why we square the error:
    Let's say your model outputs [0, 1] for two inputs, and the actual output should be [1, 0].
    Calculate the error without squaring or taking the absolute value:
    total error = ((actual first output - predicted first) + (actual second output - predicted second)) / 2
    total error = ((1-0) + (0-1)) / 2
    total error = (1 - 1) / 2
    total error = 0/2 = 0
    What? The error is 0, so our model is perfect?
    Squaring the error this time:
    total error = ((1-0)^2 + (0-1)^2) / 2
    total error = (1+1)/2 = 1
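
    A quick numeric check of the comment above (a minimal sketch, assuming NumPy): the plain mean of the errors cancels to zero, while the mean squared error does not.

        import numpy as np

        actual = np.array([1.0, 0.0])
        predicted = np.array([0.0, 1.0])

        errors = actual - predicted          # [1.0, -1.0]
        print(errors.mean())                 # 0.0 -> opposite errors cancel out
        print((errors ** 2).mean())          # 1.0 -> squaring keeps every term positive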

  • @GrowthPulse509
    @GrowthPulse509 5 months ago

    Respect! Thank you so much, sir

  • @fahadreda3060
    @fahadreda3060 3 years ago

    Really nice video, thanks man

  • @mahalerahulm
    @mahalerahulm 2 years ago

    Very nice explanation.

  • @navyak8717
    @navyak8717 3 years ago

    Very helpful

  • @nastaran1010
    @nastaran1010 4 months ago

    Great!

  • @AnkitKumar-cq4rc
    @AnkitKumar-cq4rc 3 years ago

    Sir, it's really very helpful for us.
    Hope you update the videos as soon as possible.

  • @mohdsyukur1699
    @mohdsyukur1699 2 months ago

    You are the best, my boss

  • @chayaravindra7649
    @chayaravindra7649 3 years ago

    Thank you, sir. Because of you I became interested in learning data science and machine learning. Sir, kindly upload more videos on deep learning...

  • @yogeshbharadwaj6200
    @yogeshbharadwaj6200 3 years ago +1

    Never thought the chain rule concept was so simple... now I'm gaining confidence in maths day by day... it's all because of you, sir... you are proving that the "way of teaching" matters a lot in making students, or anyone, understand the concepts... Hats off !!

    • @codebasics
      @codebasics  3 years ago +1

      👍☺️ Thanks for your kind words, Yogesh

  • @RajaKumar-hg9wl
    @RajaKumar-hg9wl 2 years ago

    Hats off to you!!! Simply superb. I have gone through different courses and learnt from different places, but the chain rule concept is not explained in such a simple way anywhere else. Thank you so much.

    • @codebasics
      @codebasics  2 years ago

      Thanks, I am glad you liked it

  • @nurbekss2729
    @nurbekss2729 2 years ago

    So sad that you were not my math teacher at school. You are so good.

  • @achrafelkhanjari9157
    @achrafelkhanjari9157 2 years ago

    Greetings from Morocco, nice explanation

  • @anandanv2361
    @anandanv2361 3 years ago

    This is an extremely nice video

  • @sanooosai
    @sanooosai 4 months ago

    Thank you

  • @vijaya4740
    @vijaya4740 3 years ago +1

    Thank you for such a great explanation, sir. I have seen all your ML, Python, pandas, and NumPy series;
    all were awesome. Hope you continue at the same pace!!

    • @codebasics
      @codebasics  3 years ago +3

      Yes, my plan is to upload more videos on deep learning and then start a new series on NLP

    • @oz4232
      @oz4232 3 years ago

      @@codebasics this is very good news, sir.

  • @saadshqah8664
    @saadshqah8664 3 years ago

    Thanks a lot

  • @khalidal-reemi3361
    @khalidal-reemi3361 3 years ago

    Thanks a lot, man.
    You said that you would make a separate video on "why we square the difference and not take the absolute value".
    Please give me the link to that video if it's available, because I was always asking myself this question.

  • @faezeabdolinejad731
    @faezeabdolinejad731 2 years ago +1

    Thanks 😃😀

  • @rohanrocker
    @rohanrocker 3 years ago

    One of the best videos on YouTube.

  • @kirankumarb2190
    @kirankumarb2190 3 years ago

    Initially it felt quite confusing, but later when you explained with example equations, I got it. Thank you, sir.

  • @oz4232
    @oz4232 3 years ago

    thanks.....

  • @royreed9148
    @royreed9148 4 months ago

    Hi, I think this video is great. I've finished lesson 15. The part I don't understand is that at one point you talk about the derivative of the loss with respect to the weights, and in the next lesson (15), you talk about derivatives between equations (and multiply them) that have nothing to do with loss. This is very confusing. I guess I'm just getting old.
    Anyway, it is true that you have a passion for this, and I wish you well.

  • @harishwarboss
    @harishwarboss 3 years ago +1

    Hey man, you look so young... Thank you for sharing the knowledge, it is absolutely helpful. Keep doing that 🥳🥳

    • @codebasics
      @codebasics  3 years ago

      I am happy this was helpful to you.

  • @MissInesM
    @MissInesM 3 years ago +1

    Back propagation seems almost trivial thanks to you. Thank you !

    • @codebasics
      @codebasics  3 years ago

      Thank you for your kind words 😊👍

    • @HarshalDev
      @HarshalDev 3 years ago

      +1
      he's an amazing teacher :')

  • @rushikeshdeshmukh2034
    @rushikeshdeshmukh2034 2 years ago +3

    Excellent video lecture; this entire series is superb. Thanks a lot for these methodical, step-by-step, systematic videos.
    I would just like to offer constructive feedback on the chain rule formula you mention after the 12:00 mark in this video. There are two paths from A to Z, one via X and another via Y, so the partial derivative of Z with respect to A should account for both paths (a worked check follows below this thread).

    • @rishikakumar4006
      @rishikakumar4006 8 months ago

      Yes exactly, that needs to be added too, right?
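
    The feedback above is correct for the multivariable case: when a variable A reaches Z through both X and Y, the chain rule sums the contribution of each path, dz/da = (dz/dx)(dx/da) + (dz/dy)(dy/da). A small sympy check under made-up functions (x = a^2 and y = 3a are illustrative assumptions, not the video's exact equations):

        import sympy as sp

        a = sp.symbols('a')
        x = a ** 2            # path 1: a -> x -> z
        y = 3 * a             # path 2: a -> y -> z
        z = 4 * x + 3 * y

        # Sum over both paths: dz/dx * dx/da + dz/dy * dy/da
        via_paths = 4 * sp.diff(x, a) + 3 * sp.diff(y, a)
        direct = sp.diff(z, a)
        print(via_paths, direct)   # both print 8*a + 9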

  • @santoshkumarmishra441
    @santoshkumarmishra441 3 years ago

    REALLY GREAT EXPLANATIONS

  • @priyabratapanda1216
    @priyabratapanda1216 3 years ago

    Thank you so much, sir. Deep learning seems very easy to me only because of you 🙏🙏🙏.
    Sir, one small doubt about this tutorial: you haven't mentioned the activation function after each hidden layer?
    But overall the lecture was very intuitive 🙏🙏💓

  • @tugrulpinar16
    @tugrulpinar16 3 years ago

    Visuals help a lot!

  • @nriezedichisom1676
    @nriezedichisom1676 1 year ago

    I love how you keep saying that this is easy whereas it is not 😊

  • @zahidhassan4815
    @zahidhassan4815 3 years ago

    Man, you have given an awesome explanation. I was trying to understand this from 3Blue1Brown, but now I am able to put the puzzle together and have a picture of what's happening in reality.

    • @codebasics
      @codebasics  3 years ago +1

      I am happy this was helpful to you.

  • @tusharpatne6863
    @tusharpatne6863 3 years ago +5

    Sir, please start the new series "Algorithms with Python"

    • @codebasics
      @codebasics  3 years ago +5

      Yes I am going to do that

  • @vinayak254
    @vinayak254 1 year ago

    Sir, you explain complex things in a simple manner. Thank you very much. I hope you post some deep learning / machine learning projects involving image processing.

    • @codebasics
      @codebasics  1 year ago

      Yes, I do have an end-to-end project on potato plant disease prediction

    • @vinayak254
      @vinayak254 1 year ago

      @@codebasics Thank you sir

  • @ezekielizedonmi310
    @ezekielizedonmi310 2 years ago

    I love you, bro... Thanks for these videos. Please, can I take a TensorFlow certification exam after these tutorials? What role could I fill in a company, having completed your tutorial series?

  • @fauzisafina1827
    @fauzisafina1827 3 years ago

    Thank you, sir, very useful. May I request a video about how to convert a TensorFlow model to TensorFlow Lite?

  • @1980chetansingla
    @1980chetansingla 3 years ago +1

    Sir, please do a video on optimization

  • @iitianvlogs9805
    @iitianvlogs9805 3 years ago

    How do I store a PDF's converted text in MongoDB so that, when searching for a string, it also tells us in which PDF, on which page number, and in which paragraph the required information is available? Please help, sir. Big fan.

  • @Mathmagician73
    @Mathmagician73 3 years ago +1

    Can hardly wait for the new video 🤩🤩

  • @babachicken333
    @babachicken333 1 year ago

    dayumm he a gamer

  • @orgdejavu8247
    @orgdejavu8247 2 years ago

    I can't find how to take the derivative of the loss function with respect to weight 1 (or 2, 3, etc.), i.e. how the math is done behind the derivative part of the gradient descent formula. Everyone shows the result or a smaller, reshaped formula, but I need the steps in between. An example where we backprop a single perceptron (with 1 or 2 weights and an L2 loss) would do it. Please, someone give me a link or a hand. Thanks!
    I do understand the essence of the chain rule, but I want to know dL/dw = dL/dpred * dpred/dw (dL/dw is the change in weight 1 with respect to the L2 loss function; dpred is the derivative of the neuron's output (mx + b)).
    I don't get why the result is 2(target - pred) * input1.
    Is that because L2 is a square function and the derivative of x^2 (or error^2) is 2x, so 2*error becomes 2(target - pred)? But then why do the other parts of the formula disappear?
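
    A minimal symbolic sketch of the derivation asked about above, assuming a linear neuron pred = w*x + b and squared-error loss L = (target - pred)^2 (sympy is used only to check the algebra):

        import sympy as sp

        w, x, b, target, p = sp.symbols('w x b target p')
        pred = w * x + b                 # neuron output (no activation)
        loss = (target - pred) ** 2      # L2 / squared-error loss

        # Chain rule, piece by piece:
        dL_dp = sp.diff((target - p) ** 2, p)   # -2*(target - p)
        dpred_dw = sp.diff(pred, w)             # x  <- this is where input1 comes from
        dL_dw_chain = dL_dp.subs(p, pred) * dpred_dw

        # Direct differentiation gives the same result:
        dL_dw_direct = sp.diff(loss, w)
        print(sp.simplify(dL_dw_chain - dL_dw_direct))   # 0 -> they agree

    The "other parts" disappear because dpred/dw of w*x + b is just x (the b term contains no w). As for the sign: differentiating (target - pred)^2 gives -2*(target - pred)*x, which equals 2*(pred - target)*x; write-ups that flip the subtraction order get the 2*(target - pred)*input1 form.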

  • @MarshallDTeach-yr2ig
    @MarshallDTeach-yr2ig 3 years ago +1

    🙏

  • @tooljerk666
    @tooljerk666 2 years ago

    At time mark 12:53, what exactly do you do with 28? Say b is age. Is this saying that with a change in b, we can expect z to be whatever age is, plus 28?
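
    Not quite: the 28 is a rate of change, not an offset. Using the equations quoted in a later comment (x = a^2 + 7b, y = c^3 + d, z = 4x + 3y), the chain rule gives dz/db = dz/dx * dx/db = 4 * 7 = 28, i.e. z changes by 28 for every unit change in b. A quick finite-difference check, assuming those equations:

        def z_of(a, b, c, d):
            x = a ** 2 + 7 * b       # x depends on a and b
            y = c ** 3 + d           # y depends on c and d
            return 4 * x + 3 * y     # z depends on x and y

        # Nudging b by 1 raises z by exactly dz/db = 28:
        base = z_of(a=1.0, b=2.0, c=1.0, d=1.0)
        bump = z_of(a=1.0, b=3.0, c=1.0, d=1.0)
        print(bump - base)   # 28.0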

  • @BatBallBites
    @BatBallBites 3 years ago

    (y) Chain rule very well explained

  • @vikramyadav-fe4vj
    @vikramyadav-fe4vj 2 months ago

    How did you get the equations x = a^2 + 7b, y = c^3 + d, and z = 4x + 3y? Please, someone explain.
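
    They appear to be arbitrary example functions chosen just to demonstrate how the chain rule composes derivatives, not something derived. A small sympy sketch, assuming those equations, showing each partial derivative falling out of the chain rule:

        import sympy as sp

        a, b, c, d = sp.symbols('a b c d')
        x = a ** 2 + 7 * b
        y = c ** 3 + d
        z = 4 * x + 3 * y

        # sympy applies the chain rule automatically when z is built from x and y:
        print(sp.diff(z, a))   # 8*a     (dz/dx * dx/da = 4 * 2a)
        print(sp.diff(z, b))   # 28      (dz/dx * dx/db = 4 * 7)
        print(sp.diff(z, c))   # 9*c**2  (dz/dy * dy/dc = 3 * 3c^2)
        print(sp.diff(z, d))   # 3       (dz/dy * dy/dd = 3 * 1)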

  • @joyanbhathena7251
    @joyanbhathena7251 3 years ago

    Why do we just subtract in w1 = w1 - 'something'? Shouldn't we add as well?
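
    Subtraction alone handles both directions, because the gradient carries its own sign: when the slope is negative, subtracting a negative number increases the weight. A minimal sketch, assuming a single weight and the loss L(w) = (w - 3)^2:

        # L(w) = (w - 3)^2 has its minimum at w = 3, and dL/dw = 2*(w - 3):
        # negative when w < 3, positive when w > 3, so "w -= lr * grad"
        # moves w toward 3 from either side.
        lr = 0.1
        for w0 in (0.0, 6.0):             # start below and above the minimum
            w = w0
            for _ in range(50):
                grad = 2 * (w - 3)
                w -= lr * grad            # subtracting a negative grad increases w
            print(w0, '->', round(w, 4))  # both converge to ~3.0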

  • @orgdejavu8247
    @orgdejavu8247 2 years ago +1

    Can someone tell me why it equals area, only at 5:26?

  • @p1yush_meher
    @p1yush_meher 1 year ago

    Will we get the PPTs??

  • @shah9366
    @shah9366 3 years ago

    Hi sir,
    I am a 3rd-year computer science & engineering student, and I want to build my career in data science. What should I do? How do I start? I am in Gujarat right now; what kinds of competitive exams should I crack? Please help me. For the best universities in India or the world, how do I get there? What kinds of exams do they have? What are the admission steps?

  • @luccahuguet
    @luccahuguet 3 years ago

    Why doesn't it have a nonlinear activation function?

  • @mujamilkhan714
    @mujamilkhan714 3 years ago +1

    How many more tutorials are left for a complete deep learning series...?????

    • @karthikc8992
      @karthikc8992 3 years ago +2

      A lot

    • @codebasics
      @codebasics  3 years ago +3

      I agree. There will be many more videos 😊 and the most interesting thing would be deep learning projects 👍

    • @mujamilkhan714
      @mujamilkhan714 3 years ago

      @codebasics Thanks for the update, Dhaval bhai. I'm eagerly waiting for the projects phase ✌🏻

    • @vamsinadh100
      @vamsinadh100 3 years ago

      @@codebasics Waiting!!!

  • @alok4289
    @alok4289 3 years ago

    0:25 When you squeeze your forehead, it makes the sign of Lord Mahadev 🔱

  • @aleenshrestha8119
    @aleenshrestha8119 2 years ago

    Where is the loss function used? I didn't get it; while you were teaching gradients, there was no use of the loss function.

  • @RakeshKambojVinayak
    @RakeshKambojVinayak 3 years ago

    Sir, my question is related to gradient descent.
    Why did you apply an activation function only for gradient descent ("sigmoid_numpy(weighted_sum)")
    and not apply any activation function for stochastic and batch gradient descent? github.com/codebasics/deep-learning-keras-tf-tutorial/blob/master/8_sgd_vs_gd/gd_and_sgd.ipynb
    Thanks for being a great source of inspiration!

  • @d.s.5157
    @d.s.5157 7 months ago

    Why do you get -1?

  • @rajdipdas1329
    @rajdipdas1329 2 years ago

    dz/dx = (dz/dy) * (dy/dx)
    this is the chain rule (when z depends on x through an intermediate variable y)
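
    A minimal TensorFlow 2 sketch (the framework this series uses) of autodiff applying exactly this rule:

        import tensorflow as tf

        x = tf.Variable(2.0)
        with tf.GradientTape() as tape:
            y = x ** 2        # intermediate: dy/dx = 2x
            z = 3.0 * y       # outer:        dz/dy = 3

        # Autodiff composes the pieces: dz/dx = dz/dy * dy/dx = 3 * 2x
        print(tape.gradient(z, x).numpy())   # 12.0 at x = 2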

  • @phoneix24886
    @phoneix24886 4 days ago

    Should have followed your channel earlier before spending money on useless bootcamps to learn DL
