04 PyTorch tutorial - How do computational graphs and autograd in PyTorch work

  • Published 2 Jun 2024
  • In this tutorial, we talk about how the autograd system in PyTorch works and about its benefits. We also review how the forward and backward passes work and how gradients are calculated (a minimal code sketch of these steps follows the timestamps below).
    If you found this video helpful, please drop a like and subscribe. If you have any questions, write them in the comment section below.
    💻 Blog: datahacker.rs/004-computationa...
    🎞 YouTube: / @datahackerrs
    🎞 Link for Google Colab script: colab.research.google.com/git...
    Timestamps
    00:00 Introduction
    00:52 Computational graph forward pass
    01:54 Python forward pass
    02:25 Gradient calculation
    02:49 Python backward pass
    03:29 Turn off gradient calculation
    04:58 Gradient accumulation
    07:01 Chain rule
    08:17 Derivative calculation visualized
    10:15 .backward() function
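
A minimal, self-contained sketch of the steps listed above (forward pass, backward pass, turning off gradient calculation, and gradient accumulation). The tensor values are illustrative placeholders, not necessarily the ones used in the video:

```python
import torch

# Forward pass: operations on tensors with requires_grad=True build the graph
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2          # y = x^2
z = 3 * y + 1       # z = 3x^2 + 1

# Backward pass: autograd walks the graph in reverse and fills x.grad
z.backward()
print(x.grad)       # dz/dx = 6x = tensor(12.)

# Turning off gradient calculation: no graph is recorded inside no_grad()
with torch.no_grad():
    w = x * 10      # w.requires_grad is False

# Gradient accumulation: each .backward() call adds into x.grad
x.grad.zero_()      # reset the accumulated gradient first
for _ in range(3):
    z = 3 * x ** 2 + 1
    z.backward()
print(x.grad)       # tensor(36.) -- three accumulated passes of 12 each
```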
  • Science & Technology

COMMENTS • 12

  • @josecuevas5814 · 1 year ago · +2

    Thank you so much for the video! A practical, worry-free, and clear explanation.

  • @donfeto7636 · 1 year ago · +1

    Awesome video! I would like to see more in-depth videos like this. I like that you don't ignore the math. Please do the same for the loss functions of any algorithm you like (better if it is related to deep learning, e.g. GANs or diffusion models).

  • @cristianarteaga · 3 years ago · +2

    Thank you for such a great explanation!

  • @xflory26x · 10 months ago · +1

    Can you please elaborate on what you are talking about in the backward() function step? It's not very clear what is happening when you reset the gradients with z.backward(v).
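
A note on the question above: with a non-scalar z, calling z.backward(v) does not reset anything. The vector v weights the outputs in a vector-Jacobian product, and the result is added into x.grad; resetting the accumulated gradients is a separate step. A minimal sketch with hypothetical values:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
z = x ** 2                # non-scalar output, so backward() needs a vector

v = torch.ones_like(z)    # weights for the vector-Jacobian product
z.backward(v)             # same result as backpropagating z.sum()
print(x.grad)             # tensor([2., 4., 6.])

x.grad.zero_()            # this, not backward(v), is what resets the gradients
```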

  • @ashilshah3376 · 1 year ago

    Very nice explanation, thank you!

  • @harshpandya9878 · 3 years ago · +2

    great!!

  • @AlexeyMatushevsky · 3 years ago · +1

    Very nice video! Thank you!

    • @datahackerrs · 3 years ago · +1

      Thanks for the nice comment, glad you like it!

  • @user-ig3rp7fk9c · 8 months ago

    What about matrices? How do we handle the backward computations (derivatives) for matrices? Can you share an article or something? Thanks.

  • @bhavyabalan6225 · 2 years ago · +2

    Hi, great video and explanation. In the last part why we need to define the vector of ones and why we do the division by length of the vector? The explanation for that change in code is not clear for me

    • @yusun5722 · 2 years ago

      It needs a vector because the gradients of a non-scalar output are calculated through the Jacobian matrix: the vector is multiplied with the Jacobian (a vector-Jacobian product) to get the resulting gradients. Details: pytorch.org/tutorials/beginner/introyt/autogradyt_tutorial.html
      For your second question, I think it's correct NOT to divide by the length. If you manually calculate the derivative dz/dx, it will be 20, 28 and 36.
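
A short sketch that reproduces the 20, 28, 36 figures from the reply above. The exact tensors from the video are not quoted here, so the definitions of x, y, and z below are assumptions chosen to match those numbers (dz/dx = 4(2x + 3)):

```python
import torch

# Assumed values: x = [1, 2, 3], y = 2x + 3, z = y^2, so dz/dx = 4(2x + 3)
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = 2 * x + 3
z = y ** 2                        # non-scalar output

v = torch.ones_like(z)            # vector for the vector-Jacobian product
z.backward(v)
print(x.grad)                     # tensor([20., 28., 36.])

# Dividing v by its length would simply scale every gradient by 1/3:
# v = torch.ones_like(z) / len(z)  ->  x.grad == tensor([20/3, 28/3, 12.])
```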