PyTorch - Linear Regression implementation

  • Published Oct 6, 2024
  • Basic usage of PyTorch: from simple low-level usage of Adagrad to building up your neural networks with the torch.nn module from PyTorch. In this video we look at how to implement a simple linear regression algorithm as a neural network.
    Notebooks: github.com/mad...
    PyTorch playlist: • PyTorch - The Basics
    Deep Learning introduction playlist: • Deep Learning: Part1 -...
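
The linear-regression-as-a-neural-network idea from the description can be sketched as follows. This is a minimal illustration, not the video's actual notebook: the synthetic data (y = 2x + 1), learning rate, and iteration count are all assumptions.

```python
import torch
import torch.nn as nn

# Synthetic data for illustration (assumption): y = 2x + 1
torch.manual_seed(0)
X = torch.linspace(0, 1, 100).unsqueeze(1)  # shape (100, 1)
y = 2 * X + 1

# Linear regression expressed as a one-layer neural network
model = nn.Linear(in_features=1, out_features=1)
loss_fn = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(500):
    opt.zero_grad()              # clear gradients from the previous step
    loss = loss_fn(model(X), y)  # mean squared error on the full batch
    loss.backward()              # compute gradients via autograd
    opt.step()                   # gradient-descent update

w, b = model.weight.item(), model.bias.item()
# After training, (w, b) should be close to the true (2, 1)
```

The same model could be built with a hand-written `w * x + b` and manual parameter updates; `nn.Linear` plus an optimizer is the higher-level equivalent.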

COMMENTS • 5

  • @kitanomegumi1402
    @kitanomegumi1402 2 years ago

    great video! thank you

  • @AlexeyMatushevsky
    @AlexeyMatushevsky 3 years ago +1

    Very nice video and well put together. I have a question - should we always call optimizer.zero_grad() when training the model? Shouldn't torch do it for us?

    • @DennisMadsen
      @DennisMadsen  3 years ago +2

      Hi Alexey. Yes, the gradients should be zeroed in every iteration; otherwise you get the accumulated gradient. Accumulation is useful for some applications, so PyTorch decided to make the user aware of this by having them call zero_grad() manually in every iteration.

    • @AlexeyMatushevsky
      @AlexeyMatushevsky 3 years ago

      @@DennisMadsen thank you for the explanation!
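
The accumulation behavior described in the reply above is easy to verify directly with autograd. A minimal sketch (the function x² and the starting value are illustrative assumptions):

```python
import torch

x = torch.tensor(3.0, requires_grad=True)

# First backward pass: d(x^2)/dx = 2x = 6
(x ** 2).backward()
print(x.grad)   # tensor(6.)

# Without zeroing, a second backward pass ADDS to the stored gradient
(x ** 2).backward()
print(x.grad)   # tensor(12.) -- accumulated, not replaced

# Zeroing the gradient restores a fresh start, which is what
# optimizer.zero_grad() does for all parameters each iteration
x.grad.zero_()
(x ** 2).backward()
print(x.grad)   # tensor(6.)
```

This default makes techniques like gradient accumulation over several mini-batches possible, at the cost of requiring an explicit zero_grad() in the usual training loop.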

  • @anumhassan1422
    @anumhassan1422 1 year ago

    What if the weights start going negative with increasing iterations?