PyTorch - Linear Regression implementation
- Published Oct 6, 2024
- Basic usage of PyTorch: from simple low-level use of autograd to building up neural networks with PyTorch's torch.nn module. In this video we look at how to implement a simple linear regression algorithm as a neural network.
Notebooks: github.com/mad...
PyTorch playlist: • PyTorch - The Basics
Deep Learning introduction playlist: • Deep Learning: Part1 -...
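A minimal sketch of the kind of linear regression setup the description mentions (an assumed example using torch.nn, not the video's actual notebook; the data and hyperparameters are made up):

```python
import torch
from torch import nn

# Hypothetical data: y = 2x + 1 plus a little noise
torch.manual_seed(0)
X = torch.linspace(0, 1, 100).unsqueeze(1)
y = 2 * X + 1 + 0.01 * torch.randn_like(X)

model = nn.Linear(1, 1)  # one weight, one bias: y_hat = w*x + b
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(500):
    optimizer.zero_grad()          # clear gradients from the previous step
    loss = loss_fn(model(X), y)
    loss.backward()                # autograd fills in param.grad
    optimizer.step()               # gradient descent update

print(model.weight.item(), model.bias.item())  # close to 2 and 1
```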
great video! thank you
Very nice video and well put together. I have a question - should we always call optimizer.zero_grad() when training the model? Shouldn't PyTorch do it for us?
Hi Alexei. Yes, the gradients should be zeroed in every iteration; otherwise you get the accumulated gradient. Accumulation is useful for some applications, so PyTorch decided to make the user aware of this and requires a manual call to zero_grad() in every iteration.
@@DennisMadsen thank you for the explanation!
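The accumulation behaviour discussed above can be seen directly in a tiny sketch (an assumed example, not from the video):

```python
import torch

w = torch.tensor(3.0, requires_grad=True)

(2 * w).backward()   # d(2w)/dw = 2
print(w.grad)        # tensor(2.)

(2 * w).backward()   # second backward WITHOUT zeroing first
print(w.grad)        # tensor(4.) -- gradients are added, not replaced

w.grad.zero_()       # what optimizer.zero_grad() does for each parameter
(2 * w).backward()
print(w.grad)        # tensor(2.) -- fresh gradient again
```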
What if the weights start becoming negative as the iterations increase?