Great course, I finished all of it. Thank you bro, you put a lot of knowledge into my head.
Back to Pytorch💪
Hi Patrick, I am really grateful to you for this PyTorch tutorial. I am new to this world, and your tutorials have helped me a lot to learn and perform in my work. 🥰🥰🥰
Love the accent and the instruction! Thanks.
🔥 You are the best at explaining... keep going, bro... full support!
thanks!
Very good course! Thank you!
Very helpful, Patrick! I just ran into this Kaggle project and your tutorial helps a lot!
Could you do a "trial and error" video on how to approach figuring out how to shape tensors for each step? This is a nightmare to me.
highly appreciated!
Thank you, Sir. All of these tutorials are very helpful for me.
Thank you for your tutorial. I have a question that I'd appreciate if you could explain: what exactly happens to the learning rate when we combine Adam with a learning rate scheduler? In theory, Adam is an optimizer with an adaptive learning rate, so it already adjusts the step size at the parameter level. Hence, I don't understand exactly what happens when we combine both. I would rather expect an LR scheduler with an optimizer like SGD, which has a constant LR.
I had the same question. I think he just used Adam as the example here, even though SGD would have been more appropriate. But if you do use a scheduler with Adam, I guess the effective step is scaled both by the moving average of the gradients and by the decay you specify.
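For anyone curious, here is a minimal sketch of the SGD-plus-scheduler pairing mentioned above (the tiny model and the hyperparameter values are made up for illustration). With StepLR the effect is easy to see: the learning rate is multiplied by gamma every step_size epochs.

```python
import torch
from torch import nn, optim

model = nn.Linear(2, 1)  # hypothetical tiny model
optimizer = optim.SGD(model.parameters(), lr=0.1)
# multiply the LR by 0.1 every 3 epochs
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)

for epoch in range(6):
    # ... forward pass, loss.backward() would go here ...
    optimizer.step()
    scheduler.step()  # advance the schedule once per epoch

print(optimizer.param_groups[0]["lr"])
```

Since the schedule steps twice past the 3-epoch boundary here (epochs 3 and 6), the LR ends up two decades below its starting value.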
can you cover "attention models"?
thanks for the suggestion! Will put it on my list
Any suggestion on how to choose the most appropriate lr_scheduler among the given ones?
It depends on your use case, but in general I would say the validation-loss-based one (ReduceLROnPlateau) is a good start, since it only decreases your learning rate when your model is no longer improving, instead of applying a static decrease based on epoch count.
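A minimal sketch of that idea (the model and the constant validation loss are stand-ins, not a real training loop): ReduceLROnPlateau watches a metric you pass to step() and halves the LR after `patience` epochs without improvement.

```python
import torch
from torch import nn, optim

model = nn.Linear(10, 1)  # hypothetical tiny model
optimizer = optim.SGD(model.parameters(), lr=0.1)
# halve the LR after 2 consecutive epochs without improvement
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=2
)

for epoch in range(10):
    val_loss = 1.0  # stand-in for a real validation loss (never improves)
    scheduler.step(val_loss)  # pass the monitored metric each epoch

print(optimizer.param_groups[0]["lr"])
```

Because the monitored loss never improves here, the scheduler keeps cutting the LR; with a real model it would only fire once progress actually stalls.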
Very clear explanations. Thanks.
Great 👍 👌 👍
Great tutor
Glad you think so!
Hey, momentum and the internal learning rate adaptation of Adam already impact the learning rate. Why should we adjust it with a further external scheduler?
You are right, it's usually not necessary to use this with Adam. I should have used a different optimizer as the example...
Hi, I'm using the Adam optimizer and StratifiedKFold.
For some reason the average training loss doesn't decrease in the last fold; it is stuck between 0.6931 and 0.6932.
I thought of increasing the learning rate when the average training loss doesn't decrease:

    if n_epoch > 0:
        if avg[n_epoch - 1] >= avg[n_epoch]:
            optimizer = optim.Adam(model.parameters(), lr=learningRate * 2)
        else:
            optimizer = optim.Adam(model.parameters(), lr=learningRate)

Is this code wrong? When using it I run into other problems...
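One likely issue with the snippet above: re-creating `optim.Adam(...)` every epoch throws away Adam's running moment estimates, which can destabilize training. A hedged sketch of changing the LR in place via `param_groups` instead (the tiny model, `learningRate`, and the example loss values are stand-ins for the ones in the comment):

```python
import torch
from torch import nn, optim

model = nn.Linear(4, 1)  # hypothetical tiny model
learningRate = 1e-3
optimizer = optim.Adam(model.parameters(), lr=learningRate)

avg = [0.6932, 0.6931, 0.6931]  # example average training losses per epoch
for n_epoch in range(1, len(avg)):
    if avg[n_epoch - 1] <= avg[n_epoch]:  # loss stopped decreasing
        for group in optimizer.param_groups:
            group["lr"] *= 2  # in-place update keeps Adam's internal state

print(optimizer.param_groups[0]["lr"])
```

Whether doubling the LR on a plateau is a good policy is a separate question (schedulers usually decrease it); the point of the sketch is only the mechanism for changing the LR without resetting the optimizer.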
Nice video
Can you create a tutorial on importing a custom image dataset containing segmented and annotated images in COCO format (with annotations in a JSON file and images in a separate folder), training it using a backbone like ResNet-50, and running it on some new images? I am facing issues with this kind of data importing for COCO datasets.
This helped me with the same issue: ua-cam.com/video/j-3vuBynnOE/v-deo.html
Sir, how can we do audio processing in PyTorch?
Haven’t done anything with audio yet ...
Hi, you are a very nice teacher. Could you please make videos about Facebook's 'mmf' framework based on PyTorch? This could be a great addition to your channel; please consider it. Thanks!
thanks! I'll take a look at it
Bro, please help:

    /home/kash/.local/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
      return torch._C._cuda_getDeviceCount() > 0

I don't know what this error is; I can't access my CUDA device.
Which GPU and cuDNN version do you use? Are you using a conda environment?