Never seen a girl this intelligent. You taught it in an easy way; I'm a fan of yours. Built from scratch like a real project. Guys, this is how we approach a real project in a company. I used to think your videos weren't understandable, but I was wrong. A year back I didn't have PyTorch skills, but last month I learned PyTorch, so it's very easy for me now. Thank you so much for the video. Also, one compliment: never seen a girl this intelligent ❤
Thank you for the kind words! I'm glad you found the video helpful. 😊
Good tutorial. Thanks for your effort.
You're welcome!
Keep it up Aarohi 👍👍
Fantastic way of explanation👏👏👏
Thank you! 🙂
This video was very detailed and very helpful!
I'm glad you found it helpful! 😊
Amazing content as always
Appreciate it!
Commendable efforts, ma'am
Thanks a lot
Exceptional stuff
Finally, the PyTorch one!
Yes :)
Thanks!
Welcome!
Ma'am, can you make this using Google Colab? Not everyone can test on their own laptop.
Thank you Aarohi, very informative and to the point. Would you please follow up with a video on hyperparameter tuning using Optuna?
Thanks, Sure!
Very helpful video. However, I have a question: how, if possible, can one calculate the loss for a validation set?
To calculate the loss for a validation set in Faster R-CNN, note that torchvision's detection models return the loss dictionary only in training mode; in eval() mode they return predictions instead. A common workaround:
Keep the model in train() mode for the validation pass (its batch norm layers are frozen, so no statistics are updated).
Use torch.no_grad() to disable gradient computation during validation.
Pass the validation data through the model and sum the loss components:
model.train()  # torchvision detection models return losses only in train mode
total_loss = 0.0
with torch.no_grad():
    for images, targets in val_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)  # dict of individual loss terms
        total_loss += sum(loss for loss in loss_dict.values()).item()
avg_val_loss = total_loss / len(val_loader)  # average loss over the validation set
❤
I want to do object detection on slot games, which is a unique dataset. Which would be better to use: Faster R-CNN or YOLO? I'm interested in accuracy.
Share more about the dataset, like the image size and how many images you have per class. In general, Faster R-CNN will give you more accuracy, but it will be slower compared to YOLO. YOLO will give you decent accuracy with speed.
Ma'am, is it possible to do object detection, tracking, and frame classification all in one framework?
Yes, you can do classification, detection, and tracking on all the frames of a video.
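Here is a minimal sketch of how the three pieces could sit in one loop. This is my own rough example rather than code from the video, and the video path, the 0.5 score threshold, and the naive IoU matcher are placeholder assumptions:
import cv2
import torch
from torchvision.models import resnet18
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import normalize

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
detector = fasterrcnn_resnet50_fpn(pretrained=True).to(device).eval()  # newer torchvision: weights="DEFAULT"
frame_classifier = resnet18(pretrained=True).to(device).eval()

def iou(a, b):
    # intersection over union of two [x1, y1, x2, y2] boxes
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-6)

tracks, next_id = {}, 0                    # track_id -> last seen box
cap = cv2.VideoCapture("video.mp4")        # hypothetical input video
with torch.no_grad():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0

        # 1) detection on the current frame
        det = detector([tensor.to(device)])[0]
        boxes = det["boxes"][det["scores"] > 0.5].cpu().tolist()

        # 2) naive tracking: match each box to the previous track with the highest IoU
        new_tracks = {}
        for box in boxes:
            best_id, best_iou = None, 0.3
            for tid, prev in tracks.items():
                overlap = iou(box, prev)
                if overlap > best_iou:
                    best_id, best_iou = tid, overlap
            if best_id is None:
                best_id, next_id = next_id, next_id + 1
            new_tracks[best_id] = box
        tracks = new_tracks

        # 3) frame-level classification on an ImageNet-normalised 224x224 copy
        resized = torch.nn.functional.interpolate(
            tensor.unsqueeze(0), size=(224, 224), mode="bilinear", align_corners=False)[0]
        frame_in = normalize(resized, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]).unsqueeze(0)
        frame_label = frame_classifier(frame_in.to(device)).argmax(1).item()
        print(len(boxes), sorted(tracks), frame_label)
cap.release()
In practice you would swap the naive IoU matcher for a proper tracker such as SORT or DeepSORT, and fine-tune both models on your own classes.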
How do you train the model and add labels to this kind of dataset programmatically?
You mean auto-labelling. To perform auto-labelling you need a pretrained model which can detect all these objects. Once the objects in your dataset images are detected by this pretrained model, you can save their annotations in any format.
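For example, here is a minimal sketch of one way to do it. This is my own example rather than code from the video, and the folder name, the 0.5 score threshold, and the JSON output format are assumptions; you could just as well write Pascal VOC XML or YOLO text files:
import json
import os
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = fasterrcnn_resnet50_fpn(pretrained=True).to(device).eval()  # newer torchvision: weights="DEFAULT"

image_dir = "unlabeled_images"   # hypothetical folder of images to auto-label
annotations = {}

with torch.no_grad():
    for name in os.listdir(image_dir):
        img = Image.open(os.path.join(image_dir, name)).convert("RGB")
        pred = model([to_tensor(img).to(device)])[0]
        keep = pred["scores"] > 0.5  # confidence threshold for keeping a detection
        annotations[name] = [
            {"bbox": box.tolist(), "label": int(label), "score": float(score)}
            for box, label, score in zip(pred["boxes"][keep],
                                         pred["labels"][keep],
                                         pred["scores"][keep])
        ]

with open("auto_labels.json", "w") as f:
    json.dump(annotations, f, indent=2)
Each entry keeps the predicted box, label id, and score, so you can review and correct the labels before using them for training.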
@CodeWithAarohi Can you make a video on saving the annotations from a model's predictions?
Thanks for the video, ma'am. I am facing some issues while implementing Faster R-CNN on the PARAMSHAKTI supercomputer. My image dataset is big, so I can't run it locally or on Google Colab. I don't have internet access on PARAMSHAKTI, so I downloaded the weights locally and provided the path in the code, but it still asks for internet access. Further, I want to run the code for 100 epochs and compute mAP, precision, and recall for all 100 epochs. Can you please help me solve this issue?
Thank you for watching the video! 😊 I've never used the PARAMSHAKTI supercomputer, but you can try this: if you're using a pretrained Faster R-CNN model, set pretrained=False and manually load the weights from your local path like this:
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(pretrained=False)  # on newer torchvision, use weights=None
model.load_state_dict(torch.load("path_to_weights.pth"))
This should bypass the need for internet access during setup.
@CodeWithAarohi Thank you for your response, ma'am.
Hi Aarohi, please tell me where I can get the VIA annotator.
@rahulbhole1575 Open a browser and search for the VGG Image Annotator (VIA), then download the zip file.
Well done Aarohi! Also, please show some examples of sound classification. Love from Lahore, Pakistan.
Thank you! Sure, I will