Great video. I've trained on the German Traffic Sign Recognition Benchmark dataset using YOLOv5, with a batch size of 128 for 300 epochs. I've also tried a batch size of 64 for 100 epochs. However, it is not able to detect at a distance; it can only detect when the traffic sign is very, very close to the camera. Any idea what I did wrong?
That's great that you've trained your own model. I plan to make a video about the training process in the future. The problem you describe is pretty common. What people sometimes do is train a two-stage detector: the first detector gets an idea of the scale of the objects, and the second predicts on the rescaled version. Also, when training you can augment your labeled images to varying sizes so YOLO doesn't overfit to the large signs. Of course, if the sign is extremely small the model will always have a difficult time detecting it. Hope that helps.
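To illustrate the varying-size augmentation idea, here is a toy sketch with an invented helper, not YOLOv5's actual augmentation pipeline: shrinking the image within a fixed canvas makes the object look more distant, and the normalized YOLO-format box shrinks by the same factor.

```python
import random

def random_scale_box(box, scale_range=(0.3, 1.0), rng=None):
    """Simulate a random-scale augmentation for one YOLO-format box.

    box: (cx, cy, w, h), all normalized to [0, 1] on the original image.
    The image is shrunk by a random factor and pasted at the top-left of a
    fixed-size canvas, so objects appear smaller; every normalized
    coordinate simply scales by the same shrink factor.
    """
    rng = rng or random.Random()
    s = rng.uniform(*scale_range)       # shrink factor for the image
    cx, cy, w, h = box
    return (cx * s, cy * s, w * s, h * s)

# Example: a large sign filling half the frame becomes a small distant one.
big_sign = (0.5, 0.5, 0.5, 0.5)
small = random_scale_box(big_sign, scale_range=(0.25, 0.25), rng=random.Random(0))
print(small)  # (0.125, 0.125, 0.125, 0.125)
```

Applying this kind of transform during training exposes the model to small versions of the same objects, which is the intuition behind the scale augmentation mentioned above.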
Sorry to hear that. Maybe paste the error message here? Are you using anaconda? Another way to test it out without having to do the setup would be to use something like google colab or kaggle notebooks - that will have everything preinstalled (except yolo). Those won't work with a local webcam though. Usually when I get errors I copy and paste them into google and 99% of the time someone has posted about the same problem on stack overflow.
@@robmulla Okay, switching over to Google Colab. I don't need to use it on a webcam right now anyway. So all I do is clone the repo in the beginning, and then do I try to install the requirements.txt file? Because it's not working in Colab.
import torch
ModuleNotFoundError: No module named 'torch'
Some help here please. I tried everything: I installed the module manually and rebooted the computer, but this error still appears.
It must not be installed correctly, or you are trying to run from a different environment, because otherwise it should be found when importing. Try running your code in a Kaggle notebook to double-check, maybe?
detect: weights=yolov5s.pt, source=0, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 🚀 v6.2-228-g6ae3dff Python-3.7.15 torch-1.12.1+cu113 CUDA:0 (Tesla T4, 15110MiB)
Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
WARNING ⚠ Environment does not support cv2.imshow() or PIL Image.show()
[ WARN:0@5.409] global /io/opencv/modules/videoio/src/cap_v4l.cpp (902) open VIDEOIO(V4L2:/dev/video0): can't open camera by index
Traceback (most recent call last):
  File "detect.py", line 258, in <module>
    main(opt)
  File "detect.py", line 253, in main
    run(**vars(opt))
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "detect.py", line 103, in run
    dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride)
  File "/content/yolov5/utils/dataloaders.py", line 364, in __init__
    assert cap.isOpened(), f'{st}Failed to open {s}'
AssertionError: 1/1: 0... Failed to open 0
You should be able to run it in Colab, but I believe you are getting an error because your webcam is not going to be connected to the instance. You will need to run it on a video file.
Hi - is it possible to use YOLO to search for objects, then read text off those objects? i.e.: when viewing a card from a certain card game, I want to then extract the text off that card
when running detect.py you can add the parameter "--save-txt" which will save the output into a text file. The first column in that file will be the class labels associated with the COCO dataset: tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
@@robmulla No, I mean that I am running the object detection model in a Jupyter notebook on my laptop (offline), and I want to access the class name so that I can send the data on to other places. For example, if the object detected is a car, then the class name 'Car' would be stored in a variable (let's assume we have to store the class name in variable x), so what would the code for that be? And sir, thanks for the reply.
I don’t think that YOLO would be the best thing to use for this unless you are looking for specific text like a STOP sign. You might want to look into OCR techniques.
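For the class-name question above, a minimal sketch of reading --save-txt output: detect.py writes one "class cx cy w h" line per detection with normalized coordinates, and the first column is the class index. The COCO_NAMES dict below is deliberately abbreviated (the real COCO list has 80 classes) and the helper name is made up for illustration.

```python
# Abbreviated mapping from COCO class index to name (full list has 80 entries).
COCO_NAMES = {0: "person", 2: "car", 9: "traffic light", 41: "cup"}

def classes_from_label_lines(lines):
    """Return the class name for each detection line in a --save-txt file."""
    names = []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        cls = int(parts[0])  # first column is the class index
        names.append(COCO_NAMES.get(cls, f"class_{cls}"))
    return names

# Two detections as detect.py would write them:
sample = ["0 0.51 0.44 0.21 0.62", "2 0.12 0.80 0.10 0.08"]
print(classes_from_label_lines(sample))  # ['person', 'car']
```

In practice you would read the lines with open("runs/detect/exp/labels/frame.txt") and store the result in whatever variable you need.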
Hi, thank you for this upload. I had a doubt about increasing the accuracy: how does one do so? Because right now it's detecting a squirrel as a giraffe 😭
Thanks for watching. Great question. To make the model better at predicting specific objects it’s best to train or “fine tune” the model on a dataset of additional labeled images. I plan to make a video about this process. In your example you would need to train on a giraffe or squirrel specific dataset.
@@robmulla Yep, that's what I found:
raise NotImplementedError(f'ERROR: {w} is not a supported format')
NotImplementedError: ERROR: yolov5x is not a supported format
Thanks for the feedback Richard. What do you mean by running from a script exactly. The code that I ran directly from the yolov5 repo is essentially a script. I was thinking that depending on how popular this video is I could make follow up videos showing how to train yolov5 on a custom dataset and applying it so that the prediction boxes are stored.
@@robmulla When I said python script I meant implementing YOLO detection in a custom script that did a triggered operation depending on the detection/labels: if label == 'car', save the frame as an image with the label as the filename, or trigger something else to occur, like turning on outside lights.
I need your help to complete my project idea. I have 3 images. In each image I have drawn a circle, and in the circle I put text = image. Somewhere I replace the circle shape with another shape. I want to add another image onto this circle. I need help detecting that shape, its position and size; after detecting all these things, I want to place the second image into the main image according to the detection and save it. I can't write this code, please help me..... I can share my images with you if you want. Please give me some ideas or some detection code.
Is it possible to detect unwanted weeds among crops and then remove them with a robot? Basically, what I am asking is: is it possible to combine IoT and image processing?
Hi, it was a really great video, thank you so much for your effort. I got it done, but the video is very slow while running and I don't know why this is happening.
Don't use your camera unless you have to. I get issues too; I switched to just detecting the objects on my screen instead. I'm still using the camera, it just doesn't detect through the camera lol, it detects off of my screen.
Heyy, I found your video very useful. I would be glad if you could help me with the following error I am facing while running the same code: "Environment does not support cv2.imshow() or PIL Image.show()". I hope you can help me out as soon as possible.
Thanks! Are you running on a system that doesn’t have a monitor directly connected like a remote server? You might need to disable the image output and instead store the results to a file using the appropriate flags. Hope that helps.
@@robmulla It's a laptop that I am using. The objects are being detected; it's just that the video isn't playing. Anyways, thanks for helping me out.🙂
def main(opt):
    """Executes YOLOv5 model inference with given options, checking requirements before running the model."""
    check_requirements(ROOT / "requirements.txt", exclude=("tensorboard", "thop"))
    run(**vars(opt))

if __name__ == "__main__":
    opt = parse_opt()
    main(opt)

I'm getting an error, please 😢 help me.
Hey sir, I am doing a project on object detection using YOLOv5. If you don't mind, can you help me with how to integrate voice output for this object detection? As soon as possible, please!
Hey there, thank you for providing a step-by-step process for getting it done. I've managed to open it up using my webcam. However, I am unable to open an mp4 or image. The error that I received is:
raise ReaderError(self.name, position, ord(character),
yaml.reader.ReaderError: unacceptable character #x0000: special characters are not allowed
But then again, I have searched online and can't seem to resolve this issue. Would appreciate any feedback on this. Cheers.
Want to train a custom object detector from scratch? Check out my video here: ua-cam.com/video/RXbtSwZsoEU/v-deo.html
Yes, I do want to know. Thank you, sir.
How can I use the GPU instead of the CPU for running this?
Hello Mr. Rob, I have watched many of your videos just to try to make this work. I already installed Linux Ubuntu and Anaconda, cloned yolov5, and installed the requirements successfully. But when I come to the command: lc -tr
I don't get the same result as yours; I only get this result of three lines:
Mono license compiler
copyright (c) 2009 by remobjects software
No target/complist passed
I have been working for 10 hours and I am still stuck in the first 3 minutes. Can you please help me?
I remember asking ChatGPT and it solved it easily.
Hi Rob, would you be prepared to work on a new vision project running on a Raspberry Pi and an Arduino cam? $$$
Hey Rob, thank you SO much for this tutorial!! I had been stuck with image detection for quite some time, as many tutorials I followed skipped certain parts like setting up environments and such. But you, my man, you took us step by step through your tutorials! I truly appreciate it; keep up the good work!! Thank you once again!
Bro, did yours work out properly..?
Things like this always convince me I’m on the right path. Thanks 🙏🏼
I loved the way that you answered everyone in here. Great content.
Great Channel
I subscribed.
Thanks for the sub! Spread the word :D
Great video, I'm currently learning Python and cool videos like these motivate me to keep on learning.
I love feedback like this Chris- it motivates me! So thanks for sharing it. Congrats on starting learning python, you’re going to love it.
Thanks a lot for the tutorial! I've been trying to figure out ML for some time now and this video helped a lot.
Glad it helped!
Fingers being detected as carrots: that was the most impressive bit of AI humour :)
Mr Rob, I have been training a YOLOv4 model and was wondering if I can use the dataset I used to train my YOLOv4 on YOLOv5 or even YOLOv7, or do I have to re-annotate all the images into a proper new format?
Awesome video as always!!!
Thanks for watching! Tell your friends 😊
I love how the model detected the traffic light in the bonnet at 10:16. Possibly dangerous if it was detecting something upside down in a reflection. I actually didn't know about doing pip install -r requirements.txt. Always learn the most random tricks from other people.
WARNING! ELON MUSK SHOULD NOT USE THIS MODEL TO MAKE SELF DRIVING CARS!!! :D Just kidding (but also not kidding). Remember, in the video at that timestamp we made the detection confidence threshold very low and the IoU threshold very high, so it will over-predict a lot of false positives. Thanks for watching!
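To make that concrete, here is a toy sketch of how the two thresholds interact (the detection dicts and helper names are hypothetical, not YOLOv5's internal NMS code): lowering conf_thres keeps weak detections, and a high IoU threshold makes NMS keep near-duplicate boxes, so together they flood the output with false positives.

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) pixels."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def keep(detections, conf_thres):
    """Drop detections whose confidence is below the threshold."""
    return [d for d in detections if d["conf"] >= conf_thres]

dets = [{"conf": 0.9}, {"conf": 0.3}, {"conf": 0.05}]
print(len(keep(dets, 0.25)))  # 2
print(len(keep(dets, 0.01)))  # 3  <- over-prediction at a very low threshold
```

The IoU function is what NMS uses to decide whether two boxes are "the same" detection; with a very high IoU threshold, overlapping duplicates survive instead of being suppressed.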
Thank you so much for the step by step guide!
Bro, did you build it?
Hi, I want an audible warning or warning system to work in any program interface when only people are detected among the detected objects. What method should I try for this?
I'm not sure about the audible part, but you could easily make a python program that checks a video stream to see if people are detected. Playing the sound would depend on your operating system. Check out the top response to this SO post: stackoverflow.com/questions/16573051/sound-alarm-when-code-finishes
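A minimal sketch of the detection-side half of that idea (the label lists and the play_alarm placeholder are hypothetical; the actual sound call is OS-specific, as the linked Stack Overflow answer covers):

```python
def should_alarm(frame_labels, target="person"):
    """Return True when the target class appears among one frame's labels."""
    return target in frame_labels

def play_alarm():
    # Terminal bell as a placeholder; swap in winsound.Beep on Windows or
    # an OS-specific audio player per the SO answer above.
    print("\a", end="")

# Simulated per-frame label lists coming out of a detector:
for labels in (["car", "dog"], ["car", "person"]):
    if should_alarm(labels):
        play_alarm()
```

The real integration point would be wherever your detection loop produces class names for each frame.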
Thanks For Sharing! Great Education!
Hi Rob! Thanks for the detailed explanation about YOLO.
Thanks for watching! I also have a video on custom object detection with yolov7! ua-cam.com/video/RXbtSwZsoEU/v-deo.html
The God is back! Thanks for sharing all this knowlege with us!
Just a mere mortal here! Hope you find the video helpful.
Great video! How can I do this with my own dataset of images and labels? Or add on to the current dataset?
Thank you for the video; it is very informative and helpful.
Hi Rob!! Great video. But I have few questions.
1. How do we train a yolov5 model on a custom dataset? E.g., I have a custom dataset of Amazon products; how can I train the model on this dataset?
2. Also, how can I send the prediction results to the front end? I am a backend engineer and am developing a web app for object detection. I know we can send text data in JSON format, but how do we send the camera feed along with the prediction results to the front end to display?
Can you provide some information if you have completed it?
What should be the command to enable the laptop camera?
Great video Rob! How can I take all the objects' names after processing and embed them in the video file as metadata? I have a bunch of videos and photos that I want to run this on so that I can find them later by searching through tags.
That's a good question. You could certainly modify the code to aggregate the labels, or use the output of the txt file to post process. I'm not sure of the best way to add metadata to video though.
Hey, I'm having trouble doing pip install -r requirements.txt. I am wondering if there are any required dependencies. If not, can I manually install the packages in the requirements.txt file?
Hi Rob. Thank you very much for this tutorial. I'm curious to know why you chose YOLOv5 instead of a later version (isn't 7 released now?), and I plan on following your example using the version you suggest.
Great point! At the time I released this yolov5 was the most commonly used. Even though it’s named yolov7 the open source version is not made by the same people who created v5. I want to eventually make a video showing the v7 version.
Thank you @@robmulla for letting me know this. I look forward to learning the differences in working with v7 and whether it can deliver significantly better results. This assumes we can expect v7 to receive updates and maintenance; otherwise, using v5 may be the best option if it's the main version, better supported, and much more commonly used. I think v7 should be renamed YOLO-derivative-v7.
Hi Rob! Nice video. Is there a way to optimize GPU VRAM utilization other than modifying the batch size? In my experience, most object detection projects require tons of VRAM; even an AWS p2.xlarge seems small for these kinds of tasks.
Great question. Honestly I'm not the best person to answer this. Like most things, you can only optimize for speed up to the limitations of the hardware. Usually if I'm running out of GPU RAM, then the batch size needs to be lowered. The issue might be that VRAM runs out because the images need to be loaded into memory before predicting. What type of images are you running on? I've run on 720p with fairly large batch sizes and no issues.
So cool! Is it easy to customize with another dataset? As it is already trained, how much data do we need in this new dataset?
Thanks! The default model is trained on the COCO dataset with some common labels. We can train a model that predicts custom objects. I will make a video soon that shows how that can be done.
@@robmulla perfect!
Hi Rob! Nice video. Is there any way to detect an object that isn't detectable by YOLOv5, like a 'zebra crossing'?
Hi! Great video, btw! Is there a way we can save the output to a text file?
Thanks for the feedback. Yes it’s an option in the detect.py file to save the results to a text file. You just need to set the flag.
I only know that YOLO is for image detection. Your video really impressed me and I really want to do this, but I don't know what interface you used or the code. Can you please explain this step by step 🥺
I'm using Windows with Python 3.10; is that okay? Can I please follow along from this?
I'm glad you are excited. I am using a linux machine and running everything in the command line. It should be similar with windows but I'm not as familiar with how to do it. Try using powershell and installing anaconda first. Good luck!
If I use this in CMD/PowerShell, will I get the output too??
Edit: I have used it, but the lc -tr command to read the files is not working.
@@user-uz6bf7op4f I think so
Hi, thanks for the effort! How can I change the size of the video and the speed?
I don't believe yolo can change the size of the video- but it can be done with cv2 in python. Look up "opencv python resize video and modify speed" speed is determined by FPS. Hope that helps.
Can you create something like VAR, like goal-line technology or offside detection?
Thank you ❤️
You’re welcome 😊
@@robmulla A suggestion: you are very good at teaching, but this tutorial is a little too fast. I had no idea what to do when the terminal showed 'git is not recognized as an internal or external command'. After a long time I installed the git software. If you had mentioned the git installation, that would have helped us a lot. And at some points the video goes very fast. So please keep it in mind for future videos.
But still, you are a teacher to me. Thank you so much for the video....❤️❤️❤️
@@dani9609 I really appreciate the feedback! I didn't think about how that might be confusing but am happy you mentioned it. Maybe I should make a different video about git. I'll try to do better in the next one.
How can I make the algorithm only detect and count cups that you are holding, and not detect other objects like glasses, faces, etc.?
Hmm. You might need to train a custom detector. But something like that can be hard because it involves more than just detection; the model needs to know the surrounding context in the image.
@@robmulla Wow!!!!! I never thought that you would reply. Thanks a lot for your helpful suggestion.
Awesome video! Do you know a way to save the detections? I need them to make a counter of how many people it detects in real time. Thanks!
At 4:41, in the arguments, see --save-txt. It saves the results to a '.txt' file.
That's cool!! How can I use this to track my soccer video and collect data? Cheers.
You could probably do that. You might want to check out my other video on training a custom object detector using yolo. Good luck!
Hey Rob, can you help us out with the object detection task in C++ using YOLOv5, OpenCV, and ONNX?
What if we have a video and we want to get the count of objects within a certain region (the 'Area of Interest') only?
This is very impressive
Hi, really great video! So while trying to do it myself, I ran into a bunch of "module not found" problems even though I did run requirements.txt. I manually installed those, however I seem to be stuck at "no module named yaml"; any help would be much appreciated.
Thanks for the feedback. You might need to `pip install pyyaml` (the PyPI package that provides the `yaml` module). If a module is not found, it usually needs to be pip installed. Hope that helps.
What is the terminal that you use?
The MATE terminal in Ubuntu.
Hi sir, I have a question please: how do I make object detection that can describe the scene? Like, I want my project to say "man sitting on a chair" or "man holding a teddy bear". Until now, what I did is make it say just the object in front of the camera, and I don't know how to make it describe the scene. Is that something about training my own dataset on sentences? Can anyone please help me with that? I am using YOLOv3, the COCO dataset, and pyttsx3 for the voice feedback.
Wow, that sounds like a really cool project. Definitely outside my knowledge with regards to object detection. Maybe look into something like this? www.analyticsvidhya.com/blog/2021/12/step-by-step-guide-to-build-image-caption-generator-using-deep-learning/
How can I find a ready-made template so I can count the number of people for my project? In Roboflow?
Hi Rob! Thanks for the brilliant explanation about YOLO.
Rob, can you please tell me how I can display the total objects detected in my webcam frame in real time? For example, in your scenario you are detected, so I want to write on the left side that 1 object is detected.
Is it possible?
TIA.
Yes, this is totally possible but would require working with the base code. If you look in the detect.py the detections are saved as "det" github.com/ultralytics/yolov5/blob/master/detect.py#L136
The code currently shows the box with detections but you could modify it to display just a box with text for the objects detected. Hope that helps.
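A rough sketch of that kind of modification (the det row layout (x1, y1, x2, y2, conf, cls) follows detect.py, but this helper and the shortened names list are illustrative stand-ins, not code from the repo):

```python
from collections import Counter

def count_objects(det_rows, names):
    """Count detections per class from detect.py-style rows.

    Each row is (x1, y1, x2, y2, conf, cls); names maps class index to name.
    Returns the total count plus a per-class Counter, which could then be
    drawn onto the frame with cv2.putText.
    """
    counts = Counter(names[int(row[5])] for row in det_rows)
    return sum(counts.values()), counts

names = ["person", "bicycle", "car"]  # stand-in for the model's names list
det = [(10, 20, 110, 220, 0.91, 0), (300, 40, 420, 260, 0.72, 2)]
total, counts = count_objects(det, names)
print(f"{total} objects detected: {dict(counts)}")
# 2 objects detected: {'person': 1, 'car': 1}
```

Inside detect.py, you would call something like this on the det tensor each frame and overlay the returned string in the corner of the image.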
@@robmulla Thanks a lot, Rob.
You're an amazing human.
It's a request: whenever you get enough time, please try to make a video on how to write something on a live frame as I described before, because I've searched this topic a lot on the internet but didn't find the right answer or direction for this issue.
I'm so thankful to you.
Hello, do you think YOLO-World is better than the Google Cloud ones?
Hi, how can I switch the camera to the Tello drone camera?
Not sure what you mean. You would need the camera to be a source on your PC
Which terminal should I use on Windows?
Use Git Bash; I'm using it and it's working.
Interesting, sir.
I need to read more about it, but if you have time to share your knowledge with a fool, then I'd like to ask about:
1) Can we specify "what" we want to detect in the image? Let's say I'd like to detect only people and cars, not cellphones, traffic lights, kites, planes, etc.
2) Is it possible to receive bounding box coordinates to the .txt file like TOP_LEFT_X, TOP_LEFT_Y, WIDTH, HEIGHT for each detected object?
I think it is, but would be cool to have confirmation before the research, thanks!
Great questions @Vislone you are on the right track.
1. The base model was trained on the default COCO labels. Just google "COCO label list". You can, however, train a model on anything if you have enough labeled images. I still plan to make a video about that.
2. Yes, the bounding boxes can be written to a file, OR you can go into the actual detect.py and see how the code processes them and store them in a different way.
Good luck!
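To expand on both points: detect.py accepts a `--classes` flag (e.g. `--classes 0 2` for person and car only) and a `--save-txt` flag, which writes one line per box as `class x_center y_center width height`, all normalized to 0-1. A small sketch for converting those normalized values into the TOP_LEFT_X, TOP_LEFT_Y, WIDTH, HEIGHT format asked about (the helper name is mine):

```python
def yolo_to_pixels(xc, yc, w, h, img_w, img_h):
    """Convert a normalized YOLO box (center x/y, width, height)
    into pixel TOP_LEFT_X, TOP_LEFT_Y, WIDTH, HEIGHT."""
    box_w, box_h = w * img_w, h * img_h
    left = xc * img_w - box_w / 2   # shift from center to top-left corner
    top = yc * img_h - box_h / 2
    return int(left), int(top), int(box_w), int(box_h)
```

So a saved line like `2 0.5 0.5 0.5 0.5` on a 640x480 frame would map to a car box at (160, 120) with size 320x240.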
@@robmulla Thank you Sir :)
Heyy. Could you please help me out with this cv2 issue I'm fixing: "WARNING Environment doesn't support cv2.imshow() or PIL image.show()" on anaconda. I'll do whatever it takes to work this out. Please guide me through it. The detection is taking place, but the video isn't showing up.
Hey. Thanks for watching the video. I know someone else mentioned the same issue. I see there is an active discussion about it on the yolo GitHub page; you might want to read it here: github.com/ultralytics/yolov5/issues/9844
@@robmulla THANKS A LOT!!!!!!. I got it fixed. Once again thanks for helping me out.
Hello, whare can I find the python terminal? I just started learning python and I just downloaded it I don't know how to open the terminal where is it? Thank you
Welcome to the wonderful world of Python! I would recommend starting by installing Anaconda from www.anaconda.com/
After that is installed, if you are running Windows, you can load the terminal by searching for the "anaconda prompt". If you are running a Mac, you can search for "terminal".
Hope that helps!
How do you get the camera to run so fast? My detection runs at around 3-4 frames per second. Is it a computer spec thing?
Can you please make a video on weapon detection also...I am planning to do that and want some guidance
Thanks for watching! That's a very specific request :D I can't guarantee I will do that, but have you watched my other video about custom training? You might find it helpful: ua-cam.com/video/RXbtSwZsoEU/v-deo.html
Thank you for the reference…will look into it
Just replace the dataset.
Great video. I've trained a german traffic sign recognition benchmark dataset using yolov5. I have used batch size of 128 and 300 epochs. I've also tried with batch size 64 and 100 epochs. However, it is not able to detect at a distance. It can only detect when the traffic sign is very very close to the camera. Any idea of what I did wrong?
That's great that you've trained your own model. I plan to make a video about the training process in the future. The problem you talk about is pretty common. What people sometimes do is train a two-stage detector. The first detector gets an idea for the scale of the objects and then the second predicts on the rescaled version. Also, when training you can augment your labeled images to be varying sizes so yolo doesn't overfit to the large signs. Of course if the sign is extremely small the model will always have a difficult time detecting. Hope that helps.
Hey! I'm on WSL and it won't access my webcam. Is this because I'm in WSL? Do you know why, or a workaround?
Is it the same installation for Windows?
What's the name of the terminal you're using?
This is the MATE terminal in Ubuntu. I have a whole video on my setup you should check out: ua-cam.com/video/TdbeymTcYYE/v-deo.html
The requirements aren't installing for some reason, mainly the scipy packages. Please help!
Sorry to hear that. Maybe paste the error message here? Are you using anaconda? Another way to test it out without having to do the setup would be to use something like google colab or kaggle notebooks - that will have everything preinstalled (except yolo). Those won't work with a local webcam though. Usually when I get errors I copy and paste them into google and 99% of the time someone has posted about the same problem on stack overflow.
@@robmulla Okay, switching over to Google Colab. I don't need to use it on a webcam right now anyway. So all I do is clone the repo in the beginning, and then do I try to install the requirements.txt file? Because it's not working in Colab.
How can I calculate metrics for this model, like mAP, IoU, and the confusion matrix?
Thanks for the video.
Great question. I didn’t go into detail about them but implementations of them can be found on GitHub or in the yolo source code.
How do we make it work for our own custom object?
Great question. I actually have a video that discusses this in detail. Check it out here: ua-cam.com/video/RXbtSwZsoEU/v-deo.html
@@robmulla this is perfect, i really needed this. Thanks a lot.
Can we do traffic sign detection with yolov5?
Absolutely! You just need a training dataset.
Bruh, after installing the weights it detects perfectly, but it's literally slow. Can you help me?
import torch
ModuleNotFoundError: No module named 'torch'
Some help here, please. I tried everything: I installed the module manually and rebooted the computer, but this error still appears.
It must not be installed correctly, or you are trying to run from a different environment, because otherwise it should be found when importing. Try running your code in a Kaggle notebook to double-check, maybe?
I had this error, and I ran it with Python 3.10 instead of 3.11; you should try this.
@@theophanedebellabre2676 OK thanks, I will for sure.
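A quick way to debug this kind of ModuleNotFoundError is to confirm which interpreter is actually running, since installing torch into one conda environment and then running detect.py from another is the usual culprit. A minimal check you could run first:

```python
import sys

# Print the interpreter path - compare it to where `pip install torch` ran
print(sys.executable)

try:
    import torch
    print("torch", torch.__version__, "is importable here")
except ModuleNotFoundError:
    print("torch is missing from THIS environment; activate the right "
          "conda env and reinstall with: pip install torch")
```

If the printed path points somewhere other than your conda environment, activate that environment (or use its pip directly) before installing.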
Hi! Can we run this in Google Colab?
detect: weights=yolov5s.pt, source=0, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 🚀 v6.2-228-g6ae3dff Python-3.7.15 torch-1.12.1+cu113 CUDA:0 (Tesla T4, 15110MiB)
Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
WARNING ⚠ Environment does not support cv2.imshow() or PIL Image.show()
[ WARN:0@5.409] global /io/opencv/modules/videoio/src/cap_v4l.cpp (902) open VIDEOIO(V4L2:/dev/video0): can't open camera by index
Traceback (most recent call last):
File "detect.py", line 258, in
main(opt)
File "detect.py", line 253, in main
run(**vars(opt))
File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "detect.py", line 103, in run
dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride)
File "/content/yolov5/utils/dataloaders.py", line 364, in __init__
assert cap.isOpened(), f'{st}Failed to open {s}'
AssertionError: 1/1: 0... Failed to open 0
I ran it and came across this error. Your suggestions, please?
You should be able to run it in Colab, but I believe you are getting an error because your webcam is not connected to the instance. You will need to run it on a video file.
@@robmulla Thanks for the response. Yeah, it is not accessing the webcam through Colab. Is there any way to connect to the webcam?
Hi - is it possible to use YOLO to search for objects, then read text off those objects?
i.e.:
when viewing a card from a certain card game, I want to then extract the text off that card
There are other libraries that specialize in text recognition, such as Tesseract, but I am sure you could use YOLO as well.
How do I count and store results from the webcam?
When running detect.py use the --save-txt flag to save the results in a text file.
Hi sir, this video was very helpful, but how can I get the detected object's name into a variable for a further function?
Can you please tell me the code?
when running detect.py you can add the parameter "--save-txt" which will save the output into a text file. The first column in that file will be the class labels associated with the COCO dataset: tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
@@robmulla No, I mean that I am using the object detection model in a Jupyter notebook on my laptop (offline) and I want to access the class name so that I can send the data elsewhere.
For example, if the object detected is a car, then the class name "Car" would be stored in a variable (let's assume we store the class name in variable x), so what would be the code for that?
And sir, thanks for the reply.
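One way to get those class names into a Python variable without touching detect.py is to parse the files that `--save-txt` produces, where the first number on each line is the class id. A sketch, with a deliberately partial id-to-name map (the helper name and the map are mine; the full list is in the COCO label reference linked above):

```python
COCO_NAMES = {0: "person", 2: "car", 3: "motorcycle", 9: "traffic light"}

def detected_classes(label_path, names=COCO_NAMES):
    """Read a yolov5 --save-txt label file and return the class
    names it contains, one entry per detected box."""
    found = []
    with open(label_path) as f:
        for line in f:
            if line.strip():
                cls_id = int(line.split()[0])
                found.append(names.get(cls_id, f"class_{cls_id}"))
    return found
```

After a detection run, something like `x = detected_classes("runs/detect/exp/labels/frame.txt")` would leave the list of class names in `x`; the path is illustrative, since yolov5 names the output folder after your run.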
How do I save the new file?
Sir, can you please do YOLO object detection with distance and voice feedback?
Interesting idea. I’ll have to think about it.
Is there a way to run this in a single Python file? @Rob Mulla
How can I extract text from an image using yolov5 and store that text somewhere?
I don’t think that YOLO would be the best thing to use for this unless you are looking for specific text like a STOP sign. You might want to look into OCR techniques.
How can I show five videos at a time on the same screen? Please help, sir.
Hi, thank you for this upload. I had a question about increasing the accuracy: how does one do so? Because right now it's detecting a squirrel as a giraffe 😭
Thanks for watching. Great question. To make the model better at predicting specific objects it’s best to train or “fine tune” the model on a dataset of additional labeled images. I plan to make a video about this process. In your example you would need to train on a giraffe or squirrel specific dataset.
Does it work on Windows?
Error when detecting a video file (yolov5 "not a supported format")
Are you sure you are referencing the correct file/location?
@@robmulla Yep, that's what I found: raise NotImplementedError(f'ERROR: {w} is not a supported format')
NotImplementedError: ERROR: yolov5x is not a supported format
Great video. How about a video tutorial implementing YOLOv5 with a python script?
Thanks for the feedback Richard. What do you mean by running from a script, exactly? The code that I ran directly from the yolov5 repo is essentially a script. I was thinking that depending on how popular this video is, I could make follow-up videos showing how to train yolov5 on a custom dataset and applying it so that the prediction boxes are stored.
@@robmulla When I said Python script, I meant implementing YOLO detection in a custom script that performs a triggered operation depending on the detection/labels.
For example: if the label is 'car', save the frame as an image with the label as the filename, or trigger something else to occur, like turning on outside lights.
That's a great idea. I'll have to think about how that would be implemented, but it is definitely doable.
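A minimal sketch of that trigger idea, kept separate from yolov5 itself: collect the labels for a frame however you like (from the `det` tensor in detect.py, or from `--save-txt` output), then hand them to a dispatcher that fires a callback per label. All names here are illustrative, not part of any library:

```python
def dispatch_triggers(labels, triggers):
    """Run the callback registered for every label that was detected.

    labels: iterable of detected class names for one frame.
    triggers: dict mapping a label to a zero-argument callable.
    Returns the list of labels whose trigger fired."""
    fired = []
    detected = set(labels)
    for label, action in triggers.items():
        if label in detected:
            action()
            fired.append(label)
    return fired

# e.g. dispatch_triggers(["car", "person"],
#                        {"car": save_frame, "person": lights_on})
# where save_frame might call cv2.imwrite("car.jpg", frame) and
# lights_on might hit a smart-home API - both hypothetical callbacks.
```

Keeping the triggers as plain callables means the detection loop doesn't need to know anything about what each action does.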
Git not recognised? 😢
Thanks!
Hi Rob! Nice video
Can I use this code for text detection?
Probably not great for text but I have a video for that! Check it out here: ua-cam.com/video/oyqNdcbKhew/v-deo.html
I need your help to complete my project idea.
I have 3 images; in each image I have drawn a circle, and in the circle I put the text "image".
Somewhere I replace the circle shape with another shape.
I want to add another image onto this circle.
I need help detecting that shape, its position, and its size, and after detecting all of that, placing the second image into the main image according to the detection and saving it.
I can't write this code.
Please help me.
I can share my images with you if you want.
Please give me some ideas or some detection code.
Is it possible to detect unwanted weeds among crops and then remove them with a robot? Basically, what I am asking is: is it possible to combine IoT and image processing?
Absolutely! People are doing it already. Although it would require learning a lot more than what I'm showing in this 10 minute video :D
Module not found error: 'ultralytics'
Kindly pip install ultralytics
What if I only want to detect Humans?
Hi, it was a really great video, thank you so much for your effort. I got it working, but the video is so slow while running and I don't know why this is happening.
Glad you liked the video. It may be slow if you don’t have a GPU. You can try running it in a Kaggle notebook - it might be faster.
thank you
You’re welcome!
Can I use this code for text detection?
Error: module not found 'torch'
Not able to follow on Windows.
Well, I was able to do it, though.
@@Naxura How did you do it? Can you tell me?
I cannot identify the source number of my camera.
Don't use your camera unless you have to; I get issues too. I switched to just detecting the objects on my screen instead. I'm still running the detector, it just doesn't go through the camera, lol; it detects off of my screen.
Can you please upload a video using PyCharm? 🥺
Hey Alamin. I don't use pycharm but the IDE you use shouldn't impact your ability to run this code. Good luck!
Heyy, I found your video very useful. I would be glad if you could help me with the following error, which I am facing while running the same code: "Environment does not support cv2.imshow() or PIL Image.show()". I hope you can help me out as soon as possible.
Thanks! Are you running on a system that doesn’t have a monitor directly connected like a remote server? You might need to disable the image output and instead store the results to a file using the appropriate flags. Hope that helps.
@@robmulla It's all on a laptop that I am using. The objects are being detected; it's just that the video isn't playing. Anyway, thanks for helping me out. 🙂
the one frame where it detects your finger as a hotdog 5:57
def main(opt):
    """Executes YOLOv5 model inference with given options, checking requirements before running the model."""
    check_requirements(ROOT / "requirements.txt", exclude=("tensorboard", "thop"))
    run(**vars(opt))

if __name__ == "__main__":
    opt = parse_opt()
    main(opt)
I'm getting an error, please help me 😢
Hey sir, I am doing a project related to object detection using YOLOv5. If you don't mind, can you help me with how to integrate voice output for this object detection? As soon as possible, please!
A nice video
Thanks!
Explain how it is actually done.
If you've never heard YOLO before, it stands for "you only live once".
(Bob) Go big or go home.
lol! You are correct! I love to YOLO and BOGO every day 😅 - this YOLO is different even though it has a similar name.
Hey there, thank you for providing a step by step process of getting it done. I've managed to open it up using my webcam. However, I am unable to open up a mp4 or image. The error that I received is:
raise ReaderError(self.name, position, ord(character),
yaml.reader.ReaderError: unacceptable character #x0000: special characters are not allowed
But then again, I have searched online and can't seem to resolve this issue. Would appreciate if you have any feedback on this. Cheers.
Thanks for the feedback. I’ve never seen that one before. Could it be an issue with the file name? Try renaming it to something simple. Good luck
Webcam detection says "Failed to open 5".
Make sure you are using the correct web cam number for your system. Might be 0 or 1 and you change that in the command you run.
is this open source? Impressive.
I can't run a video file.
It works for me. You might want to try yolov8 that was recently released.
@@robmulla ok
But how can I use my GPU for running this? My CPU is being utilized while the GPU is completely free. How do I do this? Please help.
Please reply.
Test it with a Jetson Nano.
Proper.
Thanks! I think?
I think the video is a little outdated; you now provide the video directly through the --source argument.
Oh really. I didn’t realize it changed. Check out my video with yolov7