Thanks a lot man ....❤
The best video on YOLOv8 with Google Colab 🙌🏻
😊😊😊
Thank you. Thank you. Thank you Mr. Coder Zero. I have spent a year and a half trying to figure out this stupid ass YOLO object detection and finally, courtesy of your video, I have now managed to figure it out. Hot dog! I am now a happy camper. I can now expire in peace!
Happy to help 😀
Thank you so much! I literally searched through so many videos and finally found yours. It was a really great help, thank you 😄
😀
I tried many tutorials, but this one is the best and briefly explains each term. Thank you very much.
Thank you :)
Thank youuuuu thank youuuu very much Mr... You helped me a Lot !!! May God Bless you
Glad to hear that😀
For those who had the issue of the files not being saved to "runs/detect/predict" after inference finishes (regardless of whether it's for images or videos), try adding "save=True" at the end of the command.
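For reference, a minimal sketch of what that inference command can look like with save=True at the end; the model and source paths here are placeholders, adjust them to your own project:

yolo task=detect mode=predict model=runs/detect/train/weights/best.pt source=/content/test_images save=True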
Thanks a lot dear
Yes 😃
thank you so muuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuch
bro that was very helpful. It worked
Thanks man
Very good video, everything is explained in detail. Thank you for that. Keep helping!
😃😃😃
thank you so much for being so clear and concise
Glad it was helpful! 🤠
Thank you so much, sir, for the excellent explanation. It really helped me in my project. Again thank you, sir.
😃
Thank you for the tutorial. I'm having trouble seeing the results on a video. I followed the exact code from your tutorial and uploaded a test mp4 file. When I infer on the video, the program shows that the video is being processed frame by frame, but when it finishes, it doesn't show up on my drive
edit:
Leaving my comment up in case anyone has the same problem. The issue was that it takes a while to upload a video to the drive. I used a 30-second video but it took about 10 minutes. It doesn't upload as quickly as an image.
Use save=True to save the video. Find the output path. Then you can use the Linux copy command cp to copy this output video to your Drive.
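A rough sketch of what those steps can look like in Colab cells; the paths are assumptions, and the predict folder name may get a number suffix if you have run inference more than once:

!yolo task=detect mode=predict model=runs/detect/train/weights/best.pt source=/content/test_video.mp4 save=True
!mkdir -p /content/drive/MyDrive/yolov8/output
!cp runs/detect/predict/* /content/drive/MyDrive/yolov8/output/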
I have a huge dataset, and after running 10-15 epochs I always get an error message saying an image was not found, but in reality it is always there in the dataset.
Can you help?
In my local environment I never had that error message, but I tried Google Colab because it has more resources.
Thanks, sir, for the explanation
Thanks 😄
I have one doubt: we pass the image paths for training and validation in the yaml file. What about the labels? We don't pass anything related to labels, so how does the model know about them?
In the algorithm, it will replace 'images' with 'labels' in the path and take care of the labels 🙂
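In other words, Ultralytics derives the label path from the image path, so a layout along these lines (folder names here are just an example) is picked up automatically without listing labels in the yaml:

dataset/
  images/
    train/        img001.jpg, img002.jpg, ...
    validation/
  labels/
    train/        img001.txt, img002.txt, ...
    validation/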
@@coder_zero okay thanks a lot for this tutorial
One doubt I had: after testing, the video is getting saved as individual frames and not as an mp4. What should I do?
Thank you for the tutorial. May I please know why I get this error (FileNotFoundError: [Errno 2] No such file or directory: '/content/drive/MyDrive/yolov8/data/download_images.py.jpg') when I run the code to split the dataset?
Getting the same error
I'm also getting the same error
Have you figured it out?
inside data.yaml file, try these:
path: C:\**yourprojectfoldername**\dataset # dataset root dir
train: images\train # train images (relative to 'path')
val: images\validation # val images (relative to 'path')
I had the same error. It turns out that if you upload files to Drive you need to unselect automatic conversion (Settings > Uploads > untick 'Convert uploads') so that the format is right for the script. (If it's wrong, you might be getting, for example, '..jpg' instead of '.jpg'; easy to miss.)
Can you tell me where you downloaded the 8 videos in the drive?
Thanks tons sir!
Most welcome!
bravo thx for the tutorial sir
😄
Where will I get the videos? They are not in the Kaggle dataset.
I would like to ask: what if I want to detect other objects like resistors, inductors, and capacitors, which are electrical components? How would I do that?
Hi, the steps are: data creation, then model training. You have to collect images of all the required items in various conditions, then annotate them in YOLO format to create the dataset. Then you can follow this tutorial to train and test the model.
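For context, YOLO-format annotation means one .txt file per image with one line per object: the class id followed by the normalized box center and size. A made-up example line for a class you might call 'resistor' (class ids follow whatever order you define in dataset.yaml):

0 0.512 0.430 0.210 0.155    # class_id x_center y_center width height, all relative to image size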
Thank you for the video, sir. However, what I want from you is this: In the last part, I want to open a webcam instead of a video and test whether it recognizes it that way. What can I write?
Hi, you can follow this tutorial ua-cam.com/video/O9Jbdy5xOow/v-deo.htmlsi=jgSZRk7ZjyuCt9o8
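If you are running locally (a webcam won't work directly inside Colab), a minimal sketch with the Ultralytics Python API looks roughly like this; the weights path is an assumption:

from ultralytics import YOLO

# Load your trained weights (replace with your own best.pt path)
model = YOLO("runs/detect/train/weights/best.pt")

# source=0 opens the default webcam; show=True displays the annotated stream
model.predict(source=0, show=True)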
if I have only one class to detect, I have to make changes in the dataset.yaml file right?
Yes
@@coder_zero thanks!! and hey, if you could also help with one more thing please: how can I train this model with videos instead of images?
@@lightning00769 have a look at this github.com/ultralytics/ultralytics/issues/7206
Nice tutorial man, just one question: how do I get the locations (coordinates) of the boxes that are drawn on the test images?
Hi, have a look github.com/ultralytics/ultralytics/issues/7719
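For anyone who wants the gist without reading the thread, a minimal sketch using the Python API; the weights and image paths are assumptions:

from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")
results = model("test.jpg")

for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()  # corner coordinates in pixels
    cls_id = int(box.cls[0])               # class index
    conf = float(box.conf[0])              # confidence score
    print(cls_id, conf, x1, y1, x2, y2)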
This looks promising.
Thank you very much for your video.
You are welcome😀
Thank you so much my bro my love
😄😄😄
Did you manually keep some images for testing, or were they split by the code?
Yes, I did. It's best practice to keep some images for testing. Make sure the model has not seen these during training and validation.
Thank you for sharing.
I want to ask something. I have trained the data on Colab, but the bounding box colors for class 1 and the other classes are almost the same. How do I change the bounding box color?
We can use the Python version of the inference code; it will give the bboxes. Then we can use cv2 to draw them with custom colors 😃
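A minimal sketch of that idea, assuming two classes and made-up paths; the colors are BGR tuples you can set per class id:

import cv2
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")
img = cv2.imread("test.jpg")
results = model(img)

colors = {0: (0, 0, 255), 1: (0, 255, 0)}  # BGR color per class id

for box in results[0].boxes:
    x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
    cls_id = int(box.cls[0])
    cv2.rectangle(img, (x1, y1), (x2, y2), colors.get(cls_id, (255, 0, 0)), 2)

cv2.imwrite("output.jpg", img)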
How many epochs and images are best to use for good object detection?
Hi, it depends on your dataset and project. Also, don't focus on the number of images; focus on the distribution of the data. For example, if you take 20,000 images of cars in the daytime and train the model, it will struggle at night, on rainy days, or in foggy weather.
Hi, I have a question. If I divide the dataset into training and testing only, is it necessary to run the validation part? And if not, how do I find out the mAP during inference?
You can create a custom function for this.
@@coder_zero If I split the data into train, validation, and test sets, is the mAP calculated based on the validation part, or do we need to create a custom function to determine the mAP from the test data?
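By default the metrics come from the validation split in dataset.yaml; if you also list a test: path there, recent Ultralytics versions let you evaluate on it directly. A rough sketch (the split argument and paths are assumptions to check against your version's docs):

from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")

# Evaluates on the validation split from dataset.yaml by default
metrics = model.val(data="dataset.yaml")

# Evaluate on the held-out test split instead (requires a 'test:' entry in dataset.yaml)
test_metrics = model.val(data="dataset.yaml", split="test")
print(test_metrics.box.map50, test_metrics.box.map)  # mAP@0.5 and mAP@0.5:0.95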
Great work !
Thank you😊
thanks a lot!
Welcome 🙂
Is it advisable to use a Jupyter notebook in VS Code rather than Google Colab to train on more than 10k images? Suppose I have a Ryzen 7 laptop.
Hi, you can use a Jupyter notebook. Change the directories accordingly and then set the batch size to 8.
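A sketch of what the local training command can look like with that batch size; the data path, epochs, and image size are assumptions (add device=cpu if the laptop has no NVIDIA GPU):

yolo task=detect mode=train model=yolov8n.pt data=dataset.yaml epochs=50 imgsz=640 batch=8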
Hi, nice video. I have a problem: the predict model runs well but doesn't create the /runs/detect/predict folder in YOLO 8.0.46. Maybe you know why?
Same issue here...
add "save=True" in the prompt for predictions
Yes, Try this solution.
@@coder_zero thank you everyone, after reading the documentation I found this argument has been available since 8.0.23
Thank you so much :)
You're welcome!😀
My dataset consists of 500 images but it is only training on 51 images. Why? What could the problem be?
Are all 500 images annotated?
Hi, please make sure you annotate all images. Splitting of data should be done carefully. And ensure you give the correct path to the model during training.
greattt
Thank you 😊
Please, I'm having a problem with the splitting. I don't see where the code reads the dataset folder.
I couldn't get the automated splitting to work.
Hi, it's a simple function. It creates a list of all the image paths and splits it 80:20, then copies the images and their label files into the corresponding folders. You can create your own custom function for this train-val split.
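This is not the exact function from the video, just a rough sketch of the idea with assumed folder names:

import os, random, shutil

def split_dataset(image_dir, label_dir, out_dir, train_ratio=0.8):
    # Copy images and their YOLO label files into train/validation folders
    images = [f for f in os.listdir(image_dir) if f.lower().endswith((".jpg", ".png"))]
    random.shuffle(images)
    n_train = int(len(images) * train_ratio)

    for i, name in enumerate(images):
        split = "train" if i < n_train else "validation"
        stem = os.path.splitext(name)[0]
        for sub, src in (("images", os.path.join(image_dir, name)),
                         ("labels", os.path.join(label_dir, stem + ".txt"))):
            dst_dir = os.path.join(out_dir, sub, split)
            os.makedirs(dst_dir, exist_ok=True)
            if os.path.exists(src):
                shutil.copy(src, dst_dir)

split_dataset("raw/images", "raw/labels", "dataset")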
For some reason, when I tried to use the .yaml, it says the dataset could not be found. Any thoughts on why that happened?
Please ensure that you have created the data folder structure as discussed in the tutorial, and that you have put the correct directory in dataset.yaml.
@@coder_zero ay yes it's done, thanks
Is there any way for this model to be fine-tuned? If yes, how? And what exactly are the parameters within the model?
thanks man
Happy to help😄
Hello,
Can we add an object trained on a custom dataset to the existing 80-class YOLO weights, as a single set of weights for 80+1 classes? Can we extend the existing 80 classes?
Normally the YOLO weights cover 80 object classes.
Can we add new objects to these weights by training with custom datasets?
Thanks.
Hi, I have never tried this. But theoretically you can try creating new weights by taking an average of your pre-trained weights.
Please make a video on YOLOv8 for semantic segmentation; I'm working on it but getting errors:
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 107 but got size 0 for tensor number 1 in the list.
hi, working on it. Will release soon :)
@@coder_zero thanks
@@ArifHussain-fs5jz hey, I am getting the same error. Please let me know if you solved it.
Hey, your video is very good and easy to understand. Could you make a tutorial on how to install YOLOv8 for AMD on Windows 11? There is a tutorial on the AMD YouTube channel about this, but it is too hard to understand. Looking forward to your quick response.
Thank you. I am sorry 😔 I can't help you with this as I have an NVIDIA system.
Can we do this with a real-time PC webcam and identify objects?
Yes, we can.
Hi Deepak, I couldn't find any video for inferencing on Kaggle. Where did you get that?
Hi, it's good practice to follow the notebook but work with your own data according to your own project.
Where can we find the test_images dataset? Same with the videos.
Hi, you can take videos and images from the internet; that's what I did. The main purpose of this video is to show the training process. You can follow the tutorial with your own dataset. 😀
@@coder_zero it is working ty
@@coder_zero Hello! Is there another way that doesn't require purchasing GPU time in Colab?
@@coder_zero Hello, can I implement this in a GUI? And which one?
Hello, the content is very good; I downloaded it to learn. Could you send me the link to download the files from the output, video, vid2, training result folders and the test image? I can't find them. I would appreciate it.
Sorry, I couldn't share the output videos.
Great video and explanation. How do we get the test images? Are they part of the content in the data folder?
Hi, you can keep some images from the dataset for testing purposes.
Thanks for the video, sir, but what about datasets that do not have any annotated boxes?
Hi, you can create your own dataset. Have a look at this video ua-cam.com/video/v-HIYfOqQeU/v-deo.html
Thank you so much
You're most welcome😄
Where is the Google Colab code? I'm new to YOLO. I have downloaded the dataset but I can't follow the tutorial. Can you please share the Google Colab code so I can follow along?
Thank you.
Hi, you can find the notebook at github.com/deepakat002/yolov8
What a great explanation. I'm wondering how I can generate the labels for the images in that format? Thank you again, sir.
Hi, you can use labelImg for annotation
@@coder_zero
Is there labeling software that is not restricted to boxes?
Is the GPU instance provided by Google Colab free to use?
Hi, yeah, the GPU instance is free up to a certain usage limit. After that you can wait 24 hours to get free access again.
Why are my results not getting saved after running the inference step?
hey help bro I'm stuck
what about the accuracy of the model?
In detection models we focus on the mAP value. This gives a good sense of model performance.
After running the inference code, no runs directory was created?
help bro
This was the easiest video on YT and I'm still stuck
@@Anutosh13 add another argument save=True in the inferencing cell
@@chiragubnare2944 yupp did that, thank you
Sir, where can we find these 8 videos?
Hi, you don't need these videos. You can follow the tutorial with your own dataset 😃
How do I convert the model to a TensorFlow model?
Hi, you can use: yolo export model=yolov8n.pt format=pb
For details about supported export formats, have a look at github.com/ultralytics/ultralytics/blob/16639b60ebc63111d0283edf9cf37f4b5ce479b9/ultralytics/engine/exporter.py
Thank you so much, but you did not talk about annotation.
Hi, you can use labelImg for annotations. I will be creating a short video on annotations soon.
Please cover training PaddleOCR on Colab.
Can you provide a Drive link for the video dataset?
Sorry, I don't have that anymore. Please follow this tutorial with your own dataset; that way it will be much more helpful 😃