Did you enjoy this video? Try my premium courses! 😃🙌😊
● Hands-On Computer Vision in the Cloud: Building an AWS-based Real Time Number Plate Recognition System bit.ly/3RXrE1Y
● End-To-End Computer Vision: Build and Deploy a Video Summarization API bit.ly/3tyQX0M
● Computer Vision on Edge: Real Time Number Plate Recognition on an Edge Device bit.ly/4dYodA7
● Machine Learning Entrepreneur: How to start your entrepreneurial journey as a freelancer and content creator bit.ly/4bFLeaC
Learn to create AI-based prototypes in the Computer Vision School! www.computervision.school 😃🚀🎓
I'm having a problem opening the 'Step by step tutorial on how to download data' video link.
"It's very very important that the folders must be named "train" and "val"" That just saved my entire career
very very very very very very very very very importance
Just these 4 words are enough to summarize the video: "bery bery bery amaizing" :)
😂😂 Glad you enjoyed it. 🙌
Thankyou for giving ideas how to train in different platforms ❤
You are welcome! 😃🙌
Your explanation was excellent! It would be even better if you could demonstrate it through testing
Always the most exciting notifications 😂
totally right xD
Thank you guys 😄🙌
Hi. I downloaded a dataset from roboflow with three folders: train, valid, and test.
1. First question: Do I need to transfer my test images and labels to the valid folder?
2. The "model.train" call is not working. It's probably because of the yaml file.
The error says, "Dataset '/content/gdrive/My Drive/Object Detection project/data.yaml' images not found ⚠, missing path '/content/datasets/content/gdrive/My Drive/Object Detection project/valid/images' Note dataset download directory is '/content/datasets'. You can update this in '/root/.config/Ultralytics/settings.yaml' ". What should I do now?
Make sure the path to data is absolute, and also try removing the white spaces in your directories.
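For reference, a minimal sketch of a data.yaml with absolute paths; the Drive folder name (renamed without spaces) and the class name are placeholders you would adapt to your own Roboflow export:
train: /content/gdrive/MyDrive/Object_Detection_project/train/images
val: /content/gdrive/MyDrive/Object_Detection_project/valid/images
test: /content/gdrive/MyDrive/Object_Detection_project/test/images
nc: 1
names: ['your_class']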
If I want to train object detection for multiple object classes, can I just dump all the images into images/train and all the labels under labels/train, or do I have to have separate folders for each and every object class?
This error keeps showing up, please help me out:
cp: cannot stat '/content/runs': No such file or directory
Can I know from where you opened his Google Colab page?
Sir, I have followed the folder structure that you suggested to upload to Google Drive.
However, the dataset that I use consists of regular photos that I collected myself, so I don't have a labels folder containing annotation files for the training photos and validation photos.
My question is, do I still have to create the main labels folder which contains 2 other folders, namely the train folder and the val folder?
Love the simplicity of the way you teach; it's training right now, I hope it works. One question: after training the model, how can we test it on Colab, like giving it other images and seeing if it detects the object?
Thank you! This is a script you can take as a reference on how to make inferences: github.com/computervisioneng/train-yolov8-custom-dataset-step-by-step-guide/blob/master/local_env/predict_video.py 🙌
How do I feed test data into this model?
Like, if I want to put in random images from the internet that the model has never seen before, for it to identify objects in the image?
Take a look here docs.ultralytics.com/hub/inference_api/#detect-model-format.
Basically, you need to do:
from ultralytics import YOLO  # import needed to load the trained weights
model = YOLO(model_path)  # path to your trained model, e.g. best.pt
results = model(image_url)  # run inference on an image path or URL
@@ComputerVisionEngineer thanks!!
Hello sir, can you please tell me how to use that trained model on an input image?
For the labels folder (train and val), where can I find the folder?
I installed ultralytics in Google Colab but it is showing "No module named 'ultralytics'". What is the solution? I am using an AMD processor and graphics.
Also, it is very very very important to repeat some words several times :)
😄🙌
Thank you bro!!! God bless!
Please, how can I build a Streamlit app for the custom model I have created?
After training the model, if I want to do real-time detection using my webcam, how do I do it?
Hi, are you able to use the CUDA cores of the Tesla T4 GPU while training? It should automatically detect a GPU if one is present, as stated in the Ultralytics documentation. That's not happening for me and by default the CPU is being selected, which increases the training time. Do you have any solution for this?
Hi, I don't have a problem when using Google Colab, but to train on an EC2 instance with a GPU I need to install the CUDA drivers first and also a specific version of PyTorch. I explain it in my video on training YOLOv8 in the cloud. 🙌
Thanks for the prompt reply. I was somehow able to circumvent the problem. Thanks for the info regarding the EC2 instance too, that was very helpful. Will check the video. @@ComputerVisionEngineer
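For anyone hitting the same GPU issue, a quick sanity check (just a sketch; the config path and device index are assumptions, not from the video):
import torch
from ultralytics import YOLO

print(torch.cuda.is_available())  # should print True if the T4 and CUDA are visible to PyTorch

model = YOLO('yolov8n.pt')
model.train(data='config.yaml', epochs=10, device=0)  # explicitly request GPU 0 instead of relying on auto-detection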
Thanks for this tutorial. I didn't see how you normalized the images before training. Could you please explain more about image normalization for YOLOv8 training?
How can we download images from the Open Images Dataset? There is no download option. And also, how can we train our custom dataset without using Google Colab? I want to use the PyCharm IDE. By the way, thank you.
Hi, take a look at my other video on how to train a model with YOLOv8, I explain how to do it without using Google Colab. The instructions on how to download images from the Open Images Dataset are available on my Patreon. 🙌
Can I know from where I can open your Google Colab page or directory? No link is given in the description.
🤥
Any time I train I get 'FileNotFoundError', and I have followed all the steps multiple times.
Hello, what file is not being found?
Thank you very much Master
Permission to ask, sir: how do you label data automatically? Because if that amount of data is labeled manually, for example 80 thousand images, it will take a long time.
Good question.
It takes me 20 hours to annotate 150 images with 10 objects per image. 20 different classes.
There are different strategies; one of them is to annotate only a subset of your data (1% for example) and train a model with that data, then use the model you trained to make predictions on 10% of the data, and manually fix those annotations (fixing them will take you less time than annotating from scratch). Then repeat, increasing the size of the curated annotated dataset incrementally until it reaches 100%. 🙌
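A minimal sketch of that model-assisted labeling step, assuming an Ultralytics install and YOLO-format labels (the config path, epoch count, run folder and image folder are placeholders):
from ultralytics import YOLO

# 1. train an initial model on the small, manually annotated subset
model = YOLO('yolov8n.pt')
model.train(data='config.yaml', epochs=50)

# 2. pre-annotate the next batch of unlabeled images;
#    save_txt writes YOLO-format .txt label files you can then correct by hand
model = YOLO('runs/detect/train/weights/best.pt')
model.predict(source='unlabeled_images/', save_txt=True)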
Can you tell me or point to a resource that explains how to train a custom model that can detect two different objects using separate images of each thing (say one set of images of cats and one set of images of dogs)? Please and thank you.
Thanks for the tutorial. As far as I know, in your example you have results for the training and validation data. What if we want to test it on another, separate test dataset and get the results (F1 score, recall, etc.) from it? I mean, if we have a test dataset as a whole and we want to get performance results on that dataset, what do we do?
If I'm not mistaken, I think you can specify a test set in the yaml config file. If that is not possible, you would need to run inference on the entire test directory and compute your desired metrics yourself. 😃🙌
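A sketch of what that could look like with Ultralytics, assuming your config yaml has a test: entry pointing at the test images (paths and the run folder are placeholders):
# in the data yaml:
# train: images/train
# val: images/val
# test: images/test

from ultralytics import YOLO

model = YOLO('runs/detect/train/weights/best.pt')
metrics = model.val(split='test')  # evaluate on the test split instead of the default val split
print(metrics.box.map)             # mAP averaged over IoU 0.5:0.95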
Can this training be done on a CPU or do I need a GPU?
Not sure how important it is to use folder names like 'data' and 'images'?
very nice
Very good video, but I have a question: how can I make it predict or detect on a video that I have in Google Drive and show me the result with the labels on it?
Try with this script from my github repository: github.com/computervisioneng/train-yolov8-custom-dataset-step-by-step-guide/blob/master/local_env/predict_video.py
Edit video_path and video_path_out to the locations in your Google Drive. 🙌
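For reference, a minimal sketch of what that script does, assuming opencv-python and ultralytics are installed (the video paths and the weights path are placeholders to point at your own Drive files):
import cv2
from ultralytics import YOLO

video_path = '/content/gdrive/MyDrive/your_video.mp4'          # input video (placeholder path)
video_path_out = '/content/gdrive/MyDrive/your_video_out.mp4'  # annotated output (placeholder path)

cap = cv2.VideoCapture(video_path)
ret, frame = cap.read()
h, w = frame.shape[:2]
out = cv2.VideoWriter(video_path_out, cv2.VideoWriter_fourcc(*'mp4v'), cap.get(cv2.CAP_PROP_FPS), (w, h))

model = YOLO('runs/detect/train/weights/best.pt')  # your trained weights

while ret:
    results = model(frame)[0]
    out.write(results.plot())  # plot() returns the frame with predicted boxes and labels drawn on it
    ret, frame = cap.read()

cap.release()
out.release()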
Permission to ask, sir: I ran into a problem with my training session. I tried to train the model with "epochs=300" (I was told that's the ideal value), but when I run that training session, it just reaches epoch 189/300 and then stops automatically. Is there a way to fix it, sir? Thank you for the really helpful tutorial.
Hi, there are some limitations with Google Colab; after a few hours the session is interrupted. Maybe that is what is going on. You could train the model using a cloud server, for example an EC2 instance on AWS.
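If the session does get interrupted, one option (a sketch, assuming the last checkpoint was written to the default runs folder) is to resume from it:
from ultralytics import YOLO
model = YOLO('runs/detect/train/weights/last.pt')  # checkpoint saved during the interrupted run
model.train(resume=True)  # picks training up from the saved epoch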
@@ComputerVisionEngineer thanks a lot sir. Bless you.
Hi! Many thanks for the tutorial. How about testing? Does the test set have different images and labels, the same way training and validation do?
You are welcome! 🙌 Not sure if I understand, do you mean whether it's possible to specify a test set, besides the train and validation sets?
@@ComputerVisionEngineer yes
@@ComputerVisionEngineer actually do we need to specify a test set to check the accuracy of the model?
@@SalwaZiada Technically, yes, you would need to check the performance of your model on a test set. In practice, I just check the accuracy by looking at how it performs on the validation set; I consider this enough for most projects.
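For reference, a minimal sketch of checking performance on the validation set with Ultralytics (the weights path is a placeholder):
from ultralytics import YOLO
model = YOLO('runs/detect/train/weights/best.pt')
metrics = model.val()       # evaluates on the val split defined in the data yaml used for training
print(metrics.box.map50)    # mAP at IoU 0.5
print(metrics.box.map)      # mAP averaged over IoU 0.5:0.95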
Thank you ❤️❤️
Permission to ask: are the val images the same as the train images?
I use different sets for training and validation 🙌
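If all your images and labels are currently in a single folder, a small sketch of a random split into train and val (the folder names are assumptions matching the structure used in the video):
import os, random, shutil

random.seed(0)
images = [f for f in os.listdir('data/all_images') if f.endswith('.jpg')]
random.shuffle(images)
split = int(0.8 * len(images))  # 80% train / 20% val

for subset, files in [('train', images[:split]), ('val', images[split:])]:
    os.makedirs(f'data/images/{subset}', exist_ok=True)
    os.makedirs(f'data/labels/{subset}', exist_ok=True)
    for name in files:
        shutil.copy(f'data/all_images/{name}', f'data/images/{subset}/{name}')
        label = name.rsplit('.', 1)[0] + '.txt'
        shutil.copy(f'data/all_labels/{label}', f'data/labels/{subset}/{label}')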
Thanks for this tutorial. Please do a video on how to move your model from Colab to PyCharm and still get it working.
Ok I will try to do a video about that. 🙌
Hi! We are creating a system that classifies tomato ripeness levels using image processing with a CNN architecture, the YOLOv8 model. We are using a Raspberry Pi 4 with 4GB RAM and we have encountered a problem: the system has a 2-3 minute delay/lag in classifying the ripeness level. Would you happen to have any recommendation/suggestion, sir, on this problem?
Computing in the cloud, maybe. You can have an external server which is going to classify the tomato; the Raspberry Pi only has to send the image to the server.
How did you solve the issue?
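A minimal sketch of that idea on the Raspberry Pi side, assuming a hypothetical inference endpoint you host yourself (the URL, image filename and response format are placeholders):
import requests

SERVER_URL = 'http://your-server:8000/predict'  # hypothetical endpoint running the YOLOv8 model

with open('tomato.jpg', 'rb') as f:
    response = requests.post(SERVER_URL, files={'image': f})

print(response.json())  # e.g. the predicted ripeness class and confidence returned by your server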
Train Yolov8 custom dataset ❌ very very very very importance✅
😄🙌
Hi sir, I'm a beginner here.
Could you please tell me how to get the yaml file that you mentioned in the video? Thanks.
I did it by just creating one in VS Code, then uploading it to the same folder as the 'data' one.
How do I export this model into VS Code?
VS Code, you mean the IDE? Take a look at my other video on object detection + tracking using YOLOv8; I used PyCharm in that video but it's pretty much the same.
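The gist is the same in any IDE: download the trained weights from Colab/Drive and load them locally. A sketch, where best.pt is the default name Ultralytics saves and the image filename is a placeholder:
import cv2
from ultralytics import YOLO

model = YOLO('best.pt')                        # weights downloaded from runs/detect/train/weights/ in Colab
results = model('some_image.jpg')              # run detection locally on an image
cv2.imwrite('output.jpg', results[0].plot())   # save the image with the predicted boxes drawn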
Thank you
You are welcome! 🙌
Please do a video on how to train a YOLO-NAS object detection model on a custom dataset. Thank you
I will try to.
Thank you for helping other students and engineers
thank you so muchhhhhhhhhhhhhh.
You are welcome!! 😃🙌
I don't know what you took before the video, but I want some too.
This guy is trained on a "very very very" dataset. 😂
😂😂
Hello guys, there is a BIG MISTAKE IN THE VIDEO - this guy made an incorrect link to the yaml file. Here is the correct way of linking the yaml:
results = model.train(data=os.path.join(ROOT_DIR, "/content/gdrive/My Drive/Datadata/config.yaml"), epochs=20)
instead of linking like he does - without writing the directory to the yaml file, typing only the name of the yaml.
I guess that "ROOT_DIR" is already the directory.
The yaml file should be in the same folder as the 'data' one.
Like:
data
config.yaml
So there is no need to put the directory again.
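In other words, with config.yaml sitting next to the data folder inside ROOT_DIR, the join only needs the filename. A sketch matching the layout above (the Drive path is a placeholder for your own project folder):
import os
from ultralytics import YOLO

ROOT_DIR = '/content/gdrive/MyDrive/Object_Detection_project'  # placeholder project folder on Drive
model = YOLO('yolov8n.pt')
results = model.train(data=os.path.join(ROOT_DIR, 'config.yaml'), epochs=20)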
I am a student from Sri Lanka. Can you please let me know how to download the data from Open Images? I don't have money. Sir, can you please help me?
🙄
Thank you so much for the tutorial. I have trained the model and it gives satisfying results on images that I now give it. Please tell me (or please make a video about this) the process of how I save that model and use it in a mobile application in Python that uses the model to detect the ingredients and then processes that list of ingredients to generate further results. Also, which platforms should I use for the coding and such? Please let me know, I have a project to complete by the end of this week.
Really helpful video! I just ran into one problem. When I run the 5th cell, there's an error saying "NotImplementedError: A UTF-8 locale is required. Got ANSI_X3.4-1968". Please help me.
Try restarting the runtime and running the cell again.
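If restarting doesn't help, a workaround that is often suggested for this Colab locale error (an assumption on my part, not something from the video) is to patch the locale at the top of the notebook before anything else runs:
import locale
locale.getpreferredencoding = lambda *args: "UTF-8"  # force a UTF-8 locale in the Colab runtime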
you are beautiful
Thank you for everything you do with this channel, it's really awesome. I only have one thing I want to ask you: is it OK to send you an email?
Thank you for your support! You can send me an email, but sometimes I am a little too busy to reply, I receive too many emails. If you have a question about the tutorials please post it on Discord. 😃🙌
How do I make it detect on a video?
Try with this code: github.com/computervisioneng/train-yolov8-custom-dataset-step-by-step-guide/blob/master/local_env/predict_video.py 🙌