Track & Count Objects using YOLOv8 ByteTrack & Supervision

  • Published Jan 31, 2025

COMMENTS • 374

  • @omoyeyeemmanuel4238
    @omoyeyeemmanuel4238 10 months ago +12

    your code comes with the error of numpy between float, int and double

    • @Roboflow
      @Roboflow  10 months ago +8

      I recommend using the updated version of the notebook: colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-track-and-count-vehicles-with-yolov8-and-supervison.ipynb

    • @surplusTv1
      @surplusTv1 10 months ago

      thanks
      @@Roboflow

    • @mariacaleropereira2967
      @mariacaleropereira2967 9 months ago +1

      @@Roboflow Does that notebook solve the error? Thank you!

    • @sportsyard
      @sportsyard 9 months ago +3

      Is there any other updated notebook because this one is also throwing some error

    • @Nasarae
      @Nasarae 8 months ago

      ​@@mariacaleropereira2967 The notebook works perfectly :)

  • @ditya3548
    @ditya3548 1 year ago +8

    26 minutes for this is not long at all. Thank you for what you do and please don't hesitate to make longer videos, however you see fit.

    • @Roboflow
      @Roboflow  1 year ago +1

      My pleasure!

    • @katanshin
      @katanshin 1 year ago

      It goes to show how streamlined this stuff has become. Try doing a PhD in this ten years ago and having to write your own code for everything AND the novel parts you're working on. Takes months and hours to explain. Now anyone can git clone and run complex models. What a world :)

    • @ditya3548
      @ditya3548 1 year ago

      @@katanshin Truly!

  • @AsadAli-b9u
    @AsadAli-b9u 1 year ago +10

    The best and complete tutorial for implementing YOLOV8 based object detection, tracking and counting system. Love it brother

    • @Roboflow
      @Roboflow  1 year ago

      That’s what I strived for! Great to hear you liked it so much 🔥

    • @ashishreddy2634
      @ashishreddy2634 1 year ago

      How can I count the bounding boxes for a set of images ( not a video) in this case ( using a pre trained yolov8 model with only 1 class)

    • @AsadAli-b9u
      @AsadAli-b9u 8 months ago

      ​@@ashishreddy2634 Are you trying to detect specific class?

  • @bramantyowikantyoso1
    @bramantyowikantyoso1 1 year ago

    Thank you so much.. I have Zero experience on this matter but following each of your instruction and I did finish my project with my own video.. Super!

  • @CurrentFactNews1
    @CurrentFactNews1 1 year ago +1

    You deserve more subscribers and likes ! Cool guy and straightforward 💛

    • @Roboflow
      @Roboflow  1 year ago

      I hope we will get 50k subs this year! 🤞🏻

    • @CurrentFactNews1
      @CurrentFactNews1 1 year ago

      @@Roboflow Guys show your love for this dedicated Gentleman by subscribing and liking his content.

  • @MrCantyousea
    @MrCantyousea 2 years ago +12

    Always the best content with very clear explanations... You are perfect, bro!

  • @aaron1uk
    @aaron1uk 1 year ago +2

    Fantastic tutorial, playing around with plenty of the options here, thanks for the upload.

    • @SkalskiP
      @SkalskiP 1 year ago

      Hi it is Peter from the video 👋Thanks a lot! Let us know what other feature could be useful ;)

  • @akhileshsharma5067
    @akhileshsharma5067 11 months ago +5

    Hello Piotr @roboflow, thank you for the video. I have trained my model on 3 different classes. Would it be possible to have the line zone annotator display the count of each class separately rather than the sum of detections across all classes? Can you please help with this?

  • @kozaTG
    @kozaTG 11 months ago +1

    Nice and simple explanation. I am a beginner and I am trying to start with something simpler, like object detection and counting in a picture. How would I go about this?

    • @Roboflow
      @Roboflow  11 months ago

      I think this video will be much more useful for you: ua-cam.com/video/l_kf9CfZ_8M/v-deo.html

  • @ogunserifonargan191
    @ogunserifonargan191 1 year ago

    Thanks for your great work on the supervision library. I modified your line counting algorithm. When counting people from an indoor CCTV camera, the lines stay too short to meet the counting conditions. First I tried the center point instead of the corners of the bounding box, but it became unstable, especially when a person passes through a door, because the center of the rectangle jitters while the object slowly disappears. Finally, I drew a square at the center of the object. It fits my case and generates stable counts.

  • @Jordufi
    @Jordufi 1 year ago +3

    I really need help for one thing.
    How can you show the specific number of cars and trucks that have gone in and out.
    For example:
    3 cars and 1 truck in and
    5 cars and 1 truck out

    • @Roboflow
      @Roboflow  1 year ago +2

      We don't have a dedicated feature yet, but you can build a workaround solution. Create two separate line counters. Filter detections by class, to get car and truck detections and trigger one line counter with car detections and the other with truck detections.
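      A minimal sketch of that workaround (an assumption-laden example, not code from the video: it assumes a recent supervision release where sv.LineZone replaces the LineCounter class used in the tutorial and Detections supports boolean-mask indexing; the line coordinates and COCO class ids are the ones used in the video):

      import supervision as sv

      CAR_CLASS_ID, TRUCK_CLASS_ID = 2, 7  # COCO class ids, as in the video

      # one counter per vehicle type, both placed on the same line
      car_line = sv.LineZone(start=sv.Point(50, 1500), end=sv.Point(3790, 1500))
      truck_line = sv.LineZone(start=sv.Point(50, 1500), end=sv.Point(3790, 1500))

      def count_per_class(detections: sv.Detections) -> None:
          # trigger each counter only with detections of its own class
          car_line.trigger(detections=detections[detections.class_id == CAR_CLASS_ID])
          truck_line.trigger(detections=detections[detections.class_id == TRUCK_CLASS_ID])
          # car_line.in_count / .out_count and truck_line.in_count / .out_count
          # now hold per-class totals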

    • @Jordufi
      @Jordufi 1 year ago +1

      @@Roboflow I will try that, thank you very much!!!

  • @Teleportcamera
    @Teleportcamera 1 year ago +2

    Thank you for the amazing video! Is it possible to invoke yolo8 on every 4th frame (for example), instead of every single frame? And have some kind of other system follow the object in the other 3 frames (to save on resources).

    • @Roboflow
      @Roboflow  1 year ago +1

      Not to my knowledge. You either skip the frame completely or you don't. All of those trackers depend on boxes being generated by the model. That being said, you can try to pass detections to the tracker every 4th frame. It all depends on the input video, but it could still work.
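      A rough sketch of that idea (assuming the Ultralytics Python API and a hypothetical input path; on the skipped frames it simply reuses the last detections rather than running any secondary tracker):

      import cv2
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")            # any YOLOv8 checkpoint
      cap = cv2.VideoCapture("video.mp4")   # hypothetical input path
      DETECT_EVERY_N = 4

      frame_index, last_detections = 0, None
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          if frame_index % DETECT_EVERY_N == 0:
              # run the (expensive) detector only on every 4th frame
              last_detections = model(frame)[0]
          # on the other frames, reuse last_detections (e.g. feed them to the
          # tracker or draw them unchanged) instead of calling the model again
          frame_index += 1
      cap.release()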

  • @hamzaedits5577
    @hamzaedits5577 3 months ago +2

    Bro, you deserve an Oscar.

  • @muhammadumarsotvoldiev8768
    @muhammadumarsotvoldiev8768 1 year ago +1

    Thank you brothers, for your work!

  • @djaadiabdellah9081
    @djaadiabdellah9081 1 year ago

    Just wow!
    Thank you for this great content.

  • @cristianespana4253
    @cristianespana4253 11 months ago +5

    Hi, I get an error in this part of the code: tracks = byte_tracker.update( output_results=detections2boxes(detections=detections), img_info=frame.shape, img_size=frame.shape ); it throws this error: AttributeError: module 'numpy' has no attribute 'float'; can you help, please?

    • @manuelnavarrete4509
      @manuelnavarrete4509 11 months ago

      I have the same error, were you able to solve it?

    • @cristianespana4253
      @cristianespana4253 10 months ago

      @@manuelnavarrete4509 Yes. Before running the code, add this line: !pip install -U numpy==1.23.5 ; afterwards it will ask you to restart the session. Run the code again, this time without reinstalling numpy, and that's it.

    • @kukilp213
      @kukilp213 2 months ago

      np.float has been deprecated. One way to fix it is to change it to np.float64. Make that change in the files yolox/tracker/matching.py and bytetracker.py in the same directory.
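      Besides editing those files, a quick (hacky) workaround some people use is to restore the removed alias before importing the tracker. This is an assumption-laden sketch, not part of the original notebook:

      import numpy as np

      # NumPy >= 1.24 removed the deprecated np.float alias that the original
      # ByteTrack/YOLOX code still references; shim it before importing yolox
      if not hasattr(np, "float"):
          np.float = float  # alternatively, pin numpy==1.23.5 as suggested above

      from yolox.tracker.byte_tracker import BYTETracker  # import after the shim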

  • @sowmiyar6505
    @sowmiyar6505 1 year ago +1

    Thank you so much. The explanation was in-depth.

    • @Roboflow
      @Roboflow  1 year ago +2

      My pleasure!

    • @sowmiyar6505
      @sowmiyar6505 1 year ago

      @@Roboflow by adjusting some resolution and having perfect line counter position,your code is doing great in real-time. 👍

  • @nithinhs7231
    @nithinhs7231 2 years ago +3

    Classical... What a topic.. thanks..

    • @SkalskiP
      @SkalskiP 2 years ago +1

      No. Thank you for watching! ;)

    • @shukkkursabzaliev1730
      @shukkkursabzaliev1730 2 years ago +1

      @@SkalskiP As always, amazing job! One problem I am facing is inside the match_detections_with_tracks function: when the object is not in the frame and the model returns an empty list, this line gives an error: iou = box_iou_batch(tracks_boxes, detection_boxes)
      How can I solve it?

    • @SkalskiP
      @SkalskiP 2 years ago +1

      @@shukkkursabzaliev1730 oh, that code is far from being bullet proof. Would you like me to update notebook to work for those use-cases?

  • @ibal6875
    @ibal6875 2 years ago +3

    How about showing an example of how we can measure dimensions of objects ? Probably needs to use a reference object of known dimensions ?

    • @SkalskiP
      @SkalskiP 2 years ago

      Hi! This is Piotr from the video. This is something that is on my mind for a long time. And yes, having some reference object at least to calibrate measurements would be mandatory.

  • @s4ifbn
    @s4ifbn 1 year ago +1

    Thank you, nicely done. I was wondering, if we use the segmentation model, how can we annotate the segments with supervision?

    • @Roboflow
      @Roboflow  1 year ago +2

      Great question. We have support for segmentation on our road map, but it will take us a bit more time to put it on production.

  • @benyaminasgari7503
    @benyaminasgari7503 2 months ago +2

    Thank you for the video,
    however I ran into an error regarding 'loguru' when running the ByteTrack code, and no matter what I can't solve it

    • @Roboflow
      @Roboflow  2 months ago

      Did you use the latest version of our notebook? colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-track-and-count-vehicles-with-yolov8-and-supervison.ipynb

  • @SkiLLFace360
    @SkiLLFace360 2 years ago +2

    I really enjoyed the last episodes, very well and comprehensibly explained! Thanks!
    Would it be possible to make a video about rotated object detection in YOLOv8? Would be very useful.

    • @SkalskiP
      @SkalskiP 2 years ago

      Hi, it is Peter from the video! Thanks for the kind words. It means a lot to me. Is YOLOv8 capable of rotated object detection?

    • @SkiLLFace360
      @SkiLLFace360 2 years ago

      @@SkalskiP Hm you are probably right, rotated detection doesn't exist yet.
      Thought I just overlooked it..
      Thanks for the answer!

    • @SkalskiP
      @SkalskiP 2 years ago +1

      @@SkiLLFace360 no worries it is kind of my job to know it ;)

  • @vcarvewood4545
    @vcarvewood4545 2 years ago +1

    Piotr is a super-duper ultra YOLO guru :D

    • @SkalskiP
      @SkalskiP 2 years ago

      It's Peter from the video. I'm not sure if I'm a YOLO guru, but thanks a lot for this kind comment. I went through a bit of internet hate lately, so it is great to hear some positive feedback.

  • @leenaltwayan4004
    @leenaltwayan4004 1 year ago +5

    Great video! However I tried implementing it with more than one counter (one for each lane) but it seems that LineCounter is a global variable shared across all other lanes. is there a way to overcome this?
    Thank you!

  • @muhammadsaqib453
    @muhammadsaqib453 14 days ago

    Very impressive and recommended video.

  • @hankling8963
    @hankling8963 1 year ago +1

    Nice job ! love from china❤

    • @Roboflow
      @Roboflow  1 year ago

      Hi, it is Peter from the video! Thanks a looot! Love from Poland.

  • @DanielParker047
    @DanielParker047 1 year ago +2

    Brother, I watched your object detection for a custom dataset video, it's awesome. I trained with my own dataset and it works like magic. Now, if I want to calculate the time an object appears in a video, how can I do that? Then, is it possible to do the same for different objects and plot them as a graph with time on the y-axis and the type of object on the x-axis?

    • @Roboflow
      @Roboflow  1 year ago +1

      Hi. Thanks a lot. We are actually thinking about making video like that. I hope we will be able to record it soon.

    • @DanielParker047
      @DanielParker047 1 year ago

      @@Roboflow 😍 Thanks brother! Waiting for that video... ⏳

  • @matteocarlone6503
    @matteocarlone6503 2 years ago +4

    Could you please do a tutorial about using yolo v8 real time on a webcam, even the pc webcam

    • @Roboflow
      @Roboflow  2 years ago

      Hi! Could you please add that idea here: github.com/roboflow/notebooks/discussions/categories/video-ideas?

    • @neeraj.kumar.1
      @neeraj.kumar.1 2 years ago +3

      YOLOv8 detection + tracking + counting on webcam?

    • @SkalskiP
      @SkalskiP 2 years ago

      @@neeraj.kumar.1 hi, I'll think about it. Next video coming soon :)

  • @FatemehZaremehrjardi
    @FatemehZaremehrjardi 1 year ago +1

    Thank you so much for the video. what's the difference between this notebook and using "yolo track model=path/to/best.pt tracker="bytetrack.yaml"" ?

    • @Roboflow
      @Roboflow  1 year ago +2

      Hi! That video was actually recorded before the YOLOv8 team added tracking capability. But in short: you can use ByteTrack with any object detection model, whereas if you use the Ultralytics implementation you are bound to YOLOv8 only.
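      For comparison, the built-in Ultralytics route looks roughly like this (a sketch, assuming the current ultralytics Python API and a hypothetical video path):

      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      # convenient, but tied to the YOLO models shipped with ultralytics
      for result in model.track(source="vehicles.mp4", tracker="bytetrack.yaml", stream=True):
          boxes = result.boxes  # boxes.id holds the tracker ids (may be None)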

  • @777sukumar
    @777sukumar 1 year ago +1

    Thank you for the video. It's really helpful. Is there any way to detect time stamp in the video to capture at what time Vehicle crosses the count line. It will be a great help.

    • @Roboflow
      @Roboflow  1 year ago

      Thanks a lot. Is that static file or stream?

    • @777sukumar
      @777sukumar 1 year ago

      @@Roboflow Thank you for your reply. Stream. Recorded footage of traffic with timestamp in it when it is recorded. It's similar to the Video used in your explanation.

  • @neeraj.kumar.1
    @neeraj.kumar.1 2 years ago +3

    Bro, I'm getting a problem whenever I'm installing supervision in G-Drive.
    Please let me know how to solve this problem

  • @leonyap27
    @leonyap27 5 months ago +1

    @Roboflow may I know why I can't download or play the video? I managed to run the full code without errors, sv version 0.18.0

    • @Roboflow
      @Roboflow  5 months ago

      Hi, you mean you can't download the video from Colab? Could you be a bit more specific?

  • @nhanduong5917
    @nhanduong5917 1 year ago +1

    from 7:49, the notebook from the link in the description doesn't have those lines, so where can I copy them to paste? Thank you!

    • @Roboflow
      @Roboflow  1 year ago

      I just checked. The line definition is there.

    • @nhanduong5917
      @nhanduong5917 1 year ago

      @@Roboflow I'm sorry but I don't understand! Could you please reply me with the link?

  • @joaopedrosantosmatos9177
    @joaopedrosantosmatos9177 1 year ago

    Awesome awesome awesome! Thank you for the excellent work

  • @atomix_2402
    @atomix_2402 11 months ago +1

    Hello Piotr @roboflow, I'm so very thankful for this insightful video. I just wanted to know how you come up with the coordinates for the custom dataset: is there a method, or just intuition?

    • @Roboflow
      @Roboflow  11 months ago

      Not really sure what you mean. Could you elaborate on your question?

    • @atomix_2402
      @atomix_2402 11 months ago

      @@Roboflow What I meant is: you draw out polygons for the polygon zone or line zone. How do you do that, i.e. the exact numbers in the numpy array? You also showed a project for candy counting and tracking on a conveyor belt. I couldn't find your video, so I found a similar one on YouTube, made a dataset and trained it, but after that I couldn't work out the coordinates for the "line" such that if the candy crosses the line it is counted in and the count increases. So basically, to sum it up: how does one calculate the numpy array for the polygon zone?

  • @madeshprasadc2551
    @madeshprasadc2551 2 years ago +1

    Thank you for this tutorial, it helps us a lot

    • @Roboflow
      @Roboflow  2 years ago +1

      This is great to hear! 💜 I was hoping for such a positive feedback

  • @aerogrampur
    @aerogrampur 8 months ago

    appreciate the elaborate explanation. Can we tag each of those objects with unique id? like car1, car2 ...etc

  • @lofihavensongs
    @lofihavensongs 2 years ago +2

    Hey there, thanks for the amazing YOLOv8 videos. I ran the code for object detection and it worked fine. Then I tried to run it for instance segmentation. All steps are fine, but in the final step, when I run the code for inference with the custom model, the code runs without any issue yet this message does not appear: Results saved to runs/segment/predict2. Do you know what the problem is?

    • @Roboflow
      @Roboflow  2 years ago

      Could you create issue here: github.com/roboflow/notebooks/issues ?

    • @lofihavensongs
      @lofihavensongs 2 years ago +1

      @@Roboflow Hi, I found the error: in the code you should write save=True, but you forgot it, I guess. Thanks

    • @Roboflow
      @Roboflow  2 years ago +1

      @@lofihavensongs thanks a lot! Let me try to update that

  • @vishnum7985
    @vishnum7985 2 years ago +2

    Thanks. Can you tell me which tracking algorithm works better - ByteTrack or DeepSort

    • @SkalskiP
      @SkalskiP 2 years ago +1

      Hi it's Peter from the video. I like ByteTrack a lot more.

  • @gnavarrolema
    @gnavarrolema 1 year ago +1

    Great video tutorial. Thank you!!!

  • @NetoFreitass
    @NetoFreitass 1 year ago +1

    Great video! How do I customize the counter? For example, position it in the corner of the screen, count cars, trucks, and motorcycles with their own counters? Thank you!

  • @m.shokarim9617
    @m.shokarim9617 1 year ago +2

    Thanks for the video, it has been quite useful! I want to export the Tracking data as a CSV file. Specifically, I want to run the MOT evaluation toolset in order to evaluate my own dataset. Thus, I was wondering how I could correctly export each objects detection, its bounding boxes, confidence and so on for each frame. Any help would be greatly appreciated :))

    • @Roboflow
      @Roboflow  1 year ago +1

      We will actually release a new video this week. It will be about detection time analysis, and in that video we will show you how to save detections as CSV. Stay tuned.

    • @m.shokarim9617
      @m.shokarim9617 1 year ago

      Thank you very much) You guys are really being helpful with your videos.@@Roboflow

    • @m.shokarim9617
      @m.shokarim9617 1 year ago

      Any news on the new video so far? I am really struggling to make sense of analyzing the ByteTrack on the MOT toolset. The codebase that ByteTrack provides is just so faulty and has zero guidance@@Roboflow

  • @danushkabandara
    @danushkabandara 2 years ago +2

    thanks for the video. I noticed that even with a clear view of all the vehicles, you still lose track of the truck and it gets a new id. Is there a way to limit the number of ids that the objects get so that this doesn't happen? For example, you only have 4 possible labels during the video and the algorithm has to select the most likely label when tracking.

    • @SkalskiP
      @SkalskiP 2 years ago +1

      It is possible to solve those issues, or at least make them less frequent, but potential solutions are usually strictly tied to the use case you are trying to solve. In our case you can notice that those id changes happen only when cars are still far away or when they are partially occluded by the large metal object hanging over the left lane. That's why I would propose to discard objects that are in the top half of the image and only take into account those in the bottom half, i.e. closer to the camera.
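      A small sketch of that filtering step (assuming a supervision version whose Detections supports boolean-mask indexing; with the older API from the video you would use detections.filter(mask=mask, inplace=True) instead):

      import supervision as sv

      def keep_lower_half(detections: sv.Detections, frame_height: int) -> sv.Detections:
          # drop boxes whose bottom edge lies in the far-away top half of the frame
          mask = detections.xyxy[:, 3] > frame_height / 2
          return detections[mask]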

  • @MrT12359
    @MrT12359 1 year ago +1

    Excellent video 👌

    • @Roboflow
      @Roboflow  1 year ago +1

      Thanks a lot! 🙏🏻 make sure to try Supervision!

  • @heetshah5718
    @heetshah5718 1 year ago +1

    Great work, it helps in understanding the topics better. I have one question: can I use the same code for an image dataset?

    • @Roboflow
      @Roboflow  1 year ago

      Thanks a lot 🙏🏻 Could you explain a bit more?

    • @heetshah5718
      @heetshah5718 1 year ago +1

      @@Roboflow I am working on Water Pollution Detection Project and I have a dataset of images of different types of pollution, my goal of this project is that I need to train Yolov8 model on that dataset and model should be able to classify the type of pollution.

    • @Roboflow
      @Roboflow  1 year ago

      @@heetshah5718 YOLOv8 has support for classification, but it is most likely not the best model you could use.

    • @heetshah5718
      @heetshah5718 1 year ago

      @@Roboflow Can you suggest which models should I use and Can I use this same code for image dataset as well?

  • @tomaszbazelczuk4987
    @tomaszbazelczuk4987 2 years ago +2

    really good stuff!!!

  • @anadianBaconator
    @anadianBaconator 2 years ago +2

    fantastic!! Would really like to know if this will work for live rtsp url (multiple different camera's) in real-time

    • @Roboflow
      @Roboflow  2 years ago +1

      We would need to try out, but I think it will :)

    • @anadianBaconator
      @anadianBaconator 2 years ago +1

      @@Roboflow let us know if you guys try it out. Enjoying the videos

    • @Roboflow
      @Roboflow  2 years ago +1

      @@anadianBaconator maybe we will manage to include it in one of our upcoming videos

    • @anadianBaconator
      @anadianBaconator 2 years ago

      @@Roboflow really appreciate it

  • @LubnaObaid
    @LubnaObaid 1 year ago +1

    Thank you very much for this useful video, Quick question: can we draw multiple lines for counting in and out traffic over an intersection ?

    • @Roboflow
      @Roboflow  1 year ago +1

      Yes! You can have multiple lines counting moving objects in different areas of the frame ;)
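      A minimal sketch of that setup (assuming a recent supervision release with sv.LineZone; the coordinates below are made up):

      import supervision as sv

      line_a = sv.LineZone(start=sv.Point(100, 600), end=sv.Point(900, 600))
      line_b = sv.LineZone(start=sv.Point(1000, 600), end=sv.Point(1800, 600))

      def update_counters(detections: sv.Detections) -> None:
          # inside the frame loop, after tracking: each line keeps its own counts
          line_a.trigger(detections=detections)
          line_b.trigger(detections=detections)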

    • @LubnaObaid
      @LubnaObaid 1 year ago +1

      Also i want to ask if it is possible to export vehicles tracks ( position on each frame) on a separate excel/ csv file

    • @Roboflow
      @Roboflow  1 year ago +1

      @@LubnaObaid yes, it is possible to do it in Python, but we do not have any tutorial showing how to do it.

    • @LubnaObaid
      @LubnaObaid 1 year ago

      @@Roboflow Is there any specific library or command that you recommend to look for?

    • @Roboflow
      @Roboflow  1 year ago +1

      @@LubnaObaid last time when I did that I used a regular Python CSV package
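      A rough sketch of that approach (assuming tracked supervision Detections with xyxy and tracker_id arrays; the file name and column layout are arbitrary):

      import csv
      import supervision as sv

      def append_rows(rows: list, frame_index: int, detections: sv.Detections) -> None:
          # one row per tracked box: frame number, tracker id, box corners
          for box, tracker_id in zip(detections.xyxy, detections.tracker_id):
              x1, y1, x2, y2 = box
              rows.append([frame_index, int(tracker_id), float(x1), float(y1), float(x2), float(y2)])

      def write_csv(rows: list, path: str = "tracks.csv") -> None:
          with open(path, "w", newline="") as f:
              writer = csv.writer(f)
              writer.writerow(["frame", "tracker_id", "x1", "y1", "x2", "y2"])
              writer.writerows(rows)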

  • @justin_richie
    @justin_richie 2 years ago +1

    Is there a way to get rid of the OUT or IN so it's just one label on the video?

    • @Roboflow
      @Roboflow  2 years ago

      So only show counters?

    • @justin_richie
      @justin_richie 2 years ago +1

      @@Roboflow only to show "Out", or just the counters

    • @Roboflow
      @Roboflow  2 years ago

      @@justin_richie it is not possible now but feel free to create feature request in supervision repo: github.com/roboflow/supervision/issues/new?assignees=&labels=enhancement&template=feature-request.yml

  • @pedrofonte9531
    @pedrofonte9531 1 year ago +1

    I have one question: Since we are trying to count the objects and since the Object's id given by the tracker are unique, why can't we just count the last Id or count the different number of ids?

    • @Roboflow
      @Roboflow  1 year ago

      How do you know how many of them traveled up and how many down?

  • @mvcko5296
    @mvcko5296 1 year ago +1

    Is there any easy way to count objects on pre-predicted images? And print the results in the terminal. I am having trouble finding a solution on the internet.

  • @ChristoforosAristeidou
    @ChristoforosAristeidou 1 year ago +1

    What if i want to count objects of a specific class only? how can this be implemented? Let's say that we want to count only "car" class and not "truck" also? Can this be done?

    • @Roboflow
      @Roboflow  1 year ago

      sure! you just need to filter detections before passing them through the line counter. You would need to add this line before triggering:
      detections = detections[detections.class_id == YOUR_CLASS]

    • @ChristoforosAristeidou
      @ChristoforosAristeidou 1 year ago

      @@Roboflow well i get error: TypeError: 'Detections' object is not subscriptable
      Maybe if i use this filtering before the line counter??
      #class to filter
      mask = np.array([class_id in CLASS_ID for class_id in detections.class_id], dtype=bool)
      detections.filter(mask=mask, inplace=True)
      Let's say instead of CLASS_ID to use [1] so i can keep only class 1

  • @MayHtikeSwe
    @MayHtikeSwe 1 year ago +1

    What camera did you use to see the chocolates going?

    • @Roboflow
      @Roboflow  1 year ago +1

      This is stock footage I downloaded from internet :)

  • @shaunjohnson4484
    @shaunjohnson4484 1 year ago +1

    Thank you for the video! What is the specs of your computer? I want to calculate how long it would take to execute this computer vision method on a jetson Nano

    • @Roboflow
      @Roboflow  1 year ago

      I was doing this experiment on google colab. You are pretty much bound to performance of YOLOv8 on Nano. With small model it should be close to real time.

  • @rafael.gildin
    @rafael.gildin 1 year ago +1

    Does the same code work for crowd videos? I've been failing to do it.
    Thanks.

    • @Roboflow
      @Roboflow  1 year ago

      It should. But I’d need to see specific result to understand what’s failing.

  • @Factopia4-7
    @Factopia4-7 2 months ago +1

    24:23 AttributeError: 'str' object has no attribute 'model' 😢

    • @Roboflow
      @Roboflow  2 months ago

      Did you use the latest version of our notebook? colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-track-and-count-vehicles-with-yolov8-and-supervison.ipynb

  • @m.tsaqifwismadi4625
    @m.tsaqifwismadi4625 1 year ago

    Great one! very thorough well explained

  • @AIEasySolutions
    @AIEasySolutions 8 months ago

    Thank you very much, really appreciate it! I applied it to my custom video and it does not count correctly. I saw that in your video it also does not count correctly; how can we improve it?

  • @tolgaisk8539
    @tolgaisk8539 2 years ago +2

    How do we count for each class

  • @UltimatedKevin
    @UltimatedKevin 6 months ago

    Hello!
    I have a question, how does the model interpret the "out" variable in the candy example? Can it make the difference between if the object is moving to the right or left? Because of how the bounding box is approaching the line?
    And thank you so much for creating this content!

  • @yemibidemi-o5t
    @yemibidemi-o5t 1 year ago +1

    Thank you for this video, it's very explanatory. However, the supervision library has been updated, so these codes don't work anymore. I tried to get all those supervision utils from the documentation with little success after a couple of hours. Could you please , make a video dedicated to supervision library alone and where to find those functions and classes and what each one is used for. That will be very helpful. Thank you once again.

    • @Roboflow
      @Roboflow  1 year ago +1

      Take a look here: github.com/roboflow/notebooks/pull/190 it is a PR that updates our vehicle counting notebook to supervision 0.13.0.

  • @blessingagyeikyem9849
    @blessingagyeikyem9849 1 year ago +1

    How do I get the specific time stamp for which the object was early detected in the video

    • @Roboflow
      @Roboflow  1 year ago +1

      We don’t have time analysis support yet in supervision :/

  • @rafael.gildin
    @rafael.gildin 1 year ago +1

    Great video, thanks a lot

  • @luisdavidviverosescamilla201
    @luisdavidviverosescamilla201 1 year ago +1

    Hi, I have a question: in this case you don't use the DeepSORT technique for tracking the cars, do you?

    • @Roboflow
      @Roboflow  1 year ago

      I use ByteTrack. DeepSORT is just another tracker that you can use.

  • @SunEside
    @SunEside 1 year ago +1

    How can I count cars with two diagonal lines instead of horizontal lines?? Please teach me how to do this

    • @Roboflow
      @Roboflow  1 year ago

      You just need to create two lines and change coordinates of start and end :)
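      For example (a sketch, assuming a recent supervision release with sv.LineZone; the coordinates are made up), a diagonal counting line is just a line whose start and end differ in both x and y:

      import supervision as sv

      diagonal = sv.LineZone(start=sv.Point(200, 900), end=sv.Point(1600, 300))
      # inside the frame loop, after tracking:
      # diagonal.trigger(detections=detections)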

  • @silakanveli
    @silakanveli 2 years ago +1

    Excellent! Just something I was looking! Thanks Roboflow.
    What was the fps?

    • @Roboflow
      @Roboflow  2 years ago +1

      We hope you will build something cool using supervision pip package ;)

  • @snehitvaddi
    @snehitvaddi 1 year ago +1

    Suuperb... What if I want to detect and track the faulty chocolates in that video and mark the chocolate faulty until it leaves out the frame? Any thoughts on this?

    • @Roboflow
      @Roboflow  1 year ago

      Do you have a model to detect those faults?

    • @snehitvaddi
      @snehitvaddi 1 year ago

      ​@@Roboflow No, currently I have a model to detect potatoes on a conveyer belt. For detecting defects I'm thinking of using OpenCV to detect color deviations.
      My problem is since potatoes keep rotating on the conveyer belt, I want to track the defective potato even if it keeps rolling.

    • @snehitvaddi
      @snehitvaddi 1 year ago +1

      Hey Peter!
      Any thoughts on this? And also, Just now saw your video on Grounding DINO it looks interesting. What are your thoughts on using it to detect rotten/spoiled potatoes as explained in earlier comments.

    • @Roboflow
      @Roboflow  1 year ago +1

      @@snehitvaddi sorry I missed your comment. If you have images of rotten potatoes you can try if DINO detect it. Sounds like something that should work. Color range is doable as well, just pretty hard to get right color ranges I think :/

  • @ikramessafi9560
    @ikramessafi9560 8 months ago

    Thank you ,Could you please explain how to count objects detected in images?

  • @a1mae
    @a1mae 4 months ago

    our project is to detect and count the object on the captured photo. can we follow this tutorial? or is there other more applicable tutorial we can follow

  • @situ1984
    @situ1984 1 year ago +1

    show_frame_in_notebook is not working in google colab so i am unable to see the frame

    • @Roboflow
      @Roboflow  1 year ago

      Could you create issue here: github.com/roboflow/notebooks ? I will try to fix that as soon as possible.

  • @aliraxavlogs1156
    @aliraxavlogs1156 1 year ago +1

    What do I do if I only want to detect and track the trucks in the video

    • @Roboflow
      @Roboflow  1 year ago +3

      Hi! 👋You would need to filter out detections by class_id. First, you need to check what is the class_id that represents the truck. I checked that and looks like it is 7. Now you can do something like that: detections = detections[detections.class_id == 7]. :)

  • @himanshubhende3407
    @himanshubhende3407 1 year ago +1

    The information is very simple and explained very clearly. Can you please provide the colab link of Candy detection.

    • @Roboflow
      @Roboflow  1 year ago

      Thanks a lot! It is exactly the same Colab. The only difference is a different model and a different video. Code-wise it is the same.

  • @Jkfyr99
    @Jkfyr99 2 years ago +1

    Amazing, I learned so much and it helped me as well! Do you know if it is possible to use detections from Detectron2 instead of YOLOv8?

    • @SkalskiP
      @SkalskiP 2 years ago +1

      Hi, it's Peter from the video 👋 Tomorrow we will release a second video showing new Supervision features. I have a Detectron2 example for you.

    • @Jkfyr99
      @Jkfyr99 2 years ago +1

      @@SkalskiP Really looking forward to it! Your content is amazing!

    • @SkalskiP
      @SkalskiP 2 years ago +1

      @@Jkfyr99 I'm recording right now ;)

  • @g.s.3389
    @g.s.3389 2 years ago +2

    which version of Python3 did you use?

    • @Roboflow
      @Roboflow  2 years ago

      Google Colab is currently at Python 3.8.10

  • @TarasHoloyad
    @TarasHoloyad 1 year ago +1

    Dear friend, Thank you for presenting that great stuff. Is there a way to count the separate types of vehicles crossing the line? Unfortunately, I am not able to handle that, even after creating a separately updated line_counter for each vehicle type inside the for loop. I appreciate any help you can provide.

    • @Roboflow
      @Roboflow  1 year ago

      Hi, could you create a discussion thread here: github.com/roboflow/notebooks/discussions I have a lot of work, but I'll try to help you out.

    • @TarasHoloyad
      @TarasHoloyad 1 year ago +3

      @@Roboflow Thank you for suggesting that - nevertheless, I figured it out. Everything works well after adding the following code for each class I want to detect:
      ## Recognition of class 2 (cars)
      detections = Detections(
          xyxy=results[0].boxes.xyxy.cpu().numpy(),
          confidence=results[0].boxes.conf.cpu().numpy(),
          class_id=results[0].boxes.cls.cpu().numpy().astype(int)
      )
      # Masking of undesired classes
      mask = np.array([class_id in CLASS_ID for class_id in detections.class_id], dtype=bool)
      detections.filter(mask=mask, inplace=True)
      # Tracking of objects
      tracks = byte_tracker.update(
          output_results=ostrukt(detections=detections),
          img_info=frame.shape,
          img_size=frame.shape
      )
      tracker_id = match_detections_with_tracks(detections=detections, tracks=tracks)
      detections.tracker_id = np.array(tracker_id)
      # Extraction of not tracked but recognised objects
      mask = np.array([tracker_id is not None for tracker_id in detections.tracker_id], dtype=bool)
      detections.filter(mask=mask, inplace=True)
      # Labeling of object characteristics
      labels = [
          f"#{tracker_id} {oklassen[class_id]} {confidence:0.2f}"
          for _, confidence, class_id, tracker_id
          in detections
      ]
      # Increasement
      mask = np.array([class_id in [2] for class_id in detections.class_id], dtype=bool)
      detections.filter(mask=mask, inplace=True)
      line_counter_car.update(detections=detections)

  • @nehabhadu4977
    @nehabhadu4977 1 year ago

    For classification into car, bus, truck and motorcycle which one is used ByteTrack or Supervision?
    Additionally, is Bytetrack also used for counting along with tracking? Because supervision is used for annotations.

  • @Reallove555
    @Reallove555 1 year ago +1

    I don't understand how you added numbers to the labels before the class name.
    Now I see:
    tracker_id = match_detections_with_tracks(detections=detections, tracks=tracks)
    labels = [
        f"#{tracker_id} {CLASS_NAMES_DICT[class_id]} {confidence:0.2f}"
        for _, confidence, class_id, tracker_id
        in detections
    ]

    • @Roboflow
      @Roboflow  1 year ago

      Hi! 👋Could you elaborate on the question? Which part is not clear?

  • @shukkkursabzaliev1730
    @shukkkursabzaliev1730 2 years ago +2

    @SkalskiP As always, amazing job! One problem I am facing is inside the *match_detections_with_tracks* function: when the object is not in the frame and the model returns an _empty list_, this line gives an error: *iou = box_iou_batch(tracks_boxes, detection_boxes)*
    How can I solve it?

    • @SkalskiP
      @SkalskiP 2 years ago +1

      Hi it's Peter from the video. I just fixed that problem. Could you try the tutorial once again?

  • @OmarHisham1
    @OmarHisham1 2 years ago +19

    Fun Fact: Tqdm is an arabic word pronounced "Ta-qa-dom", which means progress

    • @SkalskiP
      @SkalskiP 2 years ago +1

      Hi it's Peter from the video! Wow! I didn't know that. Now you made me look and here is what I found: tqdm derives from the Arabic word taqaddum (تقدّم) which can mean “progress,” and is an abbreviation for “I love you so much” in Spanish (te quiero demasiado).

    • @OmarHisham1
      @OmarHisham1 2 years ago

      @@SkalskiP didn't know about the Spanish abbreviation,
      Nice informative tutorial btw

    • @jorgehenriquesoares7880
      @jorgehenriquesoares7880 1 year ago +1

      This is, in fact, fun. Thank you.

    • @vm5954
      @vm5954 1 year ago

      Difficult to install though no module ultralytics🙄

  • @kevj1605
    @kevj1605 1 year ago +1

    Is the code just related to one or two test cases/videos? Is it possible to do it for any video in general?

    • @Roboflow
      @Roboflow  1 year ago

      Oh! It should work for any video you want. I have already seen so many projects built on top of that code demo. Let me know if it works for your case too!

  • @body1024
    @body1024 2 years ago +2

    thank you
    very helpful

    • @Roboflow
      @Roboflow  2 years ago +1

      That's what I wanted to hear!

  • @harqilamiga80
    @harqilamiga80 1 year ago

    hi. can i make a box instead of line, referencing on 17:40? so i want to count an object if that object staying in that box for milliseconds

  • @baharuddindiassaputra6966
    @baharuddindiassaputra6966 1 year ago +2

    Great video. I have a question: in this video the line counter from supervision only increments when the whole prediction box has passed through the line. Can you change it so it triggers on just the bottom edge / top edge?

    • @Roboflow
      @Roboflow  1 year ago

      Ask this question here: github.com/roboflow/supervision Describe what you want to do. We will do our best to help you.

  • @nabi9214
    @nabi9214 1 year ago

    thank you for the tutorial, very easy to understand! I have a question, how do I get the CSV file result to find out the coordinates of the bounding box?

  • @ChirawatNg
    @ChirawatNg 1 year ago +1

    Thank you for a very good explanation.
    I found that YOLOv8 has its own tracking command in both CLI and Python mode.
    I tried CLI mode and it works well, but unfortunately in Python mode the ID always resets to id 1.
    Now I am thinking of using ByteTrack as you did, or do you have any idea of a straightforward way to use YOLOv8 to track objects?
    Thanks,
    Nott

    • @Roboflow
      @Roboflow  1 year ago

      Yeah we have video on YOLOv8 native tracking. Take a look here: ua-cam.com/video/Mi9iHFd0_Bo/v-deo.html

    • @ChirawatNg
      @ChirawatNg 1 year ago +1

      @@Roboflow thank you

  • @NeuralNetwork-go5zn
    @NeuralNetwork-go5zn 1 year ago +1

    hello, first of all congratulations for the clarity in the explanations and the passion you put into it! I'm a beginner but these things you showed in the video fascinate me a lot, I kindly wanted to ask you how can I recreate the code on a local IDE so that I can try it with my pc and not on colab and if there is a way to run it in real time.
    I tried but it gives me a series of errors especially when I try to install ByteTrack on anaconda and I don't know how to fix it..
    any help and explanation is greatly appreciated, thank you very much for your time.

    • @Roboflow
      @Roboflow  1 year ago

      Please create a new discussion thread here: github.com/roboflow/notebooks/discussions describe what is the problem and I'll try to help you :)

    • @NeuralNetwork-go5zn
      @NeuralNetwork-go5zn 1 year ago

      @@Roboflow I did, now I wait, thanks again!

  • @Jkfyr99
    @Jkfyr99 1 year ago +1

    I have been working with ByteTrack for a bit now, but I have struggled with evaluating its tracking performance. Do you know if it is possible to check the tracking performance of the individual objects using something like MOT metrics?

    • @Roboflow
      @Roboflow  1 year ago +1

      Yes, it is possible but you would need to have annotated data.

  • @jordanlee2839
    @jordanlee2839 11 months ago +1

    very helpful video, but my code kept having an error on the line [ tracks = byte_tracker.update( ] saying "AttributeError: module 'numpy' has no attribute 'float'". Plus, when I use the Google Colab link in the description, I ran the ByteTrack cell and it encounters the same error, even though I didn't change any code. I left it as is and it kept having the same error, so is it an update issue? Can you please try and run your code in the Google Colab in the description you gave? Because it's seriously not working even though I didn't change any code

    • @Roboflow
      @Roboflow  11 months ago +1

      I recommend using this updated version of the notebook: colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-track-and-count-vehicles-with-yolov8-and-supervison.ipynb

    • @jordanlee2839
      @jordanlee2839 11 months ago

      @Roboflow thx a lot, I'll be sure to try this tomorrow

    • @atomix_2402
      @atomix_2402 11 months ago

      @@Roboflow is there a lot of difference in two methods ?

  • @MaríaRobertaDevesa
    @MaríaRobertaDevesa 1 year ago

    hi! thanks ! its v useful. Can it be applied on cellphones ? like an android or IOS app?

  • @ItzMapJr
    @ItzMapJr 1 year ago

    I have followed the code provided and the program runs well as shown in the video. If I want to count vehicles according to each class, such as the number of motorbikes and the number of cars. How and which parts should I change in the code? Thank you

  • @guillermovc
    @guillermovc 2 years ago +2

    Very nice explanation bro, is there any possibility to collaborate on supervision development?

  • @KK-ws9rh
    @KK-ws9rh 1 year ago

    Thanks! I'm still a beginner, so this is very confusing. So I have one question: I can't use 'best.pt', the model trained with yolov7, in this tutorial, right?

  • @caterinafabbri9212
    @caterinafabbri9212 1 year ago

    Hi Peter, thank you very very much for your video and explanation! It was incredibly helpful.
    I have a question: for the counting why do we need the line? Since we have the tracking and each object has an id, should not be enough to count the unique id? Thank you

    • @malotabi5949
      @malotabi5949 1 year ago

      Can we have the car speed and the position (x,y,z)?

  • @bennguyen1313
    @bennguyen1313 3 months ago

    I'd like to use computer-vision + AI, to inspect Printed-Circuit-Boards.. is the best approach one that trains the model on good and bad examples?
    Any thoughts on open source approaches, like PCB-Defect-Detection (YOLOv5, RNCC, etc), PCB-Inspection-OpenCV, versus enterprise tools (KollerFacts Inspection, Intuitive Machines Defect Detection, SVI Defect Analyzer, Cyient Inspection, Mentor Tessent YieldInsight).

    • @Roboflow
      @Roboflow  3 months ago

      Could you tell me more about the potential faults that may occur?

    • @bennguyen1313
      @bennguyen1313 3 months ago

      @@Roboflow Sometimes the component is not perfectly square on the pads.. or even lifted like a tombstone. Other times the defect is a bit harder to spot, like cold solder, or too much solder that there's a short between pins!

  • @alielbahi4614
    @alielbahi4614 2 years ago +2

    can you make a full raspberry pi project utilizing a model trained on roboflow

    • @SkalskiP
      @SkalskiP 2 years ago +3

      Hi it's Peter from the video! Perfect timing! I ordered my Raspberry pi yesterday. So stay tuned because that video is coming soon!

    • @alielbahi4614
      @alielbahi4614 2 years ago +2

      @@SkalskiP Great, please include as much details as possible

    • @SkalskiP
      @SkalskiP 2 years ago +2

      @@alielbahi4614 I deployed models like that on NVidia Jetsons, but never on Raspberry, so that tutorial will be most likely zero to hero tutorial :)

    • @alielbahi4614
      @alielbahi4614 2 years ago +1

      @@SkalskiP still waiting :)

    • @SkalskiP
      @SkalskiP 2 years ago

      @@alielbahi4614 I'm sure it is coming. We have done a blog post about Raspberry Pi deployment, so we will most likely add a video. That's what we usually do. Stay tuned. Sorry it takes so long.

  • @zaskilovan
    @zaskilovan 1 year ago

    Good video. Can you help me: how can I add my custom objects to a pre-trained dataset?

    • @Roboflow
      @Roboflow  1 year ago

      You would need to train your own custom model to add new classes.

    • @zaskilovan
      @zaskilovan 1 year ago +1

      Sorry. I didn't express myself correctly. Can I add my classes to the pre-trained weights. For example, I have my own class Y, and I want the model to recognize both classes with coco dataset and class Y.

    • @Roboflow
      @Roboflow  1 year ago +1

      @@zaskilovan it is possible but would require retraining model too.

  • @minhonvungoc117
    @minhonvungoc117 1 year ago +2

    Thanks for your interesting video. Could you make a video to compare YOLOv8, YOLOv7, YOLOv6 for object detection and object tracking? That would be great!!!

    • @Roboflow
      @Roboflow  1 year ago

      Interesting idea! Do you think it is worth comparing them? They are all super close regarding accuracy and speed. What sort of benchmark are you mostly interested in?

  • @johnton96
    @johnton96 4 months ago

    super nice video, but probably an update would be amazing since a lot has changed in the repository, right?

  • @qrubmeeaz
    @qrubmeeaz 1 year ago +1

    Could you please update this for the new version of Supervision?

    • @Roboflow
      @Roboflow  1 year ago

      You mean notebook or the whole video?

    • @qrubmeeaz
      @qrubmeeaz 1 year ago

      @@Roboflow Oh just the notebook would be more than sufficient. It looks like structure of the Supervision library has changed since you posted this, and the notebook doesn't work with the latest version of sv. Many thanks 🙂

  • @luisdavidviverosescamilla201

    Hi, I have a question: what happens if, in this case, I would like to put a vertical line in the middle of the image? How can I solve this? In your video you don't show how to obtain the values that you use to trace the counting line.

    • @Roboflow
      @Roboflow  1 year ago +1

      You just need to know your frame dimensions and experiment a bit to find the right fit.
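      A small sketch of how you could read those dimensions and place a vertical line mid-frame (assuming supervision's VideoInfo helper and a recent release with sv.LineZone; the video path is hypothetical):

      import supervision as sv

      video_info = sv.VideoInfo.from_video_path("vehicle-counting.mp4")  # hypothetical path
      mid_x = video_info.width // 2
      vertical_line = sv.LineZone(
          start=sv.Point(mid_x, 0),
          end=sv.Point(mid_x, video_info.height),
      )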

    • @luisdavidviverosescamilla201
      @luisdavidviverosescamilla201 1 year ago

      thanks a lot this comment help me a lot @@Roboflow