How to Choose the Best Computer Vision Model for Your Project

  • Published Oct 1, 2024

COMMENTS • 36

  • @annamaule7333
    @annamaule7333 A year ago +9

    Thank you so much for this video! Very informative, complete, and super on top of everything. It is nice to hear your experience and how it matches what my team and I are going through: keeping hardware, model size, speed (FPS), and mAP in mind. We also ran into the issue of testing YOLOv5 and the repo not being an SDK, which led us to bring the repo in as a submodule, hack around it, and rewrite the predict script because the repo was not built with third-party integration in mind! Very, very good content and very aligned with my personal experience!

    • @Roboflow
      @Roboflow  A year ago

      This is awesome to read! I’m super happy that other people see things a similar way 🔥
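
A minimal sketch of an alternative to the submodule hack mentioned above: YOLOv5 checkpoints can be loaded through torch.hub, which keeps your own code free of the repo's scripts. The checkpoint name, confidence value, and image path below are illustrative.

    # Sketch: loading a YOLOv5 model via torch.hub instead of vendoring the repo.
    import torch

    model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
    model.conf = 0.4                        # confidence threshold (example value)

    results = model("image.jpg")            # accepts paths, URLs, PIL images, numpy arrays
    detections = results.pandas().xyxy[0]   # per-image DataFrame: xmin, ymin, xmax, ymax, confidence, class
    print(detections.head())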

  • @allistech6748
    @allistech6748 A year ago +1

    Hope this video is not inspired by the discussion we had last week. HA HA HA!!!
    Just kidding. Thanks, the video helped a lot.

  • @rodmallen9041
    @rodmallen9041 7 months ago +1

    Badass guidelines... as badass as your look for this video 😎🤘... thanks for sharing

  • @李水欣
    @李水欣 A year ago +1

    Thank you for your video! I am currently struggling with annotating a large dataset (around 40,000 images), so I am considering semi-supervised methods for object detection. But I have no idea how to pick models for the teacher and student stages. Would you have any advice on that?

    • @Roboflow
      @Roboflow  A year ago

      Hi 👋 Make sure to take a look at two of our previous videos: ua-cam.com/video/C4NqaRBz_Kw/v-deo.html and ua-cam.com/video/oEQYStnF2l8/v-deo.html. I hope you will find some inspiration there.
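
A rough sketch of the teacher-student (pseudo-labeling) loop asked about above, assuming the ultralytics package and YOLO-format label files; the checkpoint names, confidence cutoff, and paths are illustrative, not recommendations.

    # Sketch: a large "teacher" labels the unlabeled images; its high-confidence
    # predictions become pseudo-labels for training a smaller "student".
    from pathlib import Path
    from ultralytics import YOLO

    teacher = YOLO("yolov8x.pt")   # big, accurate model, ideally fine-tuned on the labeled subset first
    CONF = 0.6                     # keep only confident predictions as pseudo-labels

    image_dir = Path("unlabeled/images")
    label_dir = Path("unlabeled/labels")
    label_dir.mkdir(parents=True, exist_ok=True)

    for image_path in image_dir.glob("*.jpg"):
        result = teacher.predict(source=str(image_path), conf=CONF, verbose=False)[0]
        lines = [
            f"{int(cls)} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"      # YOLO txt format (normalized xywh)
            for (cx, cy, w, h), cls in zip(result.boxes.xywhn.tolist(), result.boxes.cls.tolist())
        ]
        (label_dir / f"{image_path.stem}.txt").write_text("\n".join(lines))

    # Train a smaller, faster student on the labeled + pseudo-labeled data combined.
    student = YOLO("yolov8s.pt")
    student.train(data="combined_dataset.yaml", epochs=100, imgsz=640)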

  • @g.s.3389
    @g.s.3389 A year ago +1

    But in the end, what would you use? Any examples? Ease of use vs. libraries or requirements (i.e. Python 3.11 needed...)?

    • @Roboflow
      @Roboflow  A year ago

      I prefer easy installation and use.

  • @adurks4846
    @adurks4846 A year ago +1

    Performance is the one that always gets us. We still use Scaled-YOLOv4 because it performs better than anything else on our datasets. And that is ignoring all of our legacy code, which makes it difficult to implement newer YOLO models (looking at you, YOLOv8).
    As an aside, does it feel like newer models are more focused on the COCO dataset? Are researchers "gaming" their architectures to target the types of images in COCO (off-nadir, high-resolution, high-fidelity, well-lit scenes, low # of targets) to get to the top of the leaderboards?

    • @Roboflow
      @Roboflow  A year ago +1

      I don’t have any proof of that, but when we tested model fine-tuning on custom datasets we noticed that, very often, models that are better on COCO perform worse on custom datasets. It is an interesting dynamic.
      As for your model: you are willing to make all of those trade-offs and still use YOLOv4. How large is the mAP difference?

    • @adurks4846
      @adurks4846 A year ago +1

      @@Roboflow Sometimes pretty significant, as much as 10-20% when you compare Scaled-YOLOv4 vs. YOLOv5/v8. I will say that part of that is due to our focus: we care much more about recall than mAP. I note that sometimes YOLOv5/v8 get better mAP but worse recall, even if you drop the thresholds.
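
On the recall-vs-mAP point above, a small self-contained sketch of how recall at a fixed IoU can be compared across models and confidence thresholds; boxes are assumed to be xyxy NumPy arrays and the matching is a simple greedy one.

    # Sketch: recall at a fixed IoU threshold with greedy matching.
    # gt and pred are (N, 4) arrays of [x1, y1, x2, y2]; scores are per-prediction confidences.
    import numpy as np

    def iou_one_to_many(box, boxes):
        x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
        x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_a = (box[2] - box[0]) * (box[3] - box[1])
        area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        return inter / (area_a + area_b - inter + 1e-9)

    def recall(gt, pred, scores, conf_thresh=0.25, iou_thresh=0.5):
        pred = pred[scores >= conf_thresh]
        matched = np.zeros(len(pred), dtype=bool)
        hits = 0
        for g in gt:
            if len(pred) == 0:
                break
            ious = iou_one_to_many(g, pred)
            ious[matched] = 0.0          # each prediction may match at most one ground-truth box
            best = int(np.argmax(ious))
            if ious[best] >= iou_thresh:
                matched[best] = True
                hits += 1
        return hits / max(len(gt), 1)

    # Example: sweep confidence thresholds to see where each model's recall drops off.
    # for t in (0.10, 0.25, 0.50):
    #     print(t, recall(gt_boxes, pred_boxes, pred_scores, conf_thresh=t))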

  • @milindchaudhari1676
    @milindchaudhari1676 A year ago

    Hi sir, I'm Milind, working on a fruit detection model as my master's thesis project. I have taken around 300 images of fruits on trees. Now I need to annotate them, but the fruits are occluded by the leaves and overlap with each other as well. Currently no one is guiding me on how to deal with such cases, and I'm getting tense about annotating my images. I would like to seek your guidance on this. Please help me out with a reply...!

  • @mrbot4one
    @mrbot4one A year ago

    What about SAM? How does it compare with these models in terms of accuracy, precision, speed, and model size???

  • @youssefkhaled5331
    @youssefkhaled5331 A year ago

    Thanks for the content. Can I ask how to use a webcam with YOLOv7 in Colab? I tried hard but got nothing. Thanks again.
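
For the webcam question above, a sketch of the usual Colab approach: the notebook cannot read your local camera with cv2.VideoCapture, so a small JavaScript helper grabs a frame in the browser and hands it back to Python, and the saved image is then passed to the yolov7 repo's detect.py. The helper follows the standard Colab camera-capture pattern; the weights file name is illustrative.

    # Sketch: capture one webcam frame in Colab via the browser, then run YOLOv7 on it.
    from base64 import b64decode
    from google.colab.output import eval_js
    from IPython.display import Javascript, display

    def take_photo(filename="photo.jpg", quality=0.8):
        display(Javascript("""
          async function takePhoto(quality) {
            const button = document.createElement('button');
            button.textContent = 'Capture';
            const video = document.createElement('video');
            document.body.appendChild(button);
            document.body.appendChild(video);
            const stream = await navigator.mediaDevices.getUserMedia({video: true});
            video.srcObject = stream;
            await video.play();
            google.colab.output.setIframeHeight(document.documentElement.scrollHeight, true);
            await new Promise((resolve) => button.onclick = resolve);
            const canvas = document.createElement('canvas');
            canvas.width = video.videoWidth;
            canvas.height = video.videoHeight;
            canvas.getContext('2d').drawImage(video, 0, 0);
            stream.getVideoTracks()[0].stop();
            video.remove();
            button.remove();
            return canvas.toDataURL('image/jpeg', quality);
          }
        """))
        data = eval_js(f"takePhoto({quality})")
        with open(filename, "wb") as f:
            f.write(b64decode(data.split(",")[1]))
        return filename

    photo = take_photo()
    # Then, from inside the cloned yolov7 repo:
    # !python detect.py --weights yolov7.pt --conf-thres 0.25 --source photo.jpg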

  • @绍琪樊
    @绍琪樊 A year ago +1

    I was wondering, is there any chance we can convert all the bounding boxes in an image into polygons all at once, and vice versa? If yes, it would be really helpful.

    • @绍琪樊
      @绍琪樊 A year ago +1

      Oh, I saw it. Thanks, really appreciate it!

    • @Roboflow
      @Roboflow  A year ago +1

      @@绍琪樊 Yes, we absolutely can!
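
A tiny sketch of the conversion discussed above, in plain NumPy, assuming xyxy boxes and (N, 2) polygon arrays: each box becomes its four corners, and a polygon collapses back to its min/max extents.

    # Sketch: bounding box <-> polygon conversion for a whole image at once.
    import numpy as np

    def boxes_to_polygons(boxes):
        """(N, 4) xyxy boxes -> list of (4, 2) corner polygons."""
        return [
            np.array([[x1, y1], [x2, y1], [x2, y2], [x1, y2]], dtype=float)
            for x1, y1, x2, y2 in boxes
        ]

    def polygon_to_box(polygon):
        """(M, 2) polygon -> tight xyxy bounding box around it."""
        xs, ys = polygon[:, 0], polygon[:, 1]
        return np.array([xs.min(), ys.min(), xs.max(), ys.max()], dtype=float)

    boxes = np.array([[10, 20, 110, 220], [50, 60, 80, 90]], dtype=float)
    polygons = boxes_to_polygons(boxes)
    recovered = np.stack([polygon_to_box(p) for p in polygons])  # identical to the original boxes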

  • @Mias-v8h
    @Mias-v8h A year ago

    Hi, also from my side, thanks a lot for this and all the other awesome and really helpful videos! I have a question regarding 'my' specific issue (sorry if this is not the right platform to ask): I have a very small dataset (ca. 100 images) that I would like to use for object detection. It's ghost nets on sonar images, visually similar to the concrete-cracks dataset that you used in another video. I tried YOLOv5 with transfer learning and fine-tuning, which already works OK, but I am not sure about it. Would you have a suggestion for what to do? Just experiment with hyperparameters and fine-tuning, use another model, etc.? Thanks a lot in advance! Mia
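
For the small-dataset question above, one common starting point (not the only answer): freeze most of the pretrained network so only the later layers adapt, train longer with early stopping, and lean on augmentation. Sketched here with the ultralytics package rather than the raw yolov5 repo; the dataset file name and every numeric value are assumptions to experiment with, not tuned recommendations.

    # Sketch: fine-tuning a small pretrained detector on ~100 images.
    from ultralytics import YOLO

    model = YOLO("yolov8s.pt")        # a small model; larger ones overfit faster on tiny datasets
    model.train(
        data="ghost_nets.yaml",       # hypothetical dataset config: train/val paths + class names
        epochs=300,                   # small datasets usually need more epochs; watch validation mAP
        imgsz=640,
        freeze=10,                    # keep the first 10 (backbone) layers frozen
        fliplr=0.5,                   # horizontal flips
        degrees=10.0,                 # slight rotations, if orientation does not matter for the sonar crops
        patience=50,                  # early stopping on the validation metric
    )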

  • @rachealcr6752
    @rachealcr6752 A year ago +1

    I just wonder why, when using the same model and settings in two versions in Roboflow and training in Google Colab, the mAP, accuracy, and recall results vary so much, with about a 40% difference.

    • @Roboflow
      @Roboflow  A year ago

      Interesting. Could you let me know what hyperparameters you used in the notebook?

    • @rachealcr6752
      @rachealcr6752 A year ago

      @@Roboflow 100 epochs; everything else remains the same, with a custom dataset.

    • @Roboflow
      @Roboflow  A year ago

      @@rachealcr6752 Which size of the model did you train in Colab? And what training option did you choose in the UI?

    • @rachealcr6752
      @rachealcr6752 A year ago

      @@Roboflow YOLOv8s. PREPROCESSING
      Auto-Orient: Applied
      Resize: Stretch to 640x640
      AUGMENTATIONS
      Outputs per training example: 3
      Flip: Horizontal
      Noise: Up to 5% of pixels
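
One likely source of the gap discussed in this thread is that the two runs are not actually identical: different model size, image size, augmentations (Roboflow's "Outputs per training example: 3" changes the effective dataset), validation split, or reported metric (mAP50 vs. mAP50-95). A sketch of pinning the Colab side to the settings listed above with the ultralytics API; anything not listed in the thread is an assumption.

    # Sketch: mirror the listed Roboflow settings as closely as possible in Colab.
    from ultralytics import YOLO

    model = YOLO("yolov8s.pt")          # same model size as the Roboflow run
    model.train(
        data="dataset/data.yaml",       # export of the *same dataset version* generated in Roboflow
        epochs=100,
        imgsz=640,                      # matches the 640x640 resize
        fliplr=0.5,                     # horizontal flip, matching the listed augmentation
        degrees=0.0, translate=0.0, scale=0.0, mosaic=0.0,   # disable extras Roboflow did not apply
        seed=0,                         # fixed seed so reruns are comparable
    )
    # Also evaluate both runs on the identical validation split and compare the same metric.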

  • @willmarsman1765
    @willmarsman1765 A year ago

    Thanks for reviewing licenses in this context; I've noticed licenses are quite complicated for models compared to other software projects. For example, the super-gradients project has two licenses, one that applies to the model and another to the project overall. The model license also appears completely custom. I hope in the future we will see a consolidation of licenses around publicly shared models.

  • @dahiruibrahimdahiru2690
    @dahiruibrahimdahiru2690 A year ago

    Nah mahn, where has this channel been all this while

  • @fazlehasan9428
    @fazlehasan9428 A year ago

    This is the best video on model selection. Keep making videos like it!

  • @lemonbitter7641
    @lemonbitter7641 A year ago

    That talking to GPT was hilarious 😂

  • @IntelligentQuads
    @IntelligentQuads A year ago +1

    Go Spurs go!

    • @Roboflow
      @Roboflow  A year ago +1

      Maybe next year. 😅

  • @diogoalves...
    @diogoalves... A year ago +1

    Great video, Peter! It would be nice if you could provide us with a summary table sometime in the future, something that includes columns such as usability, portability, available customization parameters, license, latency, etc.
    Additionally, a follow-up with an evaluation template would be greatly appreciated. It would help us compare our fitted models effectively.
    Congratulations! The content was truly excellent.

  • @body1024
    @body1024 A year ago +1

    keep it coming 😍

    • @Roboflow
      @Roboflow  A year ago

      Thanks a lot for the kind words 🙏🏻

  • @st43r62
    @st43r62 A year ago +1

    the bestest!

    • @Roboflow
      @Roboflow  A year ago

      Thanks a lot! 🙏🏻

  • @techradar6787
    @techradar6787 A year ago +1

    Useful ❤❤❤