Accelerate Image Annotation with SAM and Grounding DINO | Python Tutorial

  • Published 1 Oct 2024

COMMENTS • 102

  • @SadiyaRasool-x2c
    @SadiyaRasool-x2c 21 days ago +1

    Hi! I was wondering if you could let me know how I can use custom images to detect different objects (other than the labels already in the notebook, like camera, hat, light, etc.) and how to add their labels so they can be detected.
    I'm a beginner in this field and would really appreciate the help!

  • @monkeywrench1951
    @monkeywrench1951 1 year ago +2

    I wonder if Segment Anything can be accelerated, or even if it would run on the Google Coral edge accelerator.

    • @Roboflow
      @Roboflow  1 year ago +1

      I heard you can use OpenVINO to run it on CPU, as long as it is an Intel CPU.

  • @chinnagadilinga5742
    @chinnagadilinga5742 1 year ago +2

    Hi sir, I'm a beginner. I saw your computer vision videos; they're all combined and merged. Could you please post a one-by-one video order so we can understand more easily? Thank you.

    • @Roboflow
      @Roboflow  1 year ago

      Hi, it's Peter from the video. Do you mean videos related to zero-shot annotation?

  • @adolfusadams4615
    @adolfusadams4615 1 year ago +3

    Hey Peter, could you do a video showing how to integrate SuperGradients/YOLO-NAS with Roboflow's Autodistill for custom detections on a live real-time webcam feed?
    Could you also show, maybe in another video, how to add custom objects to an existing dataset like the COCO dataset?
    This would be epic. 🔥

  • @mentarus
    @mentarus 1 year ago +1

    Great video and notebook! However, it looks like the supervision install step fails with: groundingdino 0.1.0 requires supervision==0.4.0

  • @Aziz-bg4ph
    @Aziz-bg4ph 1 year ago +1

    How can I extract the segmented object produced by SAM?

    • @Roboflow
      @Roboflow  1 year ago

      Masks are stored in `detections.mask`.
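
A minimal sketch of pulling each segmented object out of the frame, assuming `detections.mask` is a boolean array of shape (N, H, W) as in supervision (the function name and variables here are illustrative, not from the notebook):

```python
import numpy as np

# Cut each segmented object out of the frame. `masks` is assumed to be a
# boolean array of shape (N, H, W) like supervision's `detections.mask`,
# and `image` the (H, W, 3) source frame.
def extract_objects(image: np.ndarray, masks: np.ndarray) -> list:
    objects = []
    for mask in masks:
        cutout = np.zeros_like(image)   # black background
        cutout[mask] = image[mask]      # copy only the masked pixels
        ys, xs = np.where(mask)         # crop to the mask's bounding box
        objects.append(cutout[ys.min():ys.max() + 1, xs.min():xs.max() + 1])
    return objects
```

Each returned array is the object on a black background, cropped to its bounding box; write it out with `cv2.imwrite` if you need files.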

  • @kobic8
    @kobic8 1 year ago +1

    I noticed the awesome supervision package has a method to load datasets in Pascal VOC format. Are you planning to also support the COCO format (including export)?

  • @kaisbedioui7456
    @kaisbedioui7456 1 year ago +2

    As always, a very cool video!
    Really curious to see the Autodistill tool 🎉
    Does the Smart Polygon tool leverage SAM as well?

    • @Roboflow
      @Roboflow  1 year ago +2

      Yes, it does! We have been running SAM in Smart Polygon since last week 🔥

  • @kategeorge1152
    @kategeorge1152 1 year ago +1

    Any chance of a tutorial on SAM and Roboflow for remote sensing of satellite or UAV imagery?

    • @Roboflow
      @Roboflow  1 year ago

      Please tell me more about the idea. What would you like to see?

  • @alassanesakande8791
    @alassanesakande8791 1 year ago +2

    Incredible video! I was just reading about Grounded-SAM this morning, and boom, you're making a tutorial on it. Great job! I'm just wondering if I could find ways to use it for a medical imagery task. What do you think?

    • @Roboflow
      @Roboflow  1 year ago +1

      Do you want to do full auto or bounding box to mask?

    • @alassanesakande8791
      @alassanesakande8791 1 year ago +1

      @@Roboflow I would go for automatic segmentation, but I'd also like it to be interactive for the user. So maybe combining the two would be more appreciated.

    • @Roboflow
      @Roboflow  1 year ago +2

      @@alassanesakande8791 That is our plan for the next stage: allow full auto or human-in-the-loop :) I also think that being able to interactively edit those labels before you use them to train, for example, YOLOv8 is required.

  • @shamukshi
    @shamukshi 1 year ago

    For "solar panel counting from a UAV image", which approach is better? 1. Creating bounding boxes (BBs) for solar panels with an object detection model and then using the BBs as input for SAM, or 2. segmenting everything in the image with SAM and then classifying each segment as solar panel or non-solar-panel?

  • @harumambaru
    @harumambaru 1 year ago +1

    Thank you so much for the video explanation. The walkthrough makes all the difference. For example, the prompt engineering explanation at 5:53 is so useful.

  • @bb-andersenaccount9216
    @bb-andersenaccount9216 1 year ago +2

    I guess it would be great to include in both supervision and autodistill a feature that gets the bounding box given a polygon segmentation from SAM.

    • @Roboflow
      @Roboflow  1 year ago +1

      We have that already in supervision: roboflow.github.io/supervision/detection/utils/#mask_to_xyxy
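
As a rough illustration of what `mask_to_xyxy` computes (this is not the library's actual implementation, just a plain-numpy sketch): for each boolean mask, the box is the min/max of the mask's coordinates.

```python
import numpy as np

# Numpy sketch of turning boolean masks (N, H, W) into boxes
# [x_min, y_min, x_max, y_max]; illustrative, not supervision's own code.
def masks_to_xyxy(masks: np.ndarray) -> np.ndarray:
    boxes = np.zeros((len(masks), 4), dtype=int)
    for i, mask in enumerate(masks):
        ys, xs = np.where(mask)
        if len(ys):  # empty masks stay as an all-zero box
            boxes[i] = [xs.min(), ys.min(), xs.max(), ys.max()]
    return boxes
```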

  • @hyunseungshin3955
    @hyunseungshin3955 1 year ago +1

    Great tutorial!! Is it possible to run it on real-time video, something like a webcam?

    • @Roboflow
      @Roboflow  1 year ago

      Thanks a lot! 🙏🏻 The model is too slow to run in real time :/ The whole inference for a single frame can take around 1-2 seconds.

  • @adriancontrerasgarcia7968
    @adriancontrerasgarcia7968 7 months ago

    Can I convert a multiclass object detection dataset to a segmentation dataset with this? I have only seen the example with the single-class blueberries dataset, so I'm not sure.

  • @heetshah5718
    @heetshah5718 1 year ago +1

    I am currently working on a pollution detection and classification system project. Can I use Grounding DINO and SAM for that?

    • @Roboflow
      @Roboflow  1 year ago

      What would that be? Images of smoke for example?

    • @heetshah5718
      @heetshah5718 1 year ago

      @@Roboflow Images of plastic underwater and oil pollution in water

  • @kobic8
    @kobic8 1 year ago +1

    Great tutorial! Can you post the link to the Jupyter notebook in the video description?

    • @Roboflow
      @Roboflow  1 year ago

      It is in the description. But here is the link: colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/automated-dataset-annotation-and-evaluation-with-grounding-dino-and-sam.ipynb

  • @ranpinc
    @ranpinc 1 year ago +1

    Thank you for your work, this is exactly what we need urgently. At the moment it seems to only support saving data in Pascal VOC format; do you have any plans to provide an API to convert it to COCO format?

    • @Roboflow
      @Roboflow  1 year ago

      Currently the order is YOLO and then COCO. But it might happen next week.

    • @ranpinc
      @ranpinc 1 year ago

      @@Roboflow That's cool! The sooner the better. Thank you for your work again!

  • @gbo10001
    @gbo10001 1 year ago +1

    That's really great, I've been waiting for that! BTW, why is there no support for tracking annotation formats like MOT/MOTS?

    • @Roboflow
      @Roboflow  1 year ago

      I know it took me a lot of time... But this was possibly the most complicated Jupyter Notebook I ever made.

    • @gbo10001
      @gbo10001 1 year ago +1

      @@Roboflow That's a really great contribution to the community 😎 thanks for that

    • @Roboflow
      @Roboflow  1 year ago

      @@gbo10001 we are working on something even better! 🔥

    • @Roboflow
      @Roboflow  1 year ago +1

      @@gbo10001 hahaha better than SAM + DINO

  • @dilshodbazarov7682
    @dilshodbazarov7682 1 year ago +1

    Awesome tutorial!!!
    But while running the step at 6:25, I got the error "NameError: name '_C' is not defined" (after a long error description). Can anyone help?

    • @Roboflow
      @Roboflow  1 year ago

      Could you give me a bit more info? Do you run it in Google Colab?

    • @thegodofrotation-animeamvs7204
      @thegodofrotation-animeamvs7204 1 year ago +1

      @@Roboflow I have the same error. I ran the Colab from top to bottom and got this error at the first annotation part, on the line detections = grounding_dino_model.predict_with_classes(..
      Any help would be appreciated!

    • @Roboflow
      @Roboflow  1 year ago

      @@thegodofrotation-animeamvs7204 I'll do my best to take a look at that. Could you submit a new issue here: github.com/roboflow/notebooks/issues

    • @mhdemadeddinaldoghry1851
      @mhdemadeddinaldoghry1851 9 months ago

      Any update?

  • @kobic8
    @kobic8 1 year ago +1

    Thanks to this great vid (and notebook) I have tried using it together with SAM. I'm curious how I can use a labeled dataset I have (of sea objects) to teach the model to detect not just a boat/ship but to identify the type of marine vessel.

    • @Roboflow
      @Roboflow  1 year ago +1

      Do you have labels for marine vessels in your dataset? Or only boat/ship?

    • @kobic8
      @kobic8 1 year ago +1

      @@Roboflow Thanks so much for the reply! I'm really trying to figure out how to solve this issue. Yes, I do have a human-labeled dataset for specific classes of marine vessels, e.g. frigate, corvette, and also some ships with their specific names. My question was whether there is a way to fine-tune the Grounding DINO model to identify the objects not as "boat" or "ship" but with more accurate labels.

    • @Roboflow
      @Roboflow  1 year ago +2

      @@kobic8 Yes, it probably is possible, but you would be much better off training a model like YOLOv8. The power of Grounding DINO comes from zero-shot detection, the ability to detect objects it has never seen. If you already have an annotated dataset, just train a regular object detection model. :)

    • @kobic8
      @kobic8 1 year ago

      @@Roboflow But it would be "less powerful" compared to G-DINO. I just thought to tune G-DINO to refine specific labels, so I thought it would be better to somehow get the training code.

  • @praveen9083
    @praveen9083 1 year ago +2

    Wow... excited for Autodistill! :)

    • @Roboflow
      @Roboflow  1 year ago

      That’s what I wanted to hear 💜

  • @lorisdeluca610
    @lorisdeluca610 1 year ago +1

    It's a very cool concept and surely helpful for some segmentation tasks. However, I see this working mainly with clear, uncrowded images. In many tests I did, quite often a lot of items were mislabeled. Nonetheless, cool idea and love the channel!

    • @Roboflow
      @Roboflow  1 year ago +4

      Absolutely! But keep in mind that 3 years ago it was impossible. We just try to highlight cutting-edge models in 2023. I absolutely agree. We are not yet able to get good results for every image.

  • @sebbecht
    @sebbecht 1 year ago +1

    Hey there! I really like these videos a lot. Certainly with fast labelling the specific task can be trained supervised. But is there an opportunity in using SAM and/or DINO as a teacher for distillation into a smaller (final) model, even before creating an annotated dataset? Would this be competitive with other self-supervised pretraining methods?

    • @Roboflow
      @Roboflow  1 year ago

      Hi 👋🏻 You mean SAM and Grounding DINO would generate training examples on the fly during training?

    • @Roboflow
      @Roboflow  1 year ago

      @@sebbecht We didn't explore that route yet, but it would be awesome to test those theories. Thanks for sharing :) I never run out of ideas thanks to conversations like this.

    • @sebbecht
      @sebbecht 1 year ago +1

      @@Roboflow my pleasure, I hope you get to explore and share some findings!

    • @Roboflow
      @Roboflow  1 year ago

      @@sebbecht stay tuned :)

  • @kobic8
    @kobic8 1 year ago +1

    In your previous video on Grounding DINO, you elaborated on a text prompt as an input. Can this be implemented here as well? Are you planning on extending this tutorial (or notebook) to show how to implement it? Also, I noticed you can implement stable diffusion tools such as "change dog to a monkey". Can that also be in the next vid?

    • @Roboflow
      @Roboflow  1 year ago +1

      Auto-labeling with prompts will be part of the autodistill package that is coming soon. As for stable diffusion, I can't promise anything :/ We have a lot of stuff in the backlog. But maybe I'll play with it on a Twitch stream.

    • @kobic8
      @kobic8 1 year ago +1

      @@Roboflow Thanks a lot! Any estimate of the release date for autodistill?

    • @Roboflow
      @Roboflow  1 year ago

      @@kobic8 it is close! Reaaaaaaaly close!

    • @Roboflow
      @Roboflow  1 year ago +2

      @@kobic8 I don't want to overpromise, but I heard something about today :)

  • @olanrewajuatanda533
    @olanrewajuatanda533 1 year ago

    I keep getting error messages whenever I use some of the images in my dataset.

  • @cyberhard
    @cyberhard 1 year ago +2

    Nice! Looking forward to seeing the new library in action.

    • @Roboflow
      @Roboflow  1 year ago +1

      I’ll do my best to not disappoint you ;)

  • @snehitvaddi
    @snehitvaddi 1 year ago +1

    Hey Peter! Can I use the SAM labelling for object detection as well? Or is it only for instance segmentation?

    • @Roboflow
      @Roboflow  1 year ago

      You can always convert segmentation into detection. It is just a bit, hm... a poor use of resources, as it is super time-consuming. What project do you have in mind?

    • @snehitvaddi
      @snehitvaddi 1 year ago +1

      @@Roboflow I'm working on detecting potato quality on a conveyor belt. I labeled some photos using SAM, but I'm not sure whether the polygon labeling actually helps object detection or whether a basic rectangular boundary will be enough.

    • @Roboflow
      @Roboflow  1 year ago +1

      @@snehitvaddi yes, for modern models like YOLOv8 it helps: blog.roboflow.com/polygons-object-detection/

    • @snehitvaddi
      @snehitvaddi 1 year ago +1

      @@Roboflow cool, thanks

    • @Roboflow
      @Roboflow  1 year ago +1

      @@snehitvaddi Use the one that is faster to annotate. Polygons can be converted to boxes really easily.
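
Converting a polygon label to a detection box really is just a min/max over its vertices; a minimal sketch (the function name is illustrative):

```python
import numpy as np

# Reduce a polygon annotation to an axis-aligned box.
# `polygon` is an (N, 2) array of (x, y) vertices.
def polygon_to_xyxy(polygon: np.ndarray):
    x_min, y_min = polygon.min(axis=0)  # top-left corner
    x_max, y_max = polygon.max(axis=0)  # bottom-right corner
    return float(x_min), float(y_min), float(x_max), float(y_max)
```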

  • @_ABDULGHANI
    @_ABDULGHANI 1 year ago +1

    Thank you, this is exactly what I was waiting for.

    • @Roboflow
      @Roboflow  1 year ago

      I love to hear that! 🔥

  • @aipp-pe8ud
    @aipp-pe8ud 7 months ago

    How to remove white borders from generated images?

    • @Roboflow
      @Roboflow  7 months ago

      Use cv2.imwrite to save the image to disk www.geeksforgeeks.org/python-opencv-cv2-imwrite-method/amp/ and download it manually.
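
The white padding usually comes from saving a matplotlib figure rather than writing the raw pixel array with `cv2.imwrite`. If an image already has uniform white padding baked in, a small numpy helper (illustrative, not from the notebook) can trim it:

```python
import numpy as np

# Trim rows/columns that are pure white (255) from the edges of an
# (H, W, 3) uint8 image. Illustrative helper, not part of the notebook.
def trim_white_border(image: np.ndarray) -> np.ndarray:
    non_white = (image < 255).any(axis=2)  # True wherever any channel < 255
    ys, xs = np.where(non_white)
    if len(ys) == 0:                       # all-white image: nothing to keep
        return image
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```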

  • @deentong5311
    @deentong5311 1 month ago

    6:45 what if I want to detect the umbrella above

    • @deentong5311
      @deentong5311 1 month ago

      Or each of the lights in the umbrella

  • @BilalHaider-h8f
    @BilalHaider-h8f 1 year ago

    Can it be used to annotate for semantic segmentation or only instance?

  • @lorenzoleongutierrez7927
    @lorenzoleongutierrez7927 1 year ago +3

    Great job as usual!

    • @Roboflow
      @Roboflow  1 year ago

      Thanks a lot! 🙏 we are not slowing down

  • @moorthyedec
    @moorthyedec 1 year ago

    Hi, anything for cancer cell applications?

  • @johnpoc6594
    @johnpoc6594 1 year ago

    Very nice video and explanation, thank you very much!

  • @Samiksha-v1l
    @Samiksha-v1l 1 year ago

    Hi, can this also be implemented on custom objects? If so, how?

    • @Roboflow
      @Roboflow  1 year ago

      What do you mean by custom object?

  • @patrickwasp
    @patrickwasp 1 year ago

    Can you combine separate polygons into a single object?

  • @newahmeddresses
    @newahmeddresses 1 year ago

    You're awesome man, thank you so much

  • @body1024
    @body1024 1 year ago +1

    thank you so much 😍

    • @Roboflow
      @Roboflow  1 year ago

      Thanks for watching! :)

  • @tomaszbazelczuk4987
    @tomaszbazelczuk4987 1 year ago +1

    Awesome video as usual😮👍

    • @Roboflow
      @Roboflow  1 year ago

      Thank you very much… doing my best 🙏🏻

  • @saharabdulalim
    @saharabdulalim 1 year ago

    Thank you for this incredible vid! 💖 But I have a question: when trying to run the command `detections.mask = segment(sam_predictor=sam_predictor, image=image, xyxy=filtered_detections.xyxy)` followed by `mask_annotator = sv.MaskAnnotator()`, I get `NameError: name 'segment' is not defined`. I searched for it in the segment_anything `__init__` but it isn't there. Is this function built into the segment_anything module, or should I write it myself?

    • @saharabdulalim
      @saharabdulalim 1 year ago

      I replaced this command of yours:
      from tqdm.notebook import tqdm
      for image_name, image in tqdm(object_detection_dataset.images.items()):
          detections = object_detection_dataset.annotations[image_name]
          detections.mask = segment(
              sam_predictor=sam_predictor,
              image=cv2.cvtColor(image, cv2.COLOR_BGR2RGB),
              xyxy=detections.xyxy
          )

    • @Roboflow
      @Roboflow  1 year ago

      It looks to me like you didn't run all the cells in the notebook. The segment function is defined in one of the cells. No need to change the code.
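
For reference, the notebook's `segment` helper looks roughly like this (paraphrased, not copied verbatim, so check the actual notebook cell for the exact code): it runs the SAM predictor box by box and keeps the highest-scoring of the proposed masks for each box.

```python
import numpy as np

# Sketch of the `segment` helper defined in the notebook: for each box,
# ask the SAM predictor for mask proposals and keep the best-scoring one.
def segment(sam_predictor, image: np.ndarray, xyxy: np.ndarray) -> np.ndarray:
    sam_predictor.set_image(image)
    result_masks = []
    for box in xyxy:
        masks, scores, _ = sam_predictor.predict(box=box, multimask_output=True)
        result_masks.append(masks[np.argmax(scores)])  # highest-scoring proposal
    return np.array(result_masks)
```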

    • @saharabdulalim
      @saharabdulalim 1 year ago +1

      @@Roboflow Oh I see, thanks, it has been solved. Can I ask another question? My dataset is in COCO format on my PC, not Roboflow, so I converted it into Pascal format to be able to follow your steps for converting to segmentation, but it didn't work at all. Is there a function in supervision to read COCO format like Pascal? I searched, but it gives me errors.

    • @Roboflow
      @Roboflow  1 year ago +1

      @@saharabdulalim Hi! We want to add COCO loading to supervision, but it won't happen too soon :/ If you want to follow those steps now, I'd upload the dataset to Roboflow. That's probably the fastest way for now.

    • @saharabdulalim
      @saharabdulalim 1 year ago

      @@Roboflow Is it possible to upload the whole dataset to Roboflow without annotating every image, since I already have the annotation file?

  • @kamaraalhassanshaike1625
    @kamaraalhassanshaike1625 1 year ago +1

    Wow, this is fantastic

  • @zes7215
    @zes7215 1 year ago

    wrg