Fast Zero Shot Object Detection with OpenAI CLIP

  • Published 17 Dec 2024

COMMENTS • 33

  • @BradNeuberg 1 year ago +5

    Since this video was released, it looks like the image rescaling assumptions of the CLIP model being used have changed. In the existing code in this video's notebook, the image values have already been scaled to 0-1 by the time the image is fed to the processor() function. Unfortunately this breaks some newer CLIP assumptions and everything will break for you, so you should pass big_patches*255. into the processor() call for things to work correctly.
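
    A minimal sketch of that fix, assuming `big_patches` holds 0-1 scaled patch values and `processor` is the Hugging Face CLIPProcessor used in the notebook:

    ```python
    import torch
    from transformers import CLIPProcessor

    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # stand-in for the notebook's patch tensor, already scaled to 0-1
    big_patches = torch.rand(3, 768, 768)

    # rescale back to 0-255 before handing it to the processor, which
    # now expects raw pixel values by default
    inputs = processor(
        images=big_patches * 255.0,
        return_tensors="pt",
    )
    ```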

    • @drewholmes9946 11 months ago +1

      @jamesbriggs Can you pin this and/or update the Pinecone article?

  • @rogerganga 1 year ago +2

    Hey James! As someone with zero coding experience in computer vision and new to OpenAI's CLIP, I found this video incredibly valuable. Thank you so much!

  • @ceegee9064 2 years ago +1

    What an incredibly approachable breakdown of a very complicated topic, thank you!

  • @manumaminta6131 1 year ago +3

    Hi! Love the content. I was just wondering: since we are passing patches of images to the CLIP visual encoder (and each patch has its own dimensions), does that mean we have to resize the patches so they fit the input dimension of the CLIP visual encoder? :) Looking forward to your reply
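
    For reference, a short sketch (assuming the standard Hugging Face CLIPProcessor) showing that the processor handles the resizing itself, so patches of any size can be passed in:

    ```python
    import torch
    from transformers import CLIPProcessor

    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # a dummy RGB patch of arbitrary size (values in 0-255)
    patch = torch.rand(3, 96, 96) * 255

    inputs = processor(images=patch, return_tensors="pt")
    # the processor resizes every patch to the model's expected input
    # resolution (224x224 for ViT-B/32)
    print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])
    ```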

  • @hariypotter8 1 year ago +2

    Using your code line for line, I'm having trouble with this: no matter what prompt I use, my output image looks exactly the same in terms of localization and the dimming of patches based on score. It looks like I'm only seeing the most frequently visited patches rather than those with the highest CLIP scores. Any ideas?

    • @ITAbbravo 1 year ago +1

      I might be a bit late to the party, but it seems that the major issue is that the variable "runs" is initialized with torch.ones instead of torch.zeros. The localization is still not as good as the one in the video, though...
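
      A sketch of that fix, assuming `runs` is the notebook's per-patch counter of how many sliding windows covered each patch (names taken from the video's notebook):

      ```python
      import torch

      patches_y, patches_x = 12, 16  # illustrative patch-grid size

      scores = torch.zeros(patches_y, patches_x)
      # must start at zero so the per-patch average isn't biased by a
      # phantom extra visit
      runs = torch.zeros(patches_y, patches_x)  # not torch.ones(...)

      # ... accumulate window scores and increment runs per covered patch,
      # then average, guarding against unvisited patches:
      # scores = scores / runs.clamp(min=1)
      ```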

  • @andrer.6127 1 year ago +1

    I have been trying to figure out how to change it from one class and one instance to one class with many instances, but I can't seem to work it out. What should I do?

  • @AthonMillane 1 year ago +1

    Hi James, thanks for the fantastic tutorial. How do you think this would work for, e.g., drawing bounding boxes around multiple books on a bookshelf? They are next to each other, so the image patches will all probably correspond to "book", but which individual book is not clear. Would making the patches smaller improve things? Any ideas on how to address this use case would be much appreciated. Cheers!

  • @henkhbit5748 2 years ago

    Really amazing, the advances in AI. Thanks for showing the hybrid approach to "object detection" using text👍

    • @jamesbriggs 2 years ago

      Glad you liked it. I'm always impressed with how quickly things are moving in AI, it's fascinating

  • @andy111007 1 year ago

    Hi James, how did you create the dataset? Did you need to annotate the images, or convert them to YOLO or COCO format, before forming the dataset? Would love to hear more. Thanks,
    Ankush Singal

  • @stevecoxiscool 1 year ago

    What models and technology would one use to "scan" a directory of images and then output text describing what the model found in each image?
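
    One way to do this (not covered in the video, so treat it as a sketch) is an image-captioning model such as BLIP via Hugging Face transformers; the "images/" directory here is a placeholder:

    ```python
    import os
    from PIL import Image
    from transformers import BlipProcessor, BlipForConditionalGeneration

    processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
    model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

    for fname in os.listdir("images/"):  # placeholder directory
        if not fname.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        image = Image.open(os.path.join("images/", fname)).convert("RGB")
        inputs = processor(images=image, return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=30)
        # print a short caption for each image in the directory
        print(fname, "->", processor.decode(out[0], skip_special_tokens=True))
    ```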

  • @hridaymehta893 1 year ago

    Thanks for your efforts, James!
    Amazing video!!

  • @Helkier 1 year ago

    Hello James, the Colab link is no longer available in your Pinecone article

  • @hchautrung 1 year ago

    Might I know the total runtime if we put this into production?

  • @khalilsabri7978 1 year ago

    Thanks for the video, amazing work!!!

  • @AIfitty-xs7qn 1 year ago

    Hello James! I have a use case for CLIP, I think, if it works. I am not a computer programmer and have never used Colab, but I have a few months to learn, if learning all that can be done in that amount of time. I also have about 30k-40k photos that I would like to tag every day in the summer - tagged as either blue shirt or white shirt (sports). Every tutorial I have seen uses a dataset that is located online. Can I direct CLIP to my local server to perform object detection? Do the photos need to be in any particular format for optimum results? Well, let me back up. Can you direct me to a resource that will give me the background I need to be able to follow along with you in these videos? After that, I should be able to ask more relevant questions. Thank you for the videos!
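
    A hedged sketch of what the tagging part could look like: zero-shot classification of local image files with CLIP via Hugging Face transformers. The directory path and shirt labels are placeholders for this use case:

    ```python
    import os
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    labels = ["a player wearing a blue shirt", "a player wearing a white shirt"]
    photo_dir = "/path/to/photos"  # placeholder: your local photo directory

    for fname in os.listdir(photo_dir):
        if not fname.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        image = Image.open(os.path.join(photo_dir, fname)).convert("RGB")
        inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
        # similarity of the image against each text label
        logits = model(**inputs).logits_per_image  # shape [1, len(labels)]
        print(fname, "->", labels[logits.argmax(dim=-1).item()])
    ```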

  • @SinanAkkoyun 1 year ago

    Thank you! Can you get the vectors right out of CLIP without supplying a prompt? So that you get embeddings for every patch and can then derive what is being detected?

    • @jamesbriggs 1 year ago +1

      You can get embeddings, but they come after the CLIP encoder stage. The image patches are what are fed into the model, and they aren't very easily interpretable - it's the CLIP encoding layers that encode 'meaning' into them
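
      For reference, a sketch (standard Hugging Face CLIP API) of pulling an image embedding without any text prompt:

      ```python
      import torch
      from PIL import Image
      from transformers import CLIPModel, CLIPProcessor

      model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
      processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

      image = Image.open("patch.jpg")  # placeholder: a single image patch
      inputs = processor(images=image, return_tensors="pt")

      with torch.no_grad():
          # embedding from the image encoder alone; no text needed
          image_features = model.get_image_features(**inputs)
      print(image_features.shape)  # torch.Size([1, 512])
      ```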

  • @lorenzoleongutierrez7927 1 year ago

    Great tutorial! 👏

  • @TheArkLade 1 year ago

    Does anyone know why [IndexError: list index out of range] appears when trying to detect more than 2 objects? For example: detect(["cat eye", "butterfly", "cat ear"], img, window=6, stride=1)

  • @abhishekchintagunta8731 2 years ago

    Excellent explanation, kudos James

  • @Ahmad-H5 2 years ago

    Hello, thank you so much for creating this video, it is quite easy to follow for a beginner like me☺. I was also wondering if CLIP can connect images to text instead of text to images.

    • @jamesbriggs 2 years ago

      Yes, 100% - after you process the images and text with CLIP, it just outputs vectors, and with vector search it doesn't matter whether those were produced from text or images. See here:
      ua-cam.com/video/fGwH2YoQkDM/v-deo.html
      Hope that helps!
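
      A sketch of that shared embedding space (standard Hugging Face CLIP calls), where image-to-text and text-to-image search both reduce to comparing vectors:

      ```python
      import torch
      from PIL import Image
      from transformers import CLIPModel, CLIPProcessor

      model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
      processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

      image = Image.open("photo.jpg")  # placeholder image
      text_inputs = processor(text=["a photo of a dog"], return_tensors="pt", padding=True)
      image_inputs = processor(images=image, return_tensors="pt")

      with torch.no_grad():
          text_emb = model.get_text_features(**text_inputs)
          image_emb = model.get_image_features(**image_inputs)

      # both embeddings live in the same space, so you can query in either
      # direction with plain cosine similarity
      print(torch.cosine_similarity(text_emb, image_emb).item())
      ```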

  • @shaheerzaman620 2 years ago

    Fascinating!

  • @papzgaming9412 1 year ago

    Thanks

  • @andy111007 1 year ago +1

    The code does not work for forming a bounding box around the localized object