References:
►Read the full article: www.louisbouchard.ai/meta-sam/
►Paper: Kirillov et al., Meta, (2023): Segment Anything, ai.facebook.com/research/publications/segment-anything/
►Demo: segment-anything.com/demo
►Code: github.com/facebookresearch/segment-anything
►Dataset: segment-anything.com/dataset/index.html
Nice video about a nice model.
About the dataset, however, it should be made clear that the largest part of the 1B dataset is not curated but fully automatically annotated by SAM itself, after the model was trained iteratively on a much smaller set (around 160k images) of human-annotated images. The dataset can be browsed freely on their site, and it doesn't take long to see that a lot of images are missing complete and consistent annotations.
Hello @WhatsAi, I really enjoyed your video! I recently started using SAM and I'm amazed by how precisely it works. I do have a question about the Demos though. In the video, there are two options shown for both semantic and image tasks: 'Add Mask' and 'Remove Areas.' While I understand 'Add Mask,' I'm a bit confused about what 'Remove Areas' does. It would be immensely helpful if you could explain the functionality of 'Remove Areas' in more detail. Thank you in advance! 🙏🙏
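Not the video author, but from the paper and demo: a click in the demo is a point prompt with a label (in SAM's API, `point_labels` 1 for foreground and 0 for background). 'Add Mask' sends a positive click that adds the object under it to the selection, while 'Remove Areas' sends a negative click that carves that region back out. Here is a toy sketch of that interaction in plain Python; the function name, the region dictionary, and the click-to-region lookup are my own illustration, not Meta's code:

```python
def edit_selection(regions, clicks):
    """regions: dict name -> set of (x, y) pixels (stand-ins for model-predicted masks).
    clicks: list of ((x, y), label) point prompts; label 1 = foreground, 0 = background."""
    selected = set()
    for point, label in clicks:
        # The region the click lands in stands in for the mask SAM would predict there.
        hit = next((pixels for pixels in regions.values() if point in pixels), set())
        if label == 1:       # "Add Mask": union the clicked object into the selection
            selected |= hit
        else:                # "Remove Areas": carve the clicked region back out
            selected -= hit
    return selected

regions = {
    "dog":    {(0, 0), (0, 1), (1, 0), (1, 1)},
    "shadow": {(2, 0), (2, 1)},
}
clicks = [((0, 0), 1),   # Add Mask: click on the dog
          ((2, 0), 1),   # Add Mask: oops, the shadow got selected too
          ((2, 1), 0)]   # Remove Areas: negative click drops the shadow
print(sorted(edit_selection(regions, clicks)))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

In the real model both kinds of points go into the prompt encoder together, so a negative click refines the predicted mask rather than just subtracting pixels, but the add/remove intuition is the same.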
I thought Segment Anything only worked with images. How did you use it to segment those cows at the beginning of your video?
Is it possible for the model to run on an Oculus or any other HMD device? Or have you tried?
Great for satellite images and drone shots
Wow, that is pretty accurate. Are they planning to release a full version? I mean, will they make it a powerful free annotation tool?
Yes it is available! It’s mostly for researchers since it was trained with private data but we can use it (use the code!)
You can also upload images on the demo to try it
@@WhatsAI are we able to retrain the algorithm on our data?
Can you do a tutorial on how to download the model into a Jupyter notebook and start using it? I am following the instructions but keep getting errors.
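In case it helps, here is a minimal setup sketch following the repo's README. The checkpoint filename and URL below are the ones Meta published for the ViT-H model; double-check them against the README if you still hit errors:

```shell
# Install the segment-anything package straight from GitHub
# (requires Python >= 3.8 with PyTorch and torchvision already installed)
pip install git+https://github.com/facebookresearch/segment-anything.git

# Download a model checkpoint (ViT-H here; smaller vit_l and vit_b checkpoints also exist)
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
```

From there, the README's basic notebook usage is roughly `from segment_anything import SamPredictor, sam_model_registry`, then `sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")`, followed by `SamPredictor(sam).set_image(...)` and `.predict(...)`. In my experience most "keep getting errors" cases come down to a missing or mismatched PyTorch install or a wrong checkpoint path.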
What are some of the applications, excluding Photoshop/advertising?
My own research, for example: brain segmentation! It's useful for pretty much every system that includes a camera!
For practical applications, for me it just needs to be made faster. Currently it doesn't seem capable of running in real time, which is a shame, because otherwise it's a dream come true.
how slow is it?
Cool, I want to do text-to-game lol. I'd be like: make a game like Fable on Xbox, set on Earth in 1700s America. Then edit it and make my family live there lol.
I tried uploading a random image to the segmentation model, and it was awful at it.
Would love to see what the image was like. Was it of pretty bad quality and bad lighting?
Thank you. You're managing to do what Two Minutes Paper doesn't: being interesting.
That is a huge compliment, though I highly value Two Minute Papers' content as well! I just prefer keeping it a bit more « technically detailed », even though I try my best to make it simple! :)
@@WhatsAI Two Minutes Paper *was* interesting, many years ago. Now it's all memes and no technical content whatsoever. It's all "wow, hold onto your papers and squeeze, much big numbers wow". No explanation, no nothing. Just "look!! big numbers!!! what a time to be alive!!!"
So yeah, thank you for filling that role. A short digest of interesting papers/models is a very good thing for anyone. After watching such a video I can quickly decide whether making the effort to actually read the paper is worth my time. Thank you for that. Obviously subscribed.
I must admit I haven’t watched his videos for a while now since I’m doing some myself haha, so I will trust you on that!
And that’s fantastic! You described the goal of these videos so I’m really glad it worked for you! 😊
Imagine Two Minutes Paper replying here.
@@unknownstoneageman81 maybe then he'd start thinking about his content again and realize the last time he actually explained an algorithm was years ago
I did notice that the resolution is high, but the fine detail isn't there.
The dog jumping example is a good one: the coarse segmentation is there, but the fine details like the fur are messed up.
I agree. Definitely not nearly as advanced as the fine-tuned background removal algorithms we are now getting used to, but remarkably good for « easier tasks » where we do not need such perfection!