I've developed a fully automatic video editor for short-form content. It reads the frames of the video, images, and the transcript, and applies various effects based on simple text instructions.
Meta's new model is THE missing piece to enable editing more complex scenarios in full YouTube-length videos. Impressive indeed, and thank you for sharing this so promptly after the release, Sam!
Very cool use and makes total sense. These models are getting so good we are bound mostly by our imagination of how to use them.
@@randotkatsenko5157 you have piqued my intrigue.. fellow nerd..
I had the same idea a few months back, but had mercy on my laptop.
Can you share the repo link if it's an open-source project?
Yeah, but can you use SAM2 locally (what hardware do you need), and how would you implement it in your workflow?
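For anyone curious, here is a rough sketch of the local video workflow, assuming the sam2 package from the official segment-anything-2 repo is installed and a checkpoint is downloaded; the config/checkpoint names, frame directory, and click coordinates below are placeholders, not anything official:

```python
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

# Placeholder names: use whichever SAM2 config/checkpoint you downloaded.
predictor = build_sam2_video_predictor(
    "sam2_hiera_t.yaml", "checkpoints/sam2_hiera_tiny.pt"
)

with torch.inference_mode():
    # init_state expects a directory of JPEG frames extracted from the video
    state = predictor.init_state(video_path="my_clip_frames/")

    # One foreground click (label 1) on the target object in frame 0
    predictor.add_new_points(
        inference_state=state, frame_idx=0, obj_id=1,
        points=np.array([[300, 200]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),
    )

    # Propagate the mask through every frame of the clip
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks = (mask_logits > 0.0).cpu().numpy()  # boolean masks per object
```

Hardware-wise, the smaller checkpoints are the ones to try first on a consumer GPU.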
Cool to see Meta is releasing a new and improved version of Sam Altman.
Love the channel. I appreciate content that's not just all LLMs, all the time
Thanks for the feedback, I will probably broaden the coverage going forward.
This is the kind of video I shared to demonstrate how fast computer vision that's available to anyone is advancing!!
Thanks for the video 🎉
Thanks, glad it was useful
Thanks for sharing, waiting for a project using this model!
Never been a fan of Zuck, but that's all changing now. What he's doing by *truly* open-sourcing everything is game-changing for humanity. Thanks Sam! Keep it going, Zuckerberg!!!
Oh no, I will certainly need to add this to my ever-expanding list of AI investigations. Yes, this is undoubtedly worthy. Cheers, Sam.
Sam talking about Sam!
Multiverse of Sam😂
lol yes a well named model! 😂
I heard you like Sam, so we got Sam to tell you about Sam so that you can put a Sam in your Sam 😂
You've beaten me to it
It's very Meta
This model, released in this way, will lead to so many interesting new applications. It would seem that its use in sports and fitness analysis could be impressive. And even something like traffic analysis, which is currently done by expensive systems, can be done with a consumer camera and open-source software. Kudos to Meta, Mark, and, of course, the OG Sam for letting us all know about it 😁.
You should start covering vision models as you do LLMs🎉❤
Thanks for the feedback, was wondering if there was interest.
I think the concept of transfer learning is really successful for vision models and hence vision models are a great way to explain this larger concept.
Please cover industrial use cases using CV models as well
Curious, are there any techniques you want in particular?
@@samwitteveenai Crack detection and segmentation? It's a kind of standard.
Is there a friendly user-interface project, such as a Gradio interface, to test SAM2 with video locally? (The demo is there, of course.) But testing locally would be nice, to try different hardware, since SAM2 is supposed to be faster.
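Not that I've seen yet, but a bare-bones one is easy to roll yourself. Here's a rough sketch for still images (assumptions: a CUDA GPU, the sam2 package installed, and placeholder config/checkpoint names; a video UI would build on the video predictor instead):

```python
import numpy as np
import gradio as gr
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Placeholder names: use whichever SAM2 config/checkpoint you downloaded.
predictor = SAM2ImagePredictor(
    build_sam2("sam2_hiera_t.yaml", "checkpoints/sam2_hiera_tiny.pt")
)

def segment(image, x, y):
    # Overlay the best mask for whatever sits under the (x, y) point prompt.
    predictor.set_image(image)
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[x, y]]),
        point_labels=np.array([1]),  # 1 = foreground click
    )
    best = masks[np.argmax(scores)].astype(bool)
    out = image.copy()
    out[best] = (0.5 * out[best] + 0.5 * np.array([0, 255, 0])).astype(np.uint8)
    return out

gr.Interface(
    fn=segment,
    inputs=[gr.Image(), gr.Number(label="x"), gr.Number(label="y")],
    outputs=gr.Image(),
).launch()
```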
This is sick 😮. Thanks for sharing. Missed that.
Impressive indeed. 👍 to Meta for open-sourcing the model. I suppose Mark likes your name and did a rocket launch 😄
Wow! What's left for OpenAI? How can they still be valued at $70 billion? Again, Meta has released another large model that lets people generate synthetic data. I think the moat wasn't the LLM itself; it was becoming the one to democratise it for everyone, and Meta did that!
Meta is killing it right now with fully open models.
Those who aren't following AI news are missing out on all the advancements in this field.
Good video. Thank you for sharing.
More exciting than Llama 405b
This is so next level
Good afternoon, please tell me how to download it?
Great analysis thanks 👍🙏
Is this something that can be run locally on custom video?
Yes
@@CrimsonJacksonD ok
Great walkthrough
Rotoscope with anime?
Should it run on the Apple Vision Pro with its M2?
Probably, but it will need to be converted.
Access to the demo denied? Can you please provide an updated link to the code, thx
Can we run this on a normal computer, or do we need a high-end one?
Is it ready to use now? I'm in Asia right now..
Yes, you should be able to use it anywhere; it's an open-weights model.
thx for sharing🎉
How do you think this would do for real-time sentiment analysis on a face?
There are many really good face tracking and preprocessing algorithms out there. SAM2 would only be able to do the "tracking" part. You would still need to do further processing to infer emotions.
You could maybe replace the first and last few layers of the architecture with custom ones, freeze the middle parameters, and train on custom data. This last approach would probably give better results.
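As a rough sketch of that last approach (everything here is illustrative: the tiny stand-in backbone, the 7-class emotion head, and the dummy data are assumptions, since SAM2 ships no emotion head):

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained encoder (e.g. SAM2's image encoder would go here).
backbone = nn.Sequential(nn.Conv2d(3, 256, 3, padding=1), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad = False  # freeze the pretrained "middle" of the network

# Small trainable head mapping features to 7 basic emotion classes.
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, 7))
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

faces = torch.randn(8, 3, 224, 224)   # dummy batch of face crops
labels = torch.randint(0, 7, (8,))    # dummy emotion labels

logits = head(backbone(faces))        # frozen features -> trainable head
loss = loss_fn(logits, labels)
loss.backward()                       # gradients only reach the head
optimizer.step()
```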
10GB of VRAM? Is it possible to split it between 16GB RAM and 4GB VRAM on an NVIDIA GTX 1070? Or quantize to uint8 or int4 (I don't remember the exact word for precision :P)? And can you make an example that takes data from SAM2 and does something with Florence? :)
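The word is "precision" :) For the memory question, here's a hedged sketch of two standard PyTorch options; whether either actually fits SAM2 into 4GB of VRAM is untested, and the config/checkpoint names are placeholders:

```python
import torch
import torch.nn as nn
from sam2.build_sam import build_sam2

def load():
    # Placeholder names: use whichever SAM2 config/checkpoint you downloaded.
    return build_sam2("sam2_hiera_t.yaml", "checkpoints/sam2_hiera_tiny.pt",
                      device="cpu")

# Option 1: float16 ("half precision") roughly halves the weight memory.
model_fp16 = load().half().to("cuda")

# Option 2: dynamic int8 quantization of the Linear layers; this runs on the
# CPU, so it uses your 16GB of system RAM instead of the 4GB of VRAM.
model_int8 = torch.ao.quantization.quantize_dynamic(
    load(), {nn.Linear}, dtype=torch.qint8
)
```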
Her name is Samantha. Tomorrow is July 31st.
Great for AR
I mean, the model is generating segmentation masks, so it is generative AI.
Does anyone else have trouble installing sam2? I have CUDA 12.4 and set my env variable, but I'm getting:
raise OSError('CUDA_HOME environment variable is not set. '
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
Would really appreciate any help
You'd need to add the CUDA and cuDNN paths to the environment variables on your Windows machine.
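A quick way to check what the build will see (the paths in the comments are just typical examples; yours may differ):

```python
import os
import torch

print("torch sees CUDA:", torch.cuda.is_available())
print("torch built for CUDA:", torch.version.cuda)
print("CUDA_HOME:", os.environ.get("CUDA_HOME"))
print("CUDA_PATH:", os.environ.get("CUDA_PATH"))  # what Windows usually sets

# CUDA_HOME must be set in the same shell you run pip from, e.g.
# Windows (PowerShell):
#   $env:CUDA_HOME = "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4"
# Linux/macOS:
#   export CUDA_HOME=/usr/local/cuda-12.4
```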
🎯 Key points for quick navigation:
🆕 Meta released the SAM 2 model, enhancing computer vision capabilities with real-time video processing.
🎥 SAM 2 supports video analysis, capable of processing up to 44 frames per second for real-time segmentation and tracking.
🛠️ The model allows segmentation with prompts and has been simplified for easier use, now integrating temporal memory.
🚀 SAM 2 is six times faster than its predecessor and offers improved accuracy and efficiency for data annotation.
📈 Meta has released SAM 2 with open-source code and weights under the Apache 2 license, promoting broader accessibility.
📊 A dataset of 51,000 videos and over 600,000 masklets accompanies SAM 2, aiding in the development of custom models.
🎨 The model can be used for various effects and applications, including real-time video effects and creative annotations.
💻 Example notebooks provided demonstrate how to use SAM 2 for accurate segmentation and tracking in both images and videos.
Made with HARPA AI
Wow, an AI Apache helicopter!?
I hope the GIMP team can utilize this
Yeah, you could imagine it doing some nice things in that app.
Meta beating up OpenAI:
Everyone: "Pleaaaaaase stoooop! He's already unalive!" 😂
Sam, help me with an approach. I want to use RAG for a different type of task. Rather than building a knowledge base, I want it to be able to treat documents as distinct reports, and do compare-and-contrast, search-across-docs type of stuff. What type of pipeline should I follow? Most RAG pipelines are built to make a knowledge base, not to compare a lot of documents with precise output.
You can use metadata to keep their identities separate as different reports, etc. What else do you want to do with them? You can use query rewriting to get and compare info between the reports.
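Something like this rough, framework-agnostic sketch (all names here are hypothetical placeholders; swap the toy word-overlap scoring for your embedding search and send the final prompt to your LLM):

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    report_id: str  # metadata that keeps each report's identity separate

def retrieve(chunks, query, report_id, top_k=3):
    # Toy scoring by word overlap; replace with embedding similarity.
    scored = [
        (sum(w in c.text.lower() for w in query.lower().split()), c)
        for c in chunks if c.report_id == report_id  # metadata filter
    ]
    return [c for score, c in sorted(scored, key=lambda x: -x[0])[:top_k]]

def compare_reports(chunks, question, report_a, report_b):
    # Query rewriting: ask the same question of each report separately,
    # then hand both contexts to the LLM for the actual comparison.
    ctx_a = retrieve(chunks, question, report_a)
    ctx_b = retrieve(chunks, question, report_b)
    return (
        f"Question: {question}\n"
        f"Context from {report_a}: {[c.text for c in ctx_a]}\n"
        f"Context from {report_b}: {[c.text for c in ctx_b]}\n"
        "Compare and contrast the two reports on this question."
    )

chunks = [
    Chunk("Revenue grew 12% in Q2.", "report_2023"),
    Chunk("Revenue grew 8% in Q2.", "report_2024"),
]
print(compare_reports(chunks, "revenue growth", "report_2023", "report_2024"))
```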
Ask it to rank or rate instead of compare, if possible. You'll get better results.
Police are very interested in this, considering the masses of video material they have from public places. Laws are very bad about this, especially in the USA, with cellphone wiretapping by Stingray and other systems.
This is certainly true!
The police already have technology similar to SAM 2, now this will be available to all of us. 😉
I tried, but it wasn't compatible with my beastly phone 😂
My Android software experience is perfectly customized
Meta releases its 2024 second-quarter results tomorrow. Sounds like they put a *lot* of money 💰 into 405B and SAM2 training... Open-source licensing all of this is maybe their way to balance that 😅
Yeah, it would be interesting if they broke out the costs of these models.
Meta slaughtering OpenAI😂
I don't like the fact that you're basically marketing this and not reviewing it. You're not pointing out the glaring issues that you can see in the video, like the ball not being tracked, or the eyeball of the bird flickering in and out.
Because that's easy to fix, silly
Jab at OpenAI, look what they named it lol