You need to go and watch Minority Report right away, Matt! Not recognising Tom Cruise in that scene with the floating blue screens is sacrilegious! Minority Report is an absolute classic!
MiniMax is awesome. It shows how much more important it is that an AI is affordable or even free. I already made an awesome project for myself with MiniMax that would be impossible with Runway; it would cost me more than a hundred dollars or so. It's so important to be able to rerun prompts without paying a dollar or whatever every time. I really hope it stays free for some more time.
What I like about this generator is that the videos don't look like slow motion. That means it has to generate many more frames compared to other generators.
Is it possible to make my grandpa's old photo move? Like, I wanna see my grandpa again. If I could make a video of when he was younger, my mother would be so happy. They never took video of my grandpa because he was so shy and refused. We only have photos of him.
@@mari-2-2-2 what a sweet thing to do. I concur - Kling or RW. I've found my best results were from Kling, but it's super easy to try both without spending any money. 😊
We need something of this quality in the open source space. Super high quality video generators shouldn't be limited to the companies that make them and kept locked behind their walls, but that's pretty much all we've seen so far with this stuff.
I wonder if Sora is still just a dream because of the interview where they couldn't say whether YouTube content was part of the training set, which caused Google to do some behind-the-scenes audit and force OpenAI to come clean and pay for the costs of scraping YouTube. Or perhaps they keep getting that "Alignment" error when they try to push it out. I think it's a closed-door review because of the training data.
Imagine if AI would set up a project for e.g. Unreal Engine 5 with textures and models and let you render it, so the consistency would be 100% but the idea, models, etc. would be served up for you.
WTF! Those guys with the tesseract do look like the same person! I thought the same before you mentioned it. There must be a real YouTuber out there who does tesseract content!
12:25 Kinda obvious why many vids appear in slower motion when you think about it. A vid with a basic prompt (fewer instructions) tends to be generated in slower motion to save compute power while still satisfying the 'creator'. The total length of the vid can then also be longer. Try it yourself: take a slow-motion vid and add much more detail to the prompt - voila!
I bet it is image-to-video under the hood. It would probably use text-to-image first, then maybe generate another image, do some lineart to get composition, interpolate the lineart images, and maybe take one of the middle interpolated frames to create a cleaned-up in-between image for longer videos. Then it would use the first image for character consistency and style, generate the video, and reduce the flicker. It could also be an initial image, then a second image with character consistency and style from the first, and then something like ToonCrafter to interpolate between them. Something like that. I can't see it just training off videos and then generating with diffusion over the entire video.
That one with the hologram screen is literally Tom Cruise from the film Minority Report. Then the one right after it is Tom Cruise from the film Oblivion.
This is only my opinion, but I expect that with the mermaid clip and other complex-physics clips the processing required might be much higher. So, to save money, time, or resources, they produce shorter clips and slow them down to meet the length requested. Just a thought; I definitely have not looked into this aspect.
Sora has become the "IMPOSSIBLE BURGER" (vegan) of the AI text-to-video models: you can wait forever for it to get released near you, or instead simply buy a third-party brand of vegan burger that is on par, or at least close enough.
HUGE Thanks to LTX Studio for Sponsoring today's video! Check em out! bit.ly/LTXmattvidpro
Bro, this is nothing big but I was the person who generated the video at 16:32 Lol! While watching your video and that one came up it kinda had me shook because I generated it some days ago and never shared it publicly anywhere! Came back to the website and realized they picked that video to feature it on the 'Featured' tab Lol.
👋 hi
@@NCLDMRdude this was literally one of my fav gens I saw from the model all day
As good as these are, I'm waiting until computing power becomes affordable and good enough for average folks like me to download these types of generators to create uncensored content on our own. I do not want to pay for access to a "website" that decides what I am allowed to create. Makes no sense.... If I'm paying money, I want to use the product as I please.
On a side note, I'm glad you put "sponsored" in your summary instead of pretending it wasn't sponsored. I still watched the entire video.
Nothing irks viewers more than content creators pretending the video they are showing is NOT something sponsored..... especially when the content comes across as sponsored.
Damn, openai really missed the boat
Those who don't learn from the DALL·E 2 / Stable Diffusion 1.4 debacle are doomed to repeat it.
A response was here 🚩. Deleted by YouTube. 🤐
They either don't have the tech.
Or it's absolutely beyond anything out right now and they're not worried.
Yeah, for all we know, it makes this look like caveman stuff. Maybe the processing power to let the masses use it simply isn't there yet. It's like, yeah, it works, but not for 1.7 million people at the same time.
@@puffytheangel483 No GPUs? No excuses. Price is the mechanism that reconciles supply and demand. Release Sora, make it expensive, and let users judge whether it is worth that price.
Sora has become a Greek myth. An object that has been talked about through the ages. Sora is the Ark of the Covenant.
Yo hahaha 🤣
if by ages you mean 6 months
@@esimpson2751 everyone knows the universe began 3 months ago.
“6 months” is a myth, doesn’t exist.
It's because of the US elections. Their CTO, Mira Murati, has mentioned it before. The losing party would like to put the blame on a new technology and cause negative PR.
Yeah, that's how impatient these man babies are. @@esimpson2751
AI knows Tom Cruise would not need a helmet in space. 😂
lol for real
He does his own stunts lol
Insert joke about Cruise really being an alien because of his beliefs.
Because his head is that inflated?
@@Elwaves2925 he's short enough to be a little green man.
That Tom Cruise looked real. The effects, not so much. We are really getting close to an explosion of content.
Think of where we were a year ago... now picture a year from now 🎉🎉
Disagree. People are going to get sick of this fakery.🤮
Hollywood's toast
@@ThomasJDavis Disagree. This is a passing fad.
We don't need an explosion of content. We need curation of quality.
BRO WHAT?!?!? 🔥
Hey Matt, @8:00 the correct pronunciation is Hai - Luo - A.I. (Sounds like "high" + "Lu" + "Awe" (from AWEsome) + A.I.) Hope it'll help for your future videos! Cheers! :)
Thanks for the info!
@@MattVidPro Anytime! 😊👍🏻 Thanks for always keeping us up to date with AI news!
did you not realize that was Tom Cruise at 2:00? lol
no lol
Yeah that was weird AF lmao. Imagine he has this massive knowledge gap like he's never heard of Tom Cruise or "Minority Report"
At 2:38 the guy on the left was Tom Cruise too! lol
@@llmtime2178 I’ve heard of both and ive def seen tom cruise movies but tbh not a huge celeb guy 😅😅
@@MattVidPro based, if tom cruise didnt pay you why give him free exposure
I think the reason why most of them are in slowmo is because they were mainly trained on stock footage, which for the most part is either drone shots over landscapes or people doing stuff in slow motion.
@@markusps3248 true. It’s definitely the data not the architecture
Yeah, it appears this model has much more movies and series in its training data, and it shows. I'm sure US-based companies could achieve the same level of quality but are held back by copyright laws, public scrutiny, etc... All of this is just more evidence of the economic clusterfuck that we are full-throttle heading into.
That also explains why it only happens with photorealistic footage and not with the 2D/3D animated videos.
I think it is from the training data too, but also think it's because it keeps it more stable by not having to do wild motions between frames.
Man, I'm never going to live this whole Tom Cruise thing down.
You need to watch Minority Report to atone for these sins
@@saikanonojutsu With that quality we may be able to generate a "Minority Report 2.0, the Re-Reportening". :)
Man, that Zombie walking up to the door was GREAT
I will have to say I was playing with the MiniMax, but for some reason I did not need a phone number to sign in. I was having a play and I must admit I am blown away with this one. Waiting for the renders takes time, but it's well worth it, and it's also free. With careful text prompts you can do some awesome stuff. I was in the middle of making a video on this page, then I saw yours posted. I will still finish the video and spread the word more. :)
People are so funny talking about how they have to wait for it to render. I mean, do we realize what this means? Even if we had to wait a week for it to render, it would still be worth it and awesome!
Sora was ahead of everyone, and then the West's anti-AI forces got in the way and OAI got spooked by potential lawsuits coming from training on YouTube videos. They probably went back to the drawing board to re-train the model without YouTube videos and have been struggling - which is why later Sora generations don't look as good. Chinese AI video companies, on the other hand, just don't care. This was predicted a year+ ago: that anti-AI movements in the US and EU would only cause China and other countries to advance beyond our stuff, not stop it.
This is by far the best model for abstract and artistically styled/fantastical prompts. Both Kling and Runway turn anything without much training data into an amorphous blob with explosions and smoke pretty quickly. This MiniMax model is the best for highly imaginative prompts, which is the fun of AI video generators anyway.
Yeah, AND it's free. It is the king.
I thought the Muppet was talking in Kermit voice, then realized it was your voice over. Bro u missed your calling. Please be the next Kermit.
😂😂 bro I swear I sound nothing like kermit
Why does anyone still mention Sora today? An AI that was announced over half a year ago (which in AI time means a century) and delivered nothing for public use. Meanwhile there are dozens of video generators coming close to those few cherry-picked demos published back in those ancient times. If they don't want to miss the train, they really should hurry, because the DALL-E times are long over.
Does anyone use DALL-E any more? It must be way behind the times now. As for Sora, I doubt it will ever be released. It was used to reel in more paying users of GPT-4; that, I guess, was its real purpose.
You hear comments like yours every single time, and then one month later OpenAI releases something and is the biggest player again. It's always been the same so far.
@@vomm when did they last release something. I mean, actually release something for all paying users? Unless people have shares in the company or work for them, I don't understand why people are so precious about companies.
@@cbnewham5633 4o-mini is huge for developers utilizing the API. Having a multimodal model so inexpensive is amazing.
At work I absolutely abuse the 4o-mini API, now that it's so cheap. I left open-interpreter running during lunch yesterday working on some projects, and in 45 minutes of constant work I had it create the framework for a Microsoft Teams bot to assist my team, write some Python code to analyze machine downtime, and create some data visualization graphs for a PowerPoint, and all that cost me 40 cents.
That's a pretty monumental shift for those that have the skills to take advantage of it.
interpreter --model gpt-4o-mini --os --v -y is one of the most powerful terminal commands I've ever used. --os gives it complete access to your operating system (only use it in VMs), and --v lets it see what it's doing, and -y is saying yes to everything it asks you for permission to do, making it completely automated.
It's literally like my own personal Jr. Dev assistant, that is on-call 24/7 and works for pennies lol.
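(For anyone checking the math: at gpt-4o-mini's launch pricing of roughly $0.15 per million input tokens and $0.60 per million output tokens, 40 cents of agent work really is on the order of a couple million tokens. A quick sanity-check sketch - the token counts below are made-up illustrative numbers, not my actual usage, and the rates change, so check OpenAI's pricing page:)

```python
# Rough cost estimate for gpt-4o-mini API usage.
# Rates are the launch prices per 1M tokens (subject to change).

def gpt4o_mini_cost_usd(input_tokens: int, output_tokens: int,
                        in_rate: float = 0.15, out_rate: float = 0.60) -> float:
    """Return the approximate USD cost of a batch of API calls."""
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# Hypothetical 45-minute agent session: ~2M tokens in, ~170k tokens out.
print(round(gpt4o_mini_cost_usd(2_000_000, 170_000), 2))
```

So a chat-heavy agent loop that re-sends its context every turn burns mostly *input* tokens, which is exactly where 4o-mini is cheapest.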
It was for pumping stock prices. Meanwhile they lobby for regulatory capture and romance Hollywood executives for exclusive use.
So I believe by 2026 we will have 4K videos with no mistakes, generated very fast. Cool 👌
I think 2025
@@Aldraz yeah, or maybe November or December 2024. And paid, fast - maybe in 2 months, in October.
Hey Matt! Idk if you're aware of this, but there is an AI ad going around with your AI voice. It's on a channel called
"Notd Network"
The moment AI video can do 5 or 10 minutes, it will be insane.
It's able to do infinitely long videos; it would just cost a lot of money and wouldn't be consistent, so they have these artificial limits, if I understand correctly. With each iteration and improvement they are likely gonna make it more stable and longer.
@@Aldraz I was also referring to 5 to 10 min of quality and consistent video.
The tesseract changing shape when it rotates actually makes sense from our 3D perspective. I don't know if it was intentional or the AI was having some issues keeping the object shape consistent, but it was very cool nonetheless.
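(The shape change really is what a projection predicts. A toy sketch - rotating a unit tesseract in the x-w plane and perspective-projecting it into 3D, with a made-up camera distance of my own choosing - shows the projected silhouette genuinely growing and shrinking as it turns:)

```python
import math
from itertools import product

def tesseract_vertices():
    """The 16 corners of a unit tesseract, as (x, y, z, w) tuples."""
    return [tuple(p) for p in product((-1.0, 1.0), repeat=4)]

def rotate_xw(v, theta):
    """Rotate a 4D point in the x-w plane by angle theta."""
    x, y, z, w = v
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * w, y, z, s * x + c * w)

def project_to_3d(v, d=3.0):
    """Perspective-project a 4D point into 3D from a viewpoint at w = d."""
    x, y, z, w = v
    k = d / (d - w)  # points with larger w appear bigger, like the "inner cube"
    return (k * x, k * y, k * z)

def projected_span(theta):
    """Width of the projected tesseract along x after rotating by theta."""
    xs = [project_to_3d(rotate_xw(v, theta))[0] for v in tesseract_vertices()]
    return max(xs) - min(xs)

# The projected 3D shape changes as the object rotates in 4D,
# even though the 4D object itself is rigid.
print(round(projected_span(0.0), 3), round(projected_span(math.pi / 4), 3))
```

The span shrinks from 3.0 at no rotation to 2√2 at 45°, so a rigid 4D rotation really does look like the object morphing from our 3D point of view.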
21:08 "Man forced to eat 100 of the worlds most sour lemons without taking a break".
26:34 Mites and Isopods
The Tom Cruise one made me realize that a time could come when real actors barely exist compared to today. People would just generate virtual actors. One very popular job that would almost disappear.
7:03 I see that Pink Floyd profile picture. You have good taste in music.
17:08 This scene is perfect for a movie about zombies, or even The Walking Dead. It just fits the plot perfectly, and you don't even have to change it or explain it. You just put a character inside a house looking through his door, the next scene is this one, and that's it. It fits.
@mattvidpro The reason for the random slow motion is to save compute/time, since it generates less movement per frame per second. We observe this phenomenon in most video generators as well. It's easy to configure; the same goes for setting the failure rate higher to cost you more tokens, etc.
See, I thought the MiniMax 4D cube generation was a lot more interesting. It looked like a physical 4D object that rotated in 4D space as he moved it around, whereas Runway's was movie SFX: "ooh fancy hypercube thingy". Could also be down to their interpretations of the prompt though.
29:09 Guy in the boat = Open AI, monster = third party content
26:00 - A video generator that could do really great never-seen-before video game footage based on prompts would need to be trained on a million years of gameplay footage of all kinds, from retro games to modern FPSs and platformers and other genres; but that should be some of the easiest data to collect en masse.
Thanks Matt for this drinking game! Every time you say "Darn" we drink! 😅
So "shocking" it's "creepy"! LOL !
Don’t turn my speech into a drinking game because you might not make it out alive
15:20 - That's an Ultraman, not a monster lol
MattVidPro AI, can't wait to see what you'll create next
So the moment is arriving when we can say: Sora is officially... over.
Impressive breakdown of the AI video generator! The quality and consistency are remarkable, especially considering how far this tech has come in such a short time. Exciting to see where this will lead next! Keep up the great work.
The robots are so cool with that new site. Sci Fi Shorts for sure. Giant ape movie short on Luma is still extending. I love that extending option
17:45 Some random grandma will believe this
To be fair, I would have thought the geek YouTuber guy showing the new iPhone video was real if I'd seen it on social media.
Or 99% of all TikTok users
Super cool. The quality is getting impressive, but this stuff will stay largely useless until there's WAY more control over camera, character consistency, etc. What we really need is video inpainting and, most importantly, VIDEO TO VIDEO.
It's pretty much a demo of a prototype; they don't even have a paid service yet, if I understand correctly, and this is definitely gonna cost something. You have to look at it as a preview of what is gonna come. And if it gets a bit better, you won't need anything special; you will just type more text with all the scenes you want, is my guess. There doesn't have to be that much more to the interface at first.
Or you can just grab a camera and not make AI slop instead
@@S.K._CGI y'all know cameras are tech too, right? I don't know why filmmakers in particular want to get stuck in a moment. When digital cameras came out they had the exact same reaction: they keep focusing on where it is and not where it's going. They don't see the trends. Obviously, at this current rate we'll be able to make full-blown Hollywood films from our computers in probably 2 years or less. In fact, I think AI is going to change our whole definition of media. A lot of our media definitions come from limitations of technology, but it's been going on so long you think it's some kind of set-in-stone reality, like books or movies or songs or albums or podcasts. Actually, from what I've observed, it's not the technology that's going to be the issue; it's going to be the slow response of humans adapting to it. Even now I meet people who have never heard of AI; then I meet a few who have heard of it but aren't using it; then an even smaller group has actually tried it out; and an even smaller group of that has actually tried to do anything meaningful with it. I'm really amazed at this phenomenon, which I think happens whenever there's a new technological revolution: people just sort of go into denial mode and ignore it.
@@adancewithgod Time and again you AI bros keep bringing up these absolutely terrible examples, like cameras or computer graphics and similar inventions in history, which are nowhere near the same type of tech as generative AI. The thing is that those examples didn't fundamentally change everything you can do in the creative or production field. Yes, the camera was a new tool that can capture reality, and it got better and better over decades, but even today, can the most expensive and newest camera imitate and generate everything, both visually and audibly? Of course not. Neither could computers or things like synthesizers; all those inventions had specific roles and purposes that made a lot of things possible and easier. In the case of gen AI it's a whole different story. While all those examples you like to mention were actual tools with their own purpose, I can't simply perceive gen AI as a tool, because it's more like an actual factory that can automatically, without any human intervention, produce and industrialize (digitally, at the moment) anything that is art and creativity, as well as information and many other things. So you tell me: do you honestly think that just because an AI can produce Hollywood-level movies for you at home, it makes you a great storyteller or a big-name director? You didn't direct anything and didn't tell your story, because it was generated with zero help from you, the same way anyone else "can".
INSANE quality, makes you feel how close we are to fully made AI movies
Best overall generator imo and the time to generate isn't even that bad. Luma could take hours for a generation when it first dropped, ~5 minutes is pretty manageable.
I'm so glad I wasn't eating during this video lmaooo. This video generator is insane tho. I'm sure OpenAI is still improving Sora and it's reached even greater heights than when they last showed it off, but they might wanna hurry up regardless.
Looks like we're slowly getting there with the AI video stuff. Impressive indeed.
MiniMax's understanding of complex prompting is up there with DALL-E. It's able to follow complex prompts fairly well compared to other video models. If they add image-to-video, it will stay goated for a while. And the fact that it can use licensed characters from video games is insane. But as long as they keep that private, it shouldn't be a problem with copyright laws.
12:40 - Notice how quick her eye movements and blinking are compared to the slow-motion effect of the water? It's interesting: it implies the AI model applies slow motion only to some things (in this case the sea surface), since her eyes are moving at normal speed.
2:37 Dude! Tom Cruise in a Mass Effect movie! Of course why didn't I think of that!
...naw, I actually forgot that it was an AI video with the Robot One. That looks real enough, I'd believe it. I think it was like, high quality VFX or something.
always keeping us up to date with the latest and greatest. keep it up matt
Upon further most scholarly research I can confirm mini max is indeed minmaxing the AI scene with minimal effort and maximal results, best coherence of any model
well, looks like I don't have to wait for Runway Gen 3 to be fully released! 🎉
The guy? You mean the Tom Cruise look-alike.
"A TV show from the 19th century"???? You young people 🙄🙂
I meant a TV show that takes place in the 19th century, like Bridgerton or whatever. But yeah, I did not notice Tom Cruise 😂
@@MattVidPro phew! People mistakenly thinking TV started in the 19th century is not due until 2084. 😄
You need to go and watch Minority Report right away, Matt! Not recognising Tom Cruise in that scene with the floating blue screens is sacrilegious! Minority Report is an absolute classic!
Is it that good? I gotta add it to my watch list
Great video! ...but dude, warnings for unsavory clips should come BEFORE the clip shows 😂
You are just going to ignore that 2:10 is Tom Cruise?
VEO dream screen coming to UA-cam shorts soon.
The one with the iPhone is scary, I really would have a hard time guessing it was AI if I just saw that by itself
MiniMax is awesome. It shows how much more important it is for an AI to be affordable or even free. I've already made an awesome project for myself with MiniMax that would have been impossible with Runway; it would have cost me more than a hundred dollars or so. Being able to rerun prompts without paying each time is so important. I really hope it stays free a while longer.
pretty good .. this will be on device by next year
What I like about this generator is that the videos don't look like slow motion. That means it has to generate many more frames compared to other generators.
The fact that the AI knew about a tesseract is really scary!
Is it possible to make my grandpa's old photo move? like, I wanna see my grandpa again. If I could make a video of when he was younger, my mother would be so happy. They never took video of my grandpa cuz he was so shy and refused. We got only photos of him.
Try this in either runway gen 3 or kling AI!
OMG! Thank you for replying! I will try my best to do good! You're gonna make all my family cry. Thank you so much!
@@mari-2-2-2 what a sweet thing to do. I concur - Kling or RW. I've found my best results were from Kling, but it's super easy to try both without spending any money. 😊
A wholesome comment and a helpful reply by the creator and it's appreciation by the op.
Just good stuff.
@@FriendlyVimanam I love this community 😁
We need something of this quality in the open source space, super high quality video generators shouldn't only be limited to the companies that make them & keep them locked behind their walls. But that's pretty much all we've seen so far with this stuff.
Compute costs. It's a miracle we have any free large AI models at all.
If you're willing to put up the funding I'll be very appreciative. Maybe wait until after the election, though.
I wonder if Sora is still just a dream because of that interview where they couldn't say whether UA-cam content was part of the training set. Maybe that caused Google to do a behind-the-scenes audit and force OpenAI to tell them and pay for the cost of scraping YouTube. Or perhaps they keep getting that "alignment" error when they try to push it out. I think it's a closed-door review because of the training data.
Why can't I use Minimax on my phone?
Imagine if AI would set up a project for, e.g., Unreal Engine 5 with textures and models and let you render it, so the consistency would be 100% but the idea, models, etc. would be served up for you.
Now we need character consistency between generations, and multiple characters moving on screen.
Do you remember how, when the giant squid was finally filmed, a larger species of squid was already known? Hope it's not the same with Sora.
You can set VLC so it doesn't display the video title every time it plays
WTF! Those guys with the tesseract do look like the same person! I thought the same, before you mentioned it. There must be a real youtuber out there related to tesseracts!
Write your prompts to MiniMax in Chinese. It will follow everything you say much better.
Of all the AI video generators I've tested so far, this is easily the best.
12:25 It's kinda obvious why many vids appear in slower motion when you think about it. A vid with a basic prompt (fewer instructions) tends to be generated in slow motion to save compute power while still satisfying the 'creator', and the vid can then also run longer overall. Try it yourself: take a slow-motion vid and add much more detail to the prompt - voila!
Hey Matt! It’s my birthday today!
Happy birthday.
@@cbnewham5633 thanks!
I bet it is image-to-video. It would probably use text-to-image, then maybe generate another image, maybe do some lineart to get the composition, then interpolate between the lineart images, and maybe even take one of the middle interpolated frames and clean it up as an in-between image for longer videos. Then it would use the first image for character consistency and style, generate the video, and reduce all the flicker. It could also be an initial image, then a second image with character consistency and style taken from the first, and then something like ToonCrafter to interpolate between them. Something like that. I can't see it just training off videos and then generating with diffusion over the entire video.
Circa 2:38 Even more concerning is the fact you didn't seem to know the "characters" in the two early clips were played by Tom Cruise. ;)
People are already creating movie shorts using Runway and Kling. Have you seen the one titled "Meowstromo"? The film Alien, but with cats.
The one with the hologram screen is literally Tom Cruise from the film Minority Report. Then the one right after that is Tom Cruise from the film Oblivion.
Damn, that's incredible.
The Tom Cruise clip was probably trying to recreate the computer from Minority Report.
So how do you get consistent results to create a short movie? And how do you remove the watermark?
8:10 I think the phrase you want is "no holds barred..."
Ur right! My b.
I am not giving the Chinese my phone number. lol
You never ordered Chinese food? :P
@@mediastreamview9528 nope, not via phone
19th century man looks like Chris Pratt
Hey Matt!!
How'd you get it in English??
He's running it in Google Chrome which has translation features on foreign pages.
Matt, you didn't recognize Tom Cruise? Lol, "he".
That clip is mimicking a scene right out of one of his movies called "Minority Report"
I've tried it a few days ago. It's dope
That burger eating lol.
Anything with those weird squiggly lines should be viewed skeptically
This is only my opinion, but I expect the mermaid clip and other complex-physics clips require much more processing. So, for the sake of money, time, or resources, they produce shorter clips and slow them down to meet the requested length. Just a thought; I definitely haven't looked into this aspect.
It's incredible
I heard about this a few days ago but haven't been able to get the page to load.
*2024* is the year of AI videos and music.
*2025* will be the year of AI video games… and that’s when things are gonna get crazy.
Sora has become the "IMPOSSIBLE BURGER" (vegan) of AI text-to-video models: you can wait forever for it to be released near you, or simply buy another brand of vegan burger that's on par, or at least close enough.
MY GOSH DUDE, MY PUBES ARE CLEANER THAN YOUR HAIR.
Love new intro
They waited so long to release Sora that their product was outdated before its release, despite being way ahead of its time.
This model is the best at nonhuman fantasy characters I've seen so far.
More like a Voxel-Art style kitten walking 😃
"I won't show you my phone number on screen ..... but I'll give it to China."
I am not spending a cent on these until I see which one is the actual best of the best
Interdimensional cable box soon.
Try more details next time, since I've noticed that commonality.