Thanks to our sponsor Abacus AI. Try their new ChatLLM platform here: chatllm.abacus.ai/?token=aisearch
Still prefer Runway Gen 3 video to video
Yay. Local video. Awesome!
What I love is that you keep testing the same prompts from previous videos, so we can get a clearer idea of what each generator can do, thanks man! I personally use Kling for very satisfying results, but sometimes I'll give minimax and genmo a try.
thanks for sharing!
I've tested minimax; out of 100 videos generated, I only got 10 correct results. The subscription fee is not worth it, around 10 correct results for one month (130 gens = 10/month in my case). I can't afford the unlimited plan, it's so expensive for a hobby.
It's true that I also find it easier to understand with repeated prompts. Thank you
8:38 Will Smith must be a professional magician making that fork disappear this smooth🔥
As smooth as Jada makes his dignity disappear.
I was ahead for once, it seems! Great to see people discovering Mochi 1, we have had a blast with it on the research front!
Don't forget Mochi-Edit, which is basically "Runway's Act-One at home"
Great video!
I can’t use anything that doesn’t have ‘Image to Video’ though. I don’t find ‘Text to Video’ all that useful. So, I’m hoping these tools will offer ‘Image to Video’ soon.
i'm sure this will be added soon
CogVideo has image to video.
I cant wait till we get image to video for this
Genmo's giant sea creature attacking the ship was very cool, BUT there were no splashes when the sea creature came out of the water like the huge splash that happens randomly at the beginning.
damn that ship wreck by genmo was so awesome
You're a legend. You have no idea how much footwork and research you've saved a lot of people, including myself.
Thanks!
he just searches google lol
The panda falling over is actually hilarious 😂
As usual, very good video. Thank you bro
Thanks for watching!
@@theAIsearch Thank you sir.
I love your teaching from Nigeria
Thanks
29:15 Genmo and Minimax are just Po from Kung Fu Panda. Especially the belt in the Minimax video. Copyright discussion incoming 😅
lol
Man, once they can do 10 seconds to 1 minute plus character reference for consistency, everyone's making a film
What is holding everyone back from making films is not the length (since you can link 5-10 second clips by using the last frame to reprompt the next 5-10 seconds), or the character reference, since you can easily create a LoRA + ControlNets, which will get you very consistent characters in different poses. The big issue everyone has is creating MULTIPLE consistent characters in the same shot. Almost impossible to do without the use of external editing/face swapping.
@gnoel5722 Well, consistent characters don't exist; I have seen the ControlNet scam for over two years. The biggest problem with AI artwork is that it hasn't been trained on the rules of art, anatomy & shape theory, just the final illustration.
Also, we need a Mixture of Experts for art generation: background, characters, environments, motion, color. And far more than 8B - 12B models.
For films, there are tricks to generate one character per scene at a time and then merge them, but 5 seconds isn't enough
I'm guessing we can make movies by around 2033 but maybe and hopefully I'm wrong and I will definitely be using it
There are no 10-second to 1-minute clips in films. It's usually half a second to 2 seconds. The longest are around four seconds. Extremely rarely longer than that
It's interesting to see open-source video generation tools!
Thank you for the video, I really enjoy your content & your funny comments :)
Regarding the boy at 25:26, I think he's either shocked or emotionless, not scared
Well, I think Mochi needs more improvement in quality and facial expressions.
thanks for watching!
It will be something when it can be run on my own PC. Excellent!
I can't wait for video generation that can make text as well as image generation can
yeah, it still doesn't do very well. i usually make the text on the image to start, then just animate the image
@@bobicus App?
Text is the Ai-chilles heel 😏
6:45 left puppy's reaction is realistic 😮
Check out Kaye AI's cat music video, hers is far more realistic looking than those.
You just changed my entire month... subscribed! I wish i had this last month before I released my video. Thank you!
Thanks for the sub!
Thanks for the video! I'd like to ask you what generator you recommend to do a full sequence of shots with a consistent character, without using image2video. Is there any model that can take a photo reference of the likeness (face and wardrobe) of my character and keep the consistency throughout several shots?
Excellent analysis. I couldn’t find the site test link in the description.
Are you looking for this? www.genmo.ai/play
A programmer would be able to argue that no puppies is still a group of zero items.
If you're checking length, it's still 1-based; only the index is 0-based
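The length-vs-index distinction in that reply, as a minimal Python sketch (the puppy names are made up, of course):

```python
# A group of zero puppies is still a valid (empty) list.
puppies = []
print(len(puppies))  # counting gives 0 items

# len() counts items starting from 1; indexing starts at 0.
litter = ["puppy_a", "puppy_b", "puppy_c"]
print(len(litter))               # 3 items in the group
print(litter[0])                 # first item lives at index 0
print(litter[len(litter) - 1])   # last item lives at length - 1
```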
I gotta say, I would love for these projects to provide easy-to-install options for these things. I get that it's not their focus, as it's often stuff like research projects made public, but still.
this is why big tech proprietary slop keeps winning, way easier integration
the amount of VRAM needed to run this thing locally must be insane
apparently it requires at least 4 H100 GPUs
Yeah, the minimum is 64GB, and
That’s just the minimum
Nope, I made a tree within an hour with AI. It learned and made it perfect very fast. I had an Nvidia 1650. I always keep up with the latest in their technology, but the best AI is the one you save in File Explorer ;).
for Mochi, some people have made it work with only 12GB
Yes and no. Runway Gen 3's video to video is really amazing; none of the other video AI companies have made anything like it yet
wow im so hyped for this stage of humanity, Interdimensional cable HERE WE COME
YEAAAAHHHH
I want to know more about the plumbus
@@BearFulmer I need to know more about the poop eating society
@drcluck9573 what yal ain't about that snake jazz
Would anyone know what type of specs one would want in a PC or laptop if starting to learn about this type of ai software/websites? Thanks
27:50 - Wow. The bottom left is a movie without any work. Game-changer
the lawsuits are going to be interesting! actors and movie studios are not going to take this lying down 😅
7:50 That dog kneading the dough made me laugh 😂
13:38 a princess moonwalking in front of a disastrous monster is its own way of art
Dumb question: when you say it is open source, does it mean it will be available for us to use locally, for example, in the future?
If you have ComfyUI you can run Mochi now.
Genmo is 2 generations a day.
Yea, I just tried to use it. My first two generations failed, then it says you have run out of credits for the day. 😂
Would be good if you put labels next to the generations to show which output comes from which model.
20:57 Why does the generation by Minimax look like Marie Schrader from Breaking Bad?
thanks, can it be used with consumer grade gpus?
Every day, the case for building a 4x3090 desktop gets more appealing.
Thank you, great job. I like the first AI video generator. I think they will get better as they learn. Are there any AI video generators with sound?
Ummm.. when I checked my download of Mochi 1 (Genmo) from their website, it did 960p, 30fps, and 4,000kbps. So maybe it has been updated on their website but not in the open-source model. They could have just adjusted the size, but it looks about 960p to me.
interesting. thanks for sharing
What would it take to fine-tune this model? Could you create a tutorial about fine-tuning video models?
Are there video generators that are good for *adding* new things to existing video?
Making a video of a princess running, and then uploading that to an editor *adding* a dragon following her in another prompt, might be an easier way to incrementally create the desired result.
waiting for a 4-bit quant, or at least 6-bit, or I might go ahead and do the quantization myself
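For anyone wondering what a 4-bit quant actually does, here's a hypothetical numpy sketch of symmetric round-to-nearest quantization with a single per-tensor scale (real schemes like GGUF's block formats use per-block scales and smarter rounding, so treat this only as an illustration):

```python
import numpy as np

def quantize_4bit(weights: np.ndarray):
    """Map float weights into the int4 range [-8, 7] with one scale."""
    # Choose the scale so the largest-magnitude weight maps to 7.
    scale = np.abs(weights).max() / 7.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int4 codes."""
    return q.astype(np.float32) * scale

w = np.random.randn(1024).astype(np.float32)
q, scale = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale)

# 4 bits per weight instead of 16/32 gives a ~4x-8x memory saving,
# at the cost of a rounding error bounded by half the scale.
print(float(np.abs(w - w_hat).max()))
```

The memory win is why a quantized Mochi could drop from multi-H100 territory toward a single consumer GPU; the trade-off is the rounding error printed at the end.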
Will either of these be optimized for Apple's new M4? What are the minimum specs to run them?
I miss the good old days when DALLE was all the rage.
Bro, I am not sure if you can see this. Wanted to say thanks for all in one AI information. After watching your video I will be starting to use AI in my videos ❤. Thanks again
Another great one! Thank you!!!
you're welcome!
How does abacus compare to their competitor ninja chat?
Do you know what the minimum requirements are to use Allegro? To use Mochi 1 locally it's completely crazy, the minimum is 80GB of GPU memory
what a time to be alive
The first Will Smith didn't throw the fork down. A second fork was always present in the spaghetti and the fork he is actually holding just disappears.
You didn't mention Pyramid Flow, also recently released
cant wait for ai to learn to code perfectly
Dude, at this point I have almost zero doubts we live in a simulation. Which is a good thing. I think. Maybe.
How do I use minimax? What is their website link?
I love you because you give to us the free ones. ^_^
Thanks for sharing!
I'm really looking forward to AI VR video generation.
Is there any quantized model available?
With all these AI companies, what good is a 3 to 5-second video?
"Zombies in station" = Cash Jordan thumbnail !?!!
😅😂
9:26 Kling just straight up made Will smith Chinese. 😂
17:47 This looks like Itadori and Fushiguro walking on the left side???
I was looking for this comment and yes I thought the same thing
Okay, so I am getting confused. Like every day there is a new AI update. Firstly, I am struggling with the clickbait right now and I am not up to date: so, which is currently the leading LLM overall, for problem solving and picture creation? And which is the leading free AI video generator?
these are free but with limited generations
LLM- GPT-4o and Claude 3.5 Sonnet (new)
Video- Minimax and Kling
AI image creation- Ideogram 2.0 and Flux 1.1
I look forward to the moment I will just have to enter a synopsis as a prompt and get a full length feature film in return.
can't log in, it says try later
i really want to see someone make a 1 hour and 30 min movie
Finally abacus ai 😂🙌
Genmo is not free; they tell me I can only generate two videos a day, unfortunately
Allegro on that nightmare fuel with the Will Smith spaghetti
I get it, for free stuff it's amazing. (Remember, Midjourney was once free.) Yet I'd say it's a little bit bold to say ANY of these beats Runway's Gen 3, going by the fact that the videos being showcased are always the best. Right from the beginning: the monk walking like the floor is electrocuting him LOL, or that swimmer girl's "broken leg", the pouring of liquid while the thing on the table is depleting, and so many other things. Most probably not up to date with Runway Gen 3.
The only thing one can say is that Runway's customer service is a CRIME. Besides, no image to video or video to video? Naaah.
And talking of that, Midjourney is working on their video model now that can generate artistic stuff. THAT would be a game changer for artists
Like many of these, excluding the amazing Kling, it seems very good for cutesy animals. Not so good for multiple people in a near shot that is more than just a person's head, or a distant view. I guess it's down to training.
The only issue with these new open-source models is that as they get better, they also demand a lot more power to run locally. When you typically need many generations to get a decent shot, waiting an hour for each take is not an option unless you have a lot of time on your hands and are very patient. Affordable consumer AI hardware at this stage is still lagging behind.
what is the discord to use those models?
The boy looks like he started the fire
what kind of NASA I mean Elon Musk supercomputer do I need at home?
It runs on the Casio scientific calculator
@@WongEthan-ge6pq😂😂😂
If this is what I'm thinking it is, I think 4 H100
Unfortunately, it says it requires 4 Nvidia H100 GPUs to run it. That's some pretty immense hardware requirements, especially for just 480p videos.
Man, I wonder how long it'll be before either consumer hardware or video generators become a thing that normal people can feasibly run.
@@CaidicusProductions Someone made an fp8 quant, I believe, and it can run within 20GB of VRAM
When Black Forest Labs release their video generator, it will kill all the other services.
What is the reason?
AI SEARCH = on the edge !
👍😀
Interesting that the princess running away from the dragon doesn't work. There should be so many 'person running away from something' reference videos to learn from
MiniMax looks the best so far.
13:17 She's not walking; look closely, she's moonwalking backwards!
Could you show us how to download and run it locally, for all us noobs eager to learn?
moshi and mochi? next it's momoshi and mochi...lol
can Genmo do image to video? please answer
Purz tested the smaller Mochi 1.
Unfortunately not available in Pinokio.
minimax still the best
I hope that they'll release video2video in the near future.
Open source?
It seems to be the best one in most cases, except it didn’t do so well at generating anime characters.
@@chariots8x230 I won't say that; you can do many things with an open-source model, like using it with Flux, and many other things
@@Sujal-ow7cj no
What's the VRAM requirement for Genmo?
is Mochi img2vid?
Thanks for the information
true, Genmo videos are damn good, but they require so much compute power that a normal PC can't handle it
6:05 "and Kling.. everything is more fluid" oh yeah, and casually not mentioning that in the Kling version it's not even a unicorn, which is a third of the prompt's essence
Kling is blowing me away!
lol what a joke! 3 times in a row "video failed"! No credits left...
Girl swimming legs looked broken in the intro
Good comparison but I see minimax still the best ai generator on the platform
does local installation allow NSFW generation? 😁
Why u gotta know 🤨
@IPutFishInAWashingMachine why not
I think you can, since we can train it ourselves
I would imagine it can; however, you basically need a supercomputer to run Mochi 1: 4 H100 GPUs. Good luck buying those.
@@IPutFishInAWashingMachine Kinda need that NSFW. Just read the forum; you can make NSFW with it using a plugin
what happened to sora?
sora? whats that?
@@theAIsearch He means Sora from OpenAI
tutorial on how to install please!!!
We not escaping the false allegations with this one boys 🔥
amazing bear Kling
What is good, Allegro! Fuck yeaa
Wait, I meant Mochi, yeaaa!