4:00 - Tip: you can use images from other tools. Add the image you want as an image-to-image prompt, write "duplicate" or something to that effect as your prompt, and set the Img2Img scale to 60. Basically, Leonardo will replicate the image 1:1 and then voilà, you can make a video with it now.
Oh! Good call, we'll have to try that. Thanks, Christopher!
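For anyone who would rather script this tip than click through the web UI, here is a minimal sketch of the same idea against Leonardo's REST API. To be clear, the endpoint path, the field names (init_image_id, init_strength), and the LEONARDO_API_KEY environment variable are assumptions based on the public API docs, so check the current documentation before relying on them.

```python
import os
import requests

# Assumptions: the endpoint path and field names below mirror Leonardo's public
# REST API; verify against the current docs before using. LEONARDO_API_KEY is a
# hypothetical environment variable holding your key.
API_KEY = os.environ["LEONARDO_API_KEY"]
BASE_URL = "https://cloud.leonardo.ai/api/rest/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}


def duplicate_image(init_image_id: str) -> dict:
    """Ask Leonardo to replicate an uploaded image nearly 1:1 so it can be animated."""
    payload = {
        "prompt": "duplicate",           # the do-nothing prompt from the tip above
        "init_image_id": init_image_id,  # ID returned when you uploaded the source image
        "init_strength": 0.6,            # roughly the "Img2Img scale 60" from the web UI
        "num_images": 1,
    }
    resp = requests.post(f"{BASE_URL}/generations", headers=HEADERS, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()  # contains a generation ID you can poll for the finished image
```

From there you would poll the generation for the finished image and feed it into the motion/video feature, the same as shown in the video.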
There is just so much here to dive into and everyday more. Wow. Thanks for keeping us updated.
This channel is so informative. You are certainly doing a wonderful thing for the Ai community, thank you!
Loved the video!
1. Pika Labs
2. Leonardo
3. Runway
I'm sure he's right, so I may not bother voting. 2 is Leonardo for sure.
We'll see...!
Yup, I was right with number 2!!!
That was a lot of work in 25 minutes! Thank you!!
You're welcome! Glad you enjoyed the episode!
1. Pika
2. Leonardo
3. Runway
Loved the video
correct
I wish Midjourney would bring *‘consistent characters’* and *‘inpainting’* first. Also, I wish we could *‘pose multiple consistent characters together’* in scenes that we generate in Midjourney. Having multiple consistent characters interacting with each other in an image is really important for storytelling, and that includes AI movies.
If Midjourney accomplishes all of these features first, AI video would become much more useful when released. It won’t take long before people can start using it to make full-length AI movies.
That would be a HUGE game changer if consistent characters were a feature in MJ! That would certainly make prompting easier and require fewer generations :)
1. Leonardo
2. Runway
3. Pika
Great video bro.
4:14
1.Runway
2.Leonardo
3. Pika
Thanks for the video. Genmo should also be on the list.
wrong
Wow! Just amazing… Thanks for your outstanding work.
Thanks for the video. One of the most complete narration of the AI creative scenario. 🙏🏻 love it
1. Pika
2. Leonardo
3. Runway
Thanks to you for watching and engaging with us! We'll reveal the order of the answers soon :)
In Brazil, pika means something else.
For that Runway generated clip around the 8:20 mark - one thing that really throws it off is that it only applied motion to the plant in the very foreground. The plants in the background, which would still be responding to the same wind effect, are frozen still.
Fair point and good catch! Thanks for watching, Edward!
4:23 Photo #2 Leonardo winner
Amazing Caleb!!! With so many AI tools, this year will be a blast ✨❤
Thanks for watching and commenting. We agree, 2024 is going to be crazy!
1. Runway
2. Leonardo
3. Pika Labs
Great video
1. Pika labs
2. Leonardo
3. Runway
Thanks for all the excellent content.
Glad you enjoy it!
Awesome video - Watching from the UK.
I think it's:
1 - Pika Labs
2 - Leonardo
3 - Runway
We appreciate you watching all the way from the UK ! :)
congrats on the forbes shout out!
👍👍👍
1. Pika Labs
2. Leonardo
3. Runway
Amazing news!!!! Can't wait to use all the new stuff, thank you 🙏🤓
Thanks for watching and supporting. Love having you in the community :)
😘✌@@curiousrefuge
1 Pika labs, 2 Leonardo, 3 Runway. Excellent content 😉
Love the videos
1. Pika labs
2. Leonardo
3. Runway
👏👏❤️🔥👌👊
Good job Stache.
Thanks!....and the stache stays! :)
Thank you for the interesting analysis!
Glad it was helpful!
Thank you! It was nice!
4:15 In the Webshow contest, I believe the first one was made by Pika Labs. The second one was made by Leonardo, and the third one was created by Runway.
Good guess...but we'll reveal soon! :)
1 - Pika Labs, 2 - Runway, 3 - Leonardo!
V6 is amazing, but I can't make a masterpiece without "inpainting"; it's my favorite tool. So I switched back to V5 for now, until V6 is more stable and polished.
I know, right? It would be so useful! Being able to imprint text alone would be game changing for MJ. That and consistency, but maybe we'll get that eventually.
I expect the inpainting will come when it comes out of the testing phase.
18:44 "Domo AI" looks like a win. ty for all the shares today 🥨👍
Too easy
1. Pika
2. Leo
3. Runway
😘
It's so easy to make full songs with Suno. And they have stereo sound now. It's crazy good!
Suno is pretty amazing, isn't it?
They got an update too! I think I'm gonna get it and Leo this month! @@curiousrefuge
Leonardo video generation is really cool
4:05 in that order: Leonardo, Pika Labs, Runway.
Shoutout from a Brazilian fan of your work (and I also had to re-subscribe to your channel). Thanks for the vid.
Good guess :). We appreciate you watching, Turo!
Until we have completely decentralized tools, we are just one scandal away from every one of these platforms being nerfed, useless, and generic. I feel that everyone is working on the wrong problem trying to be the Photoshop of AI, when it will be shut down overnight at some point. We need to make it impossible to shut down, with decentralized, crowd-sourced solutions only.
Interesting perspective. That would certainly be a disruptive event, to say the least. Luckily, we believe 2024 is going to be the year where a lot of this starts to be settled (at least the beginning), as these tools are advancing at such a fast rate.
Agreed. I don't like Midjourney or their business model. Their system and community benefit from advancements in other Stable Diffusion-based projects, but they give nothing back. They also use their community's image generations to train and improve only their closed-source model. None of the improvements they implement positively affect open source projects in any way, afaik. In fact, MJ likely takes users away from FOSS projects, where those users could be contributing to the scene, since their service requires much less effort to get good results. And as you say, if anything happens to this company, all this progress will disappear.
Very cool and interesting.. unsure about a lot of the terms you use such as “artist”, and “filmmaker” though.
We understand your perspective but respectfully disagree. We appreciate you watching!
Did I miss it? What was the lip sync tool?
Thank you for amazing content!
We appreciate you watching! Which lip sync tool are you referring to?
@@curiousrefuge It was DreamTalk, found it in the description below. I should have looked first before asking. Thanks!
pika labs
Thanks for this! Great stuff! I am planning a project involving nature and fantasy/underground creatures, based on a classic story. It doesn't necessarily need to be rich in detail, but it should be rich in emotion and movement, possibly dance-like. Is there any particular tool you might recommend?
Wonderful! We always recommend using Midjourney to craft your images (typically regardless of style).
1. Pika Labs
2. Leonardo
3. Runway ML
To me the 2nd is the winner (I think it is Leonardo). 1 Runway, 2 Leonardo, 3 Pika.
Hello! Thank you so much for choosing my movie The End of the World in the Movies of the Week! 🤩🔥☠
Of course! Thank you for creating such great content! We're happy to share it!
❤@@curiousrefuge
Hello, I just discovered your channel and saw a few beautiful shots you generated with Runway, and I was wondering why, when I tried the free version of Runway (yesterday), I had such awful results (I tried image+prompt to video and video to video).
When trying image+prompt, my character was literally just blowing up each time the camera moved, and it was impossible for me to make him perform any action without him becoming entirely distorted.
When trying video to video, my results were never realistic, even though I asked for a realistic style in the prompt, and they were really bad quality (the original video was a person smoking, and the result always looked like 240p plasticine stop motion...).
I was wondering if any of that changes with the paid license, or if maybe Runway is the wrong choice for me (considering that I want to use AI video generation to animate Runway pictures, with the final goal of using those to create social media content to promote my music).
Thank you for reading!
We help people improve their experience using AI tools to make AI movies. You can learn more at curiousrefuge.com. These tools sometimes take a lot of testing before getting the best results.
Pika, Leonardo, Runway
1. Pika 2. Leonardo 3. Runway
Hey, I want to purchase a few generative AI tools. Could you please recommend which ones I should go forward with? I need them for AI filmmaking and AI imagery.
We'd recommend you check out Midjourney for image generation and Runway/Pika for animating those images.
@@curiousrefuge Runway or Pika? I can't afford both yet.
Thanks to the author, now my videos will be even cooler. ❤❤❤
Glad we helped you out here. Thanks for watching!
Hey there,
1. Runway
2. Leonardo
3. Pika
1. Runway
2. Leonardo
3.Pika
There is a way to animate images created in Midjourney in Leonardo: go to Image Generation, then Image Guidance, upload your image, and set Strength to the max. That way you are telling Leonardo not to be creative and to use it as a template. I have tried a few and it works.
Thanks for the tip!
1. Runway
2. Pika
3. Leonardo
From what I can see, Leonardo has the best quality overall. It has some issues and you don't have quite as fine-tuned control, but for image quality overall, go with Leonardo. I think that's what I'm gonna do while I wait for LTX and Sora. Oh, and without looking, the 2nd one is Leo ;-) It does that little rotation thing which I love, and it seems the sharpest of the 3.
That's not a bad choice at all. We think Midjourney's quality is about 20% better, but Leonardo is a slick tool. A great choice for sure!
Thanks for the specific updates. God bless.
Our pleasure! Thanks for watching!
1. Pika 2. Runway 3. Leo
1. Pika 2. Leonardo 3. Runway :)
4:10 Pika, Leonardo, RunwayML
I really need help with this. I am creating an AI video, basically war-based, so which image-to-video AI should I go with and buy... Runway, Pika, or any other? Also, do Stable Diffusion & ComfyUI do image to video? Need suggestions.
Both Pika/Runway would be excellent choices. Unless you are highly technical with a love for tinkering, I would not suggest starting with ComfyUI.
Number two is the winner.
Winner winner chicken dinner!
1. Pika labs
2. Leonardo
3. Runway
Leonardo #2
Replica is a good voice cloning tool for voice acting.
Good call - thanks for adding that alternative and we appreciate you watching!
Who is the winner? It is not written in the description...
Coming soon!
Which AI is the best for making WW2 pictures and realistic and animated videos?
Uncensored.
Uncensored would require you to run your own model. But right now, the best would be to start with Midjourney.
@@curiousrefuge Thanks! Do you prefer Leonardo?
1. Pika
2. Leonardo
3. Runway
1.Runway 2.Leonardo 3.Pika - without knowing Leonardo :P
For the 3 clips - No1 is Runway, No 2 is Leonardo, No3 is Pika Labs. TY for the video!
typical Runway generation with that first second filled with high level anxiety, dread and hope where you're like "please don't morph and melt into something as random as blurry, please please pleaaaaaase"... and spoilers, it does.
Hahahahah! We know that feeling all too well. Generate more, rinse and repeat :)
@@curiousrefuge *AND* $$$, and $$$, and oh wait: $$$
Pika
Runway
Leonardo
Decoherence does the real-time prompting as well.
How much does it cost? Are these AI sites free, or do they have a trial cost?
Are there courses that teach you how to make films with AI? $$$$
The AI Filmmaking course costs $749 USD and the AI Advertising course costs $699 USD.
Where was the showdown? There wasn't really any comparison.
Showdown was simply a comparison :)
Why are all these shots just static shots with a little interpolated motion?
That's basically how the AI animations start once they are generated (short clips with little motion). For demo purposes, it also makes it a bit easier for us to show off the differences.
Well, I think 1: Leonardo, 2: Pika, 3: Runway??? 🤪
#1 pika #2 leonardo #3 runway
My dream app would be a latent diffusion model that runs on iPad for use with an Apple Pencil that interacts by voice. Imagine drawing a crude cat and then telling the AI engine, “this is a cat with orange fur and a stubby tail” It would generate that and let you instruct changes by pointing or circling areas with the pencil. Or add new parts of the image. All the time maintaining a conversation with the user as if it were an artistic assistant.
Yes that would be awesome! I imagine after a few years all of these tools (including the best of the best) would easily be used on an iPad/tablet.
No1 is pika, no2 is Leonardo no3 is runway
They may have improved. However, they still look like cartoons and not like real life.
The prompts are the issue. I use complex prompts and negative prompts in Leonardo, without PhotoReal, and consistently get results that do look real.
@@georgew2014 Good to know. Thanks.
I too like Leonardo.
Fair point - there is still a lot of room for improvement here. We appreciate you watching!
The heartbreaking part is that Midjourney is paid 😢
True - many of these tools have costly servers so they need payment to stay running :)
The WW2 clip is nice… but modern-day carriers and planes don't fit in a Pacific WW2 scenario… or was it called "The Final Countdown"? 😉😇
Hahah, good catch! Either way, thanks for watching :)
22:57 As a media composer I'm a little nervous about the Suno stuff, but I'm also excited because there are so many amazing things you can use this for as music tools. I find it interesting that it can do pop music so well, but for orchestral fill music it is still god-awful. I won't get too comfortable though; I'm sure that's just because they haven't tried to train it properly yet, and I'm going to get a shock at some point when it takes another leap forward.
Check out Stable Audio. It seems to have a lot more dynamic abilities in terms of orchestral music, sound effects and creative electronic music vibes.
Fair point - it will be interesting to see if it can improve the orchestra fill by the end of the year. That one is tough as it requires super clean audio of each instrument to work well. But once it does, that will be awesome!
I say the winner is Leonardo AI.
Would it really be voice acting if it gets to the point where you can have your voice automated, without even having to give it much source material? You wouldn't be acting with your voice, you'd just be lending your voice and having the intelligence do the "acting" for you.
I think that basically already exists. I'll talk about it next week. But there are services bringing people back from the dead, like dead relatives, and using it for therapy/healing etc.
At some point in the future, your complete video could just be the result of prompting: "Create an AI news video with a presenter in a green pullover and glasses. Add some examples while the presenter is talking. Create that video weekly and upload it to YouTube." And no one would know whether you are a real human or an AI generation.
I don't get how it's a good thing that people won't be able to tell the difference; you could use it for all sorts of malicious purposes. And you'll be even less able to trust what you see online than you already are right now.
It's true - it is scary what people can do with such powerful tools. Even Photoshop has been used for all types of malicious purposes for over a decade now (and AI won't be any different). For us, we are focusing on, and trying to foster, an environment of all the good, interesting, and artistic things people can do. We hope there will be measures taken to help reduce the amount of malicious actions taken with these tools (whether by the tool makers, the law, or other means).
It's going to be a crazy 2024 for sure! It's hard to tell the difference in photos and soon video
We find Pika is great, but the videos need to be 4 seconds.
Hello! How are you doing? Can we collaborate? My film Ressusciter just won several awards in Europe! I am close to San Diego. Let me know if we can arrange a screening, or else!
Things are going too fast 😭😭😭
It's crazy, right?! 2024 is going to be insane!
I had pretty good animation in 3-second videos from Pika in February, but just last night it was horrible. The character movement was spastic, completely unusable. No matter how many times I tried, I could not get it to generate words in the correct orientation (i.e. not mirror-reversed with the letters backwards); it just could not understand. So I really don't know how anyone gets much out of AI-generated videos. To get length you need to edit tons of 3-second clips together. It is a long, long process and not as easy as all these YT videos claim. Often an AI-generated dog will have 5 legs, or legs coming out of the butt, or two legs on one side and a stub where the second leg should have been. Mostly AI is a lot of hype and little more.
Very interesting - we've certainly heard some people are having a hard time controlling Pika's new motion. We'll do some more testing!
So, who's the winner in the video generation? Number 2 looks the best.
In the description :)
Hopefully this makes acting as an industry redundant
Why would that be a good thing? Also, all of this doesn't mean that people won't still be able to act and star in movies; perhaps it'll be indie movies, but still. And not everyone who acts is a major film star; there are plenty of people who work as background or minor characters and who don't make as much money as major film stars do.
I personally don't know that I'd be as engrossed in a performance that's generated as in a performance from a human. Sure, you could say the technology is cool, but it's not like someone had to prepare for the role. The ability of someone, however well known they might be, to fully embody a character is very impressive and fun to watch.
That's an interesting thought - I'm not entirely sure if that's actually the goal though. Remember - these tools are trained on existing data (including human performances) and therefore so much of what's beautiful about what's generated is that...it's partly human in its inception! So rather than replacing, we're looking for ways to expand.
We agree with much of your sentiment. At the end of the day, no matter how impressive the tools get, it's human emotions and storytelling that make content good.
Would appreciate it if you offered one free AI session.
Maybe have a "bring a friend" day?
Good suggestion - we'll consider this!
Pika deforms so much. ;(
There is certainly a lot of criticism on this point - I imagine all the tools will get significantly better and smoother throughout the year.
I'm testing open source and having better results. You could talk more about them. @@curiousrefuge
Soon, film production will be coding. Hollywood, learn to code.
Okay but isn't coding also being automated?
@@robo_t So, what then? Learn landscaping? What?
Hehe, I see what you did there. But remember - no matter how accessible the tools get, the core human touch, like storytelling and empathy, is what makes the difference between bad and good films!
@@curiousrefuge Oh, so you believe Hollywood will get into it too. If so, that would level the playing field between the indies and the big guys. Let the most creative and pleasing win. And that would kill woke.
The dinging repetitive piano note you have subtlety playing under your entire video is slowly driving me insane. Please stop.
Ahhhh we'll think about that for next time. Thanks for the comment and we'll try something different.
Looks like a cheap version of bad movies with videogame-like effects. I don't like the real ones, so why would I watch the bad-looking AI? When you are able to produce something like Saving Private Ryan, let us know. But it will never happen, because you need real artists and pioneers to create that kind of movie, not some generic program.
Anyone can make a film, but not everyone is a filmmaker
There are certainly lots of technical issues and we have quite some time before we can get to "Saving Private Ryan" (which is...not just a movie, but one of the best movies of all time). What we are celebrating is what creators are doing with the current limitations and enjoying how fast the technology is helping empower people to create their own stories at a better and faster rate.
Looks terrible? Lmfao. Even worse when you know how it's made.
We recognize there is still a long way to go. We think 2024 is going to be a big year for the improvements!
1. Pika
2. Leonardo
3. Runway
1.Runway
2.Leonardo
3.Pika
1. Runway
2. Leonardo
3. Pika
1. Pika
2. Leonardo
3. Runway
1. Runway
2. Leonardo
3. Pika
1. Pika
2. Leonardo
3. Runway