Great update! Tutorials are always welcome. Mind-bending technologies.
Yes, would love the Runway with Live Portrait tutorial! Great stuff as always 👍🏽👍🏽
I realized that for animating images to video in Runway, it's better to use Gen 3 Alpha Turbo. It seems that option is specifically designed for animating images, whereas Gen 3 Alpha is geared toward generating videos without images. I found it challenging to animate an image properly with Gen 3 Alpha, but with Gen 3 Alpha Turbo the images animate well.
Interesting! We'll run some tests and see what we find.
Thank you for featuring SPACE VETS! Looking forward to the office hours with our art director @TheButchersBrain
Heck yeah. So excited to hear from y'all!
I make AI films using Kling AI. Dream Machine and Runway are great, but please do check out the Pro version of Kling (not the Standard version)! It's great with movement. Anyways - love your videos!
Thanks, we'll do some more testing!
Please make a video on Live Portrait/Runway ML
Will do :)
@@curiousrefuge Is it ready somewhere?
This tech is coming at us FAST!
Agreed!
The prompt I'm waiting for any video platform to handle is "a woman opens a door, walks into a room, and sits down at a table." In terms of narrative storytelling, that would change EVERYTHING! Or "a car pulls up, the door opens, and a man steps out," or "a man walks over to a car, opens the door, gets in, and closes the door." In short: opening doors and sitting down. This is the glue that ties any narrative story together.
To be fair, in an average movie your prompt would cover three or more shots (first a close-up of the door opening, then a slightly wider shot from another angle as the woman walks into the room with her upper half in frame, and after that a third shot from yet another angle where you can see both her and the table).
So if you write your prompts shot by shot, I'd guess you can do that already.
@@blubbblubbblubbish You could, but it might seem awkward or overly complicated unless your objective is to draw attention to the action. Generally, though, the idea is for it to flow naturally as connective tissue between shots.
I agree. My biggest hurdle in communicating with the AI is this sort of seemingly common-sense thing.
Why would you need that? Just get the man to morph into the car…
@@DJBrewsAI 😂🤣
Another great episode! Thanks for bringing us such great news. I'd love to know more about the breakdown/how-to you mentioned toward the end, using Runway to create a video from an image and adding custom lip sync. I'd really like to learn how to create a longer video based on my own AI-generated character, with lip sync and customized, realistic expressions.
We appreciate you watching and we'll work on that breakdown :)
Yeah, a video on that process would be great 😊 Cheers
I don’t care about ‘Text to Video’ though. I’m only interested in how it does with ‘Image to Video’, because that’s what will allow me to have control and achieve consistency in storytelling.
Unless it can maintain a style through a text prompt.
@@Scramblefred2399 Personally, I’d like to have more control over the scene composition and maintain consistent characters, so I would only use ‘Image to Video’ for that. I don’t think I’d find much use for ‘Text to Video.’ Even if it could maintain a style, I’d probably only use it on a few rare occasions.
I agree with you. Fact is, I really don't think we're there yet. I'd say another year at least, probably more, to see significant improvements. At the current state of the technology we might get lucky with a shot or two, but overall there's still too much weirdness going on. At this stage the weirdness and imperfections are far too distracting for an audience that just wants to be captivated by the story.
We totally understand - however Text to Video does have its use cases!
@@veilofreality That's where better post-production processing comes into play. I've seen the pros edit out or buff away the weird, aberrant behaviors these generators currently produce. I'm not that good yet myself, but like I say, the pros have those skills.
I think the really interesting application of Luma AI, in contrast to Runway, is the morphing between two images. If you have two largely similar images that show the movement of several objects, you can easily create an impressive result. Without morphing, working from only one image, it is only possible to control one object. I am currently experimenting with the movement of four people in a 3D scene between two images. I hope to be able to show the test on my channel soon.
Would be interested to see your progress. Until now I've only had appalling 😱 results when I tried to control multiple characters inside a shot.. 😅
@@veilofreality After 24 hours, the generation on Luma AI is complete. I will publish it in a video on my channel soon.
Can't wait until we have things like 3-4 keyframes that we can use for more control :)
SPACE VETS let’s go! We’ve come a long way since the SEERESS!
Also, Luma has a loop feature which Runway Gen-3 doesn't have. It's been a useful feature for my last few vids..
Different tools for different scenarios 🤔
@@NGW_Studio Wow, your channel is awesome. That music is great.
Shame about your subs, you just got one from me ;).
I have another channel where I seem to be getting the subs. I'm on 1,040 now, but the view count is pretty low.
@@NGW_Studio You should definitely do tutorials on how you get great results using these AI music generators. I've kinda dabbled but never gotten anything out of it. I spent years in a band doing MIDI programming, samples, and bass.
Gen-3 is the one with the fewest options and possibilities at the moment, but they're adding them gradually. I'm on the unlimited tier in Gen-3 and the premier tier in Kling, and Kling is definitely the better option right now in terms of quality, options, prompt comprehension, etc.
@@armondtanz Yes, thank you, I appreciate your support! I plan on making tutorials in the future; right now I'm too addicted to making random videos lol
@@NGW_Studio You gotta watch it. The YouTube algorithm hates random videos.
It loves the same format just churned out 😭😭😭
I'd love the tutorial! Thanks for everything!
Our pleasure!
Has anyone else noticed that Gen-3 generations are becoming increasingly slow? I was once able to generate 4 at a time but now I seem limited to just 2 and it's not uncommon for them to sit in the queue for 5 minutes or longer. I emailed Runway about it and they said the "excitement" over the new Turbo mode (which is useless for anyone who wants quality) is impacting the system. They also suggested I use my credits if I want better performance, which at 10 credits per second would go pretty quick. Anyone else experiencing a slowdown?
Heck yeah, the free version is slow as molasses, and apparently they currently don't let free users use Gen-3 at all (it's pro only), just Gen-2 😢
It could be that since they've been showing new features like the recent scene extension, the servers are being held back a bit?
@@curiousrefuge Yeah, no lie, ultimately the name of the game is pay to play 😅. Everyone's trying to make a buck, and there's no such thing as a free lunch. I'll have to pony up soon 'cause the speed is killing my film project output.
Luma is worse: 17 hours in the queue, and now they've put me on a never-ending waitlist.
@@Earthball_Productions Is that for the free version? I have the unlimited plan with Runway but the only thing that seems unlimited these days is the wait time.
Luma being able to render a sign that says "love you" and have a car come down a road that fast just blew my mind.
Wow, I haven't used this in a while. Can't wait to try it out with the new features.
Us, too!
Great update this week. You must be incredibly busy, but if you could walk us through the workflow that Eccentrism uses, it would be appreciated enormously.
WHEN both feel like following the prompt (and reference image), Runway keeps geometry and anatomy a bit more accurate. However, when they don't... at least Luma creates usable content, whereas Runway just spits out something nowhere close to the prompt, draining credits, and draining them fast too. I have the feeling Luma is simply more fun and better able to understand the reference image, cooking up something based on it that still cuts the mustard. Runway, when it decides to work, really creates awesome content. In fact both do. For me, it's Luma.
Shame they don't have an unlimited plan.
Only psychopaths use text-to-vid. Us sane people will always go image-to-vid.
@@armondtanz Actually, I have never tried that. Would be interesting to know whether people actually use this feature at all. lol
@@protogram You've never used img-to-vid? It's what all the best AI movie creators use. Text-to-vid is hopeless at scene consistency. You can create consistency through MJ, Flux, Mystic, etc.
We appreciate your feedback and the breakdown. It'll be interesting to see how these tools play out over the next few months.
It's great and powerful to have all these tools.
We think so, too!
Yes, would love the Runway with Live Portrait tutorial.
Thanks, we'll work on it!
Yes! Would love a tutorial. Thank you!
We'll work on it!
@@curiousrefuge Looking forward to it!
Sketch 2 Scene is the feature that will be really beneficial.
We think so too!
We need Runway to be better at creating motion from 2D images. It is always trying to make them look realistic. You get some really funky results if you type "2D illustration" in the prompt.
Good point! 2D is tough!
What a fascinating debate! It's amazing to see how Luma 1.5 and Runway's AI video generator are pushing the boundaries of creativity.
These tools are getting better and better each week :)
Wow, that Midjourney online editing is pretty incredible. What does the prompt "cinematic" do, and why do you use it in all your prompts?
It helps steer the style direction of the prompt.
Awesome! Liked and subbed! Can we please have that tutorial? 🙏😁
Do you know the name of the lip-sync software they used for that 2D look?
We'll find out soon :)
@@curiousrefuge So it has been two weeks... Do you know what program they used for the lip sync, by chance?
Hello! Thanks for the interesting video. Where can I see "How to Escape the Matrix"?
Our pleasure! We should have it linked shortly
I'm not sure if you're aware of this, but if you do multiple gens with Runway you often get better results.
Very true! We certainly know this :)
Please create a tutorial showing how to mimic what EccentrismArt did.
Thanks! We'll work on it :)
Hi, I really liked the kids' video you showed. Could you explain to me in more detail how I can contact the filmmakers through Discord?
Space Vets? We'll have office hours with them soon!
Thanks for the video. I just don't get one thing: how did you manage to do txt2vid in Gen-3? I don't have that option at all; if I don't upload an image with the prompt, the generate button is greyed out… What am I missing?
Hmmm, Gen-3 definitely has it. Are you missing credits?
@@curiousrefuge No, I just didn't see I was on Turbo 🤦🏻♂️ Thanks for replying.
(SpongeBob) "Three days later"... literally, that's what I get with Luma: my image-to-video sits in the "queue" for three days, and "three days later" it's ready. This isn't the first, second, or third time, but many times. Yes, I don't have any of the Luma plans, but at least let me know how fast Luma can create my project before I pay for a plan. Taking three days for "free"? I went to Runway without any plan, and even though they cap the limit per day, at least it's less than 10 minutes, and with a paid plan it's less than 20 seconds.
True! This issue has been reported and is getting worse for many people.
Yes please, make that tutorial on RunwayML / Live Portrait :D
Will do :)
Did you just "tear" that pizza off? You must have been hungry. Good vid.
Starving… all pizza manners went right out the window. Haha
For the fire colors: I know that with fireworks, the elements inside cause the different colors. I bet comets would also have different colorations due to their specific elements.
That's true!
I like their branding (dreaming is such a great way to explain AI videos these days), but they seem to be falling behind Kling and Gen-3 quickly. Imma stick with Gen-3.
That’s totally fair. All three are great and have advantages and limitations.
"two-and-a-half-minutes" is, of course, the time when you PAY for Luma labs. The generation time on non-paid accounts is 24-hours+ at the time of this recording. It is also possible that the system has become confused because comets don't catch fire. The tail of a comet is ice sluffing off as the comet heats up. The "fire in the sky" is a meteor or meteorite depending on its destination.
It's true - we've heard of some extremely long wait times for Luma.
Spent the last month making a music video with Runway on their unlimited plan. It was fun, but after so many generations blocked for no reason at all, I've decided to cancel my sub and will try Luma this month. I was able to get fun results with Runway, but until it's open source I can't support their heavy-handed blocking. Many prompts with absolutely no questionable content were blocked.
Been there, done that :) However, I eventually managed to create a few client projects with some brutal stuff!
Totally agree. Runway has absolutely zero consistency in what is blocked and what isn't. I wasn't even trying to make anything inappropriate. Even apart from that, too many times a generation just fails for no reason. It's pretty tiresome.
That is totally fair. Luma or Kling are great tools to use if you keep getting blocked.
All this debate is kind of a moot point. It all boils down to storytelling. AI or not, you will still face the same number of artists who basically have nothing to tell 🤓
Very good point! Good storytelling, above all, is the most important thing.
What is KEEPS's website? I cannot find it.
It's a research white paper per this video.
True! Just a white paper right now
I can't seem to get the 25 free trial credits in MJ. It always asks me to get a paid subscription first.😢
Have you tried any alternatives too?
@@curiousrefuge I've been using SD for several months now, just curious why I can't get Midjourney to work. Do you need to buy a subscription first, before you get the free trials?
I look forward to the AI / Unreal Engine video :)
Us too! :)
3:02 ...there is no spoon
Follow the white rabbit!
Runway is still better for text to video.
Luma 1.5 is better for image to video.
Just my experience.
We appreciate your feedback!
Runway with Live Portrait, please 🙏
Will do! :)
In Runway you have to write down the camera movement first, or else it makes some crap.
True, the camera directions are very important!
In my opinion, Kling AI is better than both.
We love Kling too :)
Try to get anything to give you a goat. Everything always gives me either a sheep or some kind of hybrid. 😞
Haha that’s really funny.
Luma didn't work..... why?! 😭
Check out our latest content for up to date Luma info!
yes do the tutorial
Will do!
Sadly, LTX Studio is far too expensive to use at all. One preview burnt through all my credits.
Oh no! Sorry to hear that :(
interesting
We appreciate you watching!
GG😊
GG to you too!
You're an AI-generated teacher.
Nope but we get that a lot! :)
@@curiousrefuge You have a glitch in your glasses that happens a lot. You're the reason I won't trust what I see, ever again.
There is no way to keep up with these AI apps. It's going to be like this for at least another year or so.
More like 10 years 🤣 We won't be manually filming or animating in 10 years, potentially even 5.
It's true - it's very difficult!
Both of these (and all) platforms have simple prompt nuances in the documentation that you should be taking into consideration. Putting in a noob basic prompt is OK, I guess, to show what the average layman might do on them, but it's just lazy reporting not to actually read their documentation and tailor the prompts to how they're built.
I guess this channel is aimed at casuals, but still.
@@jzwadlo This is not necessarily true. I've collected thousands and thousands of prompts right from the start (literally GBs of screenshots).
Some of the best prompts ever have been three basic words.
I've had this vicious argument before, and it's always with someone who is a non-artist with massive knowledge of computer science.
Maybe you need a little of the latter, but this is aimed more at artists.
I do agree with you that you should follow the camera movement - subject - style structure, etc., but most of that heavy lifting is done in the image generation.
Would be interesting to see your formula.
Fair feedback!
Luma Dream Machine has a long waiting list, weeks of queue time, and it's very expensive and brain-dead. It is the worst AI tool ever.
The long wait is quite frustrating for free users.
MJ looks so primitive compared to Ideogram and Flux. But hey, let MJ keep focusing on all those BS features no one asked for. 😂😂😂
They all have their pros and cons, but MJ clearly has more control
LUMA is trash lol, takes 2-3 hrs to generate videos.
On the free plan, yes, but paid only takes a couple of minutes.
first
Luma Pricing
$0.00 (30 generations per month / Standard priority / Non-commercial use)
$7.99 (70 generations per month / High priority / Non-commercial use)
$23.99 (150 generations per month / High priority / Commercial use / Remove watermark)
$51.99 (310 generations per month / Highest priority / Commercial use / Remove watermark)
$79.99 (480 generations per month / Highest priority / Commercial use / Remove watermark)
$399.99 (2,430 generations per month / Highest priority / Commercial use / Remove watermark)
Take that, Runway!
@@protogram isnt runway 100 bucks for unlimited?
@@armondtanz $76,00 unlimited runway gen3
We appreciate the breakdown!
Hey Curious Refuge, really nice video! I was wondering if I could help you with higher-quality editing in your videos, make highly engaging thumbnails, and help with your overall YouTube strategy and growth! Please let me know what you think.
I'm not sure we need that right now but we appreciate you offering!
If it's not Chinese, it's not AI video.
Haha, interesting perspective!