Which do you like BEST?
none of them, AI is the spawn of satan.
Do you have access to Kling? If so, I'd love to see how Kling compares to Runway and Luma. Also, if you have it, can you give us some instructions on how to get access in the US on an Android phone? Thanks!
If you ran each prompt a dozen times and picked the best to compare, would the results turn out differently?
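The best-of-n question above is easy to reason about with a toy simulation: picking the best of a dozen takes shifts the whole quality distribution upward. Here `generate` and `score` are hypothetical stand-ins for the video model and for human judgment, so this only illustrates the selection step, not real generation quality:

```python
import random

def best_of_n(generate, score, n=12, seed=0):
    """Run a generator n times and keep the highest-scoring result."""
    rng = random.Random(seed)
    results = [generate(rng) for _ in range(n)]
    return max(results, key=score)

# Toy stand-ins: each "generation" is just a random quality in [0, 1].
pick = best_of_n(lambda rng: rng.random(), score=lambda x: x, n=12)
single = best_of_n(lambda rng: rng.random(), score=lambda x: x, n=1)
print(f"best of 12: {pick:.2f}, single draw: {single:.2f}")
```

With a fixed seed the best-of-12 pick is always at least as good as the single draw, which is exactly why cherry-picked comparisons look better than everyday use.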
Just FYI, I have a few free Runway credits, but they cannot be used with Gen-3. It won't even let me select Gen-3 without a paid plan.
Runway for image based prompts, Haiper for text based prompts.
The biggest issue with Runway is the cost. It's very expensive to generate a single 5-second video, and if the video comes out wrong (which has happened to me several times) and you have to "re-roll", it becomes far too expensive. Also, the lack of a source image (for now) hurts Runway, as it's hard to get a consistent look and consistent characters without a source. Luma is my choice so far, but only due to cost.
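To put rough numbers on the re-roll problem: under the commonly cited assumption of about 10 credits per second of Gen-3 video and roughly $0.01 per credit (both figures are assumptions here, not official pricing), the math is a one-liner:

```python
# Back-of-envelope cost of re-rolling, under assumed pricing:
CREDITS_PER_SECOND = 10      # assumption, not official Runway pricing
DOLLARS_PER_CREDIT = 0.01    # assumption, not official Runway pricing

def reroll_cost(seconds, attempts):
    """Total dollars spent generating `attempts` takes of one clip."""
    return seconds * CREDITS_PER_SECOND * DOLLARS_PER_CREDIT * attempts

# One 5-second clip that needs 4 re-rolls before it looks right:
print(f"${reroll_cost(5, 5):.2f}")
```

Even at these modest assumed rates, a handful of re-rolls multiplies the cost of every usable shot.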
For me, at the moment it's horses for courses, from my experience with both.
Runway:
Positives:
Greater realism, more variety, more natural motion.
Negatives:
WAY too expensive. Can't choose input images to aid in generations. Simply cannot make the camera move really fast at all. Can only choose 5 or 10 seconds and that's it...
Luma:
Positives:
Way more value, and way more variety I found: I can make it do warp speed really well, while Runway won't move fast no matter what you type. Lots of flexibility for unlimited video lengths with start and end frames.
Negatives:
Auto-prompt vs. manual is still hit and miss, no matter how well you write a prompt. Generations are a bit slow. Video quality is markedly worse in comparison.
Sora is still miles ahead, and remember, we got those samples about a year ago. I'm sure it's even further ahead now, but I'm stoked to see what best-in-class Midjourney will do!
I have used Luma - it's fun, but it still has some way to go. I used seed images for some interesting and humorous results. For example, I put in a picture of my nephew and wanted Luma to create a video of him putting on sunglasses. Luma gave him three arms to help him put on his sunglasses :) It's just a matter of time before these video generators become seamless. Looking forward to unleashing my creativity.
Runway's image-to-video (Gen 2) can work like Gen 3, but you'd have to include the word "character" if you have one in the scene, and "forest" or "this forest" if you want to keep a forest picture the same; otherwise it will overwrite your image. By including these words, plus "hyper realistic" etc., it will extend your existing scene.
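That keyword trick can be wrapped in a tiny helper. Everything below is illustrative (the function and its defaults are made up for this sketch, not any Runway API); it just shows how the "character" / "this forest" anchors described above would be assembled into a prompt string:

```python
def gen2_prompt(action, has_character=False, setting=None,
                styles=("hyper realistic",)):
    """Build a Gen-2 style prompt that repeats the anchor keywords
    ("character", "this forest", ...) said to help Gen 2 extend an
    uploaded image instead of overwriting it. Illustrative only."""
    parts = []
    if has_character:
        parts.append("the character")
    parts.append(action)
    if setting:
        parts.append(f"in this {setting}, keeping this {setting} the same")
    parts.extend(styles)
    return ", ".join(parts)

print(gen2_prompt("walks toward the camera",
                  has_character=True, setting="forest"))
```

The point is simply that the anchor words are repeated verbatim so the model keeps the referenced elements stable.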
I want to know how these AI tools perform with *‘Image to Video’.* I don’t really care about *‘Text to Video’* because I don’t have much control over the scene with that.
The only one is Luma. You can set a starting image and also an ending image.
Luma allows image-to-video and tends to have better results, as the user has more control over the scene. I have uploaded a few movie trailers created with Luma Labs on my channel. Thanks!
@duavoai I made a whole AI fantasy story with Luma, lol.
Then you want Kling. It currently kicks the butt of every other image to video generator out there.
@cbnewham5633 Isn't Kling only for Chinese users?
I believe Luma is better than Runway for several reasons. First, Luma generates better physics, allowing for a more realistic portrayal of the world and its characters. Additionally, Luma creates more fluid and organic character movements, giving the creations a sense of life and naturalness. While it sometimes produces rather absurd results, I think this is just the beginning. In the future, Luma has immense potential to improve and become an even more effective tool. 🚀✨
But video quality is better in Runway than Luma... I'm not talking about image-to-video.
"Great comparison, AI Samson! I've been considering both Luma and Runway for my projects and this breakdown is super helpful. For those who have used both, which one do you find more intuitive for beginners? Also, any tips on getting the best results from the tool you prefer would be much appreciated!"
Thanks Dophler! They both have their strengths, I think the transitions and image prompting make Luma a more useful tool. But the pure quality of text-to-vid in Runway is insane
@aisamsonreal That's a great breakdown! 🌟 The way Luma handles transitions is pretty slick, isn't it? And yeah, Runway's text-to-video quality is just mind-blowing! 🚀 Has anyone tried mixing tools for different segments of the same project? Would love to hear creative uses!
@i-Dophler I haven't tried Runway yet, but I did subscribe to Luma for 1 month. I was very disappointed: after nearly 150 of my monthly generations I only got about 4 seconds of actually usable content. I found it really hard to prompt, and it would constantly move things I wanted to stay stationary, or move things in a really jerky fashion. Oh, and if you use a character and it decides to spin them round for absolutely no reason, they just morph in the middle of the turn. I liked the concept but might go back after it's had some more time to bake.
@timothywells8589 That sounds super frustrating! I was thinking about trying Luma after seeing some hype, but your experience really makes me pause. It seems like it's still early days for these platforms and they need more fine-tuning. Thanks for sharing your thoughts, super helpful! 😊
Once again, timestamps are not difficult to include and are incredibly useful. Enhance your valuable production with beneficial and necessary improvements such as timestamps and all the mentioned links.
I appreciate the comparison of the two leading products. However, the most important thing for me, in the end, is if they are useable or not in a project. In most of the cases you showed, the answer is "no". (To be fair, the percentage rate for Midjourney can be similarly bad in some scenarios. However, it is much faster and cheaper to generate many alternatives.)
I never get anything useable. I don't know why I would pay, plus it's a lot of time wasted.
100%
I feel they both do things a bit differently; however, Runway has been keeping up with a ton of my prompts. It does help that, as a creative partner, I get unlimited generations, but I have been with them since the beta, and the leap from Gen 2 to Gen 3 in text-to-video is incredible. I do miss img2vid, which I hear may be coming in the near future; can't wait for that. Don't forget they had one of the best vid2vid models in Gen 1, which allowed us to add an image for style transfer; if they add an upgraded version of this, it's going to be hard to compete with them. The closest I've seen to this was Haiper. I like a lot of the motion in Luma, but the biggest drawback for me was that the quality wasn't always there. I'm sure they'll upgrade that eventually, so it's exciting to see the next round of upgraded models drop. For now, Runway has been my jam. Don't forget that once you generate your character inside of Gen 3, you get the option to add lip sync to the video clip. I tested this against Live Portrait and got some pretty good results in Gen 3. Granted, I only used the HF demo for Live Portrait.
It's expensive for an amateur trying to piece together a learning curve and create an actual short video.
Much of the AI-generated content is not fit for purpose, or needs to be tweaked and regenerated.
The costs, or the credit system, add up quick in my personal experience. Cool video!! I'm still watching 'em bro, following along.
Runway clearly won most rounds, but some were draws and some Luma won. This review was a bit biased.
9:22 Luma's old man has six fingers on his left hand.
Runway has restrictions on violence, nudity, gore, etc. - not suitable to compete with blockbuster action films unless you only use it to generate backgrounds.
As an ad filmmaker I do all my storyboards in Runway... it's the best so far.
Hey, how do you use Runway for advertising?
Can you upload your own pre-AI images?
Could you mention how censored the models are?
edit: PromeAi has uncensored video generation from images as well as a comprehensive image toolset and renewable free credits each month
I hope both Runway and Luma will add a lip-syncing feature, so that we’ll be able to create scenes for AI movies.
Cool comparison ! Thanks a lot !
My pleasure!
Overall Runway seemed to perform better, imo. And if Runway's Gen-3 is Alpha, imagine what later iterations are going to bring!
Great comparison.
In your next comparison video of online AI tools/generators, try to include how easy it is, if possible at all, to cancel a monthly/yearly subscription. AI sites that don't allow this should be considered fraudulent and get a very bad score on that aspect.
Amazing video! Very interesting. And I loved the new scenery.
Thank you very much!
I cannot agree with you on Luma. I have generated four videos and all have failed. These were not complex, but they weren't simplistic "See Johnny run with his dog" prompts either. The results are janky and the extended video is a mess. I have yet to try others, but I doubt anything will change, as they are all still in their infancy.
Luma is the winner; image-to-video is so important. Use Leonardo or Midjourney to create your composition, and bingo, you can't beat that.
Hello I am a new subscriber, thank you for all these very interesting videos. I noticed that you hardly ever blinked, I conclude that you must be an AI yourself hahaha thanks for the videos ;)
Thanks for the sub! I can confirm, I am 100% computer.
Do any of them offer lip-syncing for making short films or music videos?
Runway does
A Midjourney sref-type function for video would be great.
And a cref 🎉
7:01 the puppy lost one leg in Luma.
If they could import your video clips, stitch and cut them with smooth, seamless transitions, and keep timecode so the export could be applied to the original 4K library rather than proxies, that would literally be the next killer app.
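The stitching half of this wish is already scriptable today with ffmpeg's concat demuxer, assuming the clips share codec, resolution and timebase (timecode handling beyond that is left to the editing software). A minimal sketch that builds, but deliberately does not run, the command:

```python
from pathlib import Path

def ffmpeg_concat_command(clips, output, list_file="clips.txt"):
    """Write an ffmpeg concat-demuxer list file and return the command
    that would stitch the clips without re-encoding (clips must share
    codec, resolution and timebase). The command is returned, not run."""
    lines = "".join(f"file '{c}'\n" for c in clips)
    Path(list_file).write_text(lines, encoding="utf-8")
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", list_file, "-c", "copy", str(output)]

cmd = ffmpeg_concat_command(["shot1.mp4", "shot2.mp4"], "cut.mp4")
print(" ".join(cmd))
```

Because `-c copy` avoids re-encoding, the stitch is lossless, which matters when the exports are meant to conform back to a 4K master.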
Hey, love the film. I’ve rendered my images in Leonardo, which generator would you recommend to bring these to life and piece these together?
Very interesting, I also use both.
Very informative, thanks for sharing 🙂
Glad it was helpful!
Solicitation... I'm fed up! ... Bye!
great job
Can you please review LTX Studio?
It seems that the comparison is deliberately unfair.
LUMA DM performs much worse in text2video mode compared to its own pic2video. In my experience this is true, and I have seen virtually no examples of good text2video work from DM. It is clear that you need to compare not the worst, but the best possible implementation.
As for the videos from Sora, they presented exclusively advertising examples, selected from thousands of unsuccessful ones, and even with obvious post-processing. Comparing this with a single, unretouched generation is a waste of time.
I am interested in having one of these AIs work from my original paintings to create syntheses, which means it would have to accept, say, ten of my images. I wonder which AI can do this?
Don't get me wrong, I love your content. But it's so strange that every time I see a comparison of video generators, Runway is by far the best, and when I try the exact same prompt and settings, I get nightmare material as a result: morphing characters, terrible backgrounds, and inconsistent lighting. Even with Gen-3 I had to re-prompt many times to get acceptable results. Actually, I cancelled my membership last night because I found it too expensive for nothing. Only 3 of 10 results are good or even acceptable.
We need an open-source or local version made for AI video generation.
Hi there - with video AI, could I morph an item such as a tomato into a logo, rather than using Blender, Max, etc. to do this?
Luma keyframe transition.
@aisamsonreal Thank you - I tried it, it's not too bad!
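On the tomato-to-logo question above: Luma's keyframes interpolate between a start and an end image, and the crudest do-it-yourself baseline is a plain pixel crossfade, sketched below with NumPy. A real morph needs a model (or 3D tools) to interpolate shape, not just color:

```python
import numpy as np

def crossfade(frame_a, frame_b, steps=24):
    """Yield `steps` frames blending linearly from frame_a to frame_b.
    A plain crossfade, not a true shape morph: video models (or tools
    like Luma's keyframes) interpolate structure, not just pixels."""
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    for i in range(steps):
        t = i / (steps - 1)
        yield ((1 - t) * a + t * b).astype(np.uint8)

# Tiny 2x2 "images": solid red fading to solid blue.
red = np.tile(np.array([255, 0, 0], np.uint8), (2, 2, 1))
blue = np.tile(np.array([0, 0, 255], np.uint8), (2, 2, 1))
frames = list(crossfade(red, blue, steps=3))
print(frames[1][0, 0])  # the midpoint pixel is an even red/blue mix
```

The midpoint frame is just a purple blur of both inputs, which is why naive interpolation looks like ghosting rather than a morph.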
I wonder when we will see photoreal AI in real time for video games. My dream is an AI filter like ReShade that upscales textures in real time in the game so you reach photoreal graphics (like an AI skin over the game geometry). I play rally games and I wish they were photoreal. I'm sure this is possible with AI, but when and how? I saw this in an AI-enhanced GTA experience and have been waiting since then... sorry for my bad English. That F1 video made with OpenAI was fantastic too.
Both are pretty much useless in their current states. No consistency, they take too long, and they don't conform to prompts. They are really cherry-picking here to show anything viable.
The best AI video for me is Luma Dream Machine. Then again, I have zero interest in replicating reality.
I was wondering why the video was at the end... it's a Premiere 😂😂😂
hehe!
Yeah, it's starting to get pretty good. But all these programmers who are wrecking plenty of professions with this (e.g. models, photographers, camera operators, etc.) had better watch out. One day it will be AI that replaces the programmers, and all that will be left is 3 or 4 AI-programming trusts.
Great :)
Gen 3 does not have image upload, only Gen 2, and it works relatively well, FYI.
Hedra is one of the best AI
God damn, these thumbnails.
This video itself is AI, isn't it?
I know you're being satirical.
Go watch another lol
It's all about control prompts.
What is a "control prompt"?
The one that is FREE
It isn't really free... because it takes forever to generate, and it will cost you to extend. And while it's not perfect, you have to redo it.
Haiper just got an update...
tl;dr: they both lose.
Luma 🎉🎯💯🎖