as a producer and artist i've been using gen3 for real world artistic purposes. My most recent single's music video was created with gen3 and for the first time ever it's actually worth it. I spent about $30 in credits. I couldn't get anything usable out of Luma, Pikalabs, or Gen2 personally. I'll be making more music videos with it if anyone wants to follow along
Well done! I think music videos give artists more control over their visual presentations, it's a great use of AI tools. No music producer can afford to hire filmmakers to create videos for every song they produce, and visualisers have become the default alternative to a traditional "music video".
We can’t rely on the prompts they've shared publicly; it's probably marketing. To get zoom and panning you have to type "dolly in/out" in the prompt itself.
Really good point. I think, in time, we will start to see more and more of the bits that have been metaphorically glued and taped together with digital gaffer tape.
Hopefully, Runway Gen-3 can deliver results that justify the subscription cost.
Absolutely. The huge drawback with all these tools isn't just the monthly cost; it's that they feel like a gambling addiction. Every generation is a roll of the dice, and the user is the one who carries all the risk of a long string of failed results. I wouldn't mind the costs so much if the results were a little more guaranteed.
You made exactly the review I wanted to watch. thanks.
Heyyyy! Thanks very much :-) glad to be of service :-)
00:40 So spot on about Sora still in the fitting room 😂
I confess, I had fun whipping up those images 😁😁
Really appreciated this comparison video, and your excitement for the personal control of video-to-video was enlightening, as I hadn't fully considered the value of that method until now. Great video. :)
Thanks very much. Yep, video to video, watch this space, if it catches on it'll give me far more control over "human" performances than text to video or image to video.
Just this morning I was wishing for just this comparison- and then bang! There you are. Thank you!
Heyyyyy! Thanks :-)
With your Tai Chi channel you should REALLY check out video to AI video tools, I think you'd have some fun with those ;-)
If you're up for the experiment, have a play with:
- Kaiber.ai
- Domoai.app
- Haiper.ai
- LensGo.ai
Thank you - this is what I was looking for...the comparison of the programs!
Thanks very much. It got my curiosity going, so I hoped there’d be others out there wondering the same things as me :-)
Remember, the videos shown for Runway Gen-3 are the best ones, cherry-picked by Runway themselves.
Great video analysis, wonderful job Haydn! Thanks.
Hey @sandro-nigris thanks very much. Really appreciate it 😀😀
Fantastic video, exactly what I was looking for!! Subscribed!
Heyyyy, thanks very much indeed 😁😁😁
Runway is still the best and smoothest for video.
Thanks. Do you have any links to anything you've created with Runway?
Great comparison Haydn! Yeah, since the release I'm exclusively using LUMA's Dream Machine until we get Gen3... All the others have to catch up or be left out of the running... Cheers good friend, as always great video!!!
Cheers, bud! Hey, have you signed up for a paid plan or are you winging it with the freebie version?
@@HaydnRushworth-Filmmaker I've been on the pro plan since the start, it's the only way I can keep up with a video every week and shorts almost every day...
@@BinaryFrameProductions
On the PRO tier, how long does Dream Machine take to render 1 clip?
@@alwilliams8196 really fast… 2 or 3 minutes
Nice video. There is also the Chinese Kling AI, which seems close to the Runway Gen-3 Alpha league, perhaps just a few steps below.
Do you know where we can find kling users' videos?
You're not the first person to correctly point out that Kling should have had a specific name-drop shoutout in this video :-)
I was a beta tester for Runway in its early days and I 100% agree with you, we need a vid2vid model. I am creating a couple of stories, and I want to be able to act out the characters' stories and use the AI to bring me into that world and truly become one of my characters. Runway was the only true vid2vid imho since the beginning; Haiper has brought us the repaint tool, and it has been quite impressive in my tests, but I'm waiting on that next level: better facial recognition and animation. I tend to lose a lot of my original performance in some of the other AI tools out there. Runway seems like the closest to this right now.
It sounds like you and I are aiming for exactly the same thing. I share the same frustrations with trying to get the same results as you.
I agree, that video to video is the real challenge that will help filmmakers!
Absolutely, that's the arena that I'm really hoping to see more of.
Great review!! Thanks. Runway is a clear winner. It would be great if they added the feature of an initial input image like Luma has.
I agree. I’ve just run the same six prompts in Kling, and it’s proving an incredibly helpful benchmark series for testing. Hopefully uploading the results by Monday. Still a couple of other tests to run.
@@HaydnRushworth-Filmmaker I recently ran a lot of generations from image input in gen-2 instead of gen-3 and got pretty good results, actually. Made the whole music video with it :)
Great comparison video 🍻
Until Gen 3 proper appears, Luma wears the crown 👑
Thanks very much :-)
In fairness, I should have ended on the same point. You’re right, at this stage Gen-3 is still one step away from being a glorified White Paper
Great thoughts and insights on the differences between the generators!
Thanks, Mike 😁
Why doesn't anyone talk about the video resolution? Is it 4K or what?
Isn't the most important thing here accuracy?
Exactly!! I’ve been editing my next video and saying exactly the same thing.
Great comparison video. Thanks for sharing! 💫
Hey, Tasha, thanks very much indeed :-)
I tried out Gen3 today, but unfortunately it didn't come out as I had imagined.
It's a shame, but I think they still need a bit of training.
In any case, Gen3 is very fast.
With Luma you need a lot of patience.
Yeah, they definitely still have their upsides and downsides. The real risk of amazing marketing shots is that our expectations are so sky-high that when the tool finally arrives for general release, we're all left deflated and disappointed.
Haiper.ai makes anime with a stunning cinematic look
Thanks for the tip, I'll go check that out. I do love Haiper.ai, but I've been looking at photorealism rather than anime. I'll check it out :-)
Really amazing video 👍. I am your new subscriber, datasciency.
Thanks very much @datasciency, glad to have you along for the journey :-)
Nice video! It's crazy just how advanced Runway Gen 3 seems to be! I hope they release it soon! Kinda infuriating they didn't give us an actual date lol. Luma def seems to be the "best" video tool out right now until Gen3 tho lol
Agree entirely. For now, an AI bird in the hand (Luma) is definitely worth two or more AI birds in the bush (Sora, Kling, Runway Gen-3)
Nice Comparison and thank god the video is not unnecessarily stretched to 15 min. subbed 👍
Thanks for the Sub and thanks for noticing. It's a deliberate effort to get to the point and get there quickly :-)
great video, love the variety
Thanks very much :-)
You were supposed to regenerate the Runway prompts
I shot this video before Gen3 was released, so I wasn’t able to do anything actually within Runway at that point, which is why the only comparison I could do was using the existing prompts in other tools. It would be a really interesting test now to see if the same prompts returned similar results.
Runway Gen3 hasn't "come out of the fitting room" either.
EXACTLY! Touche 😂😂😂
Awesome video!! Which one of these are free???
They each have different "free" versions, and there are upsides and downsides to each option. All of them worth playing about with though.
You look like Joseph Lawrence from The Handmaid's Tale
That's hilarious!! I had to look him up. I guess without realising it I'm merging into the territory of generic, middle-aged, grey haired white guy. I think I need to line myself up some auditions for old wizards! :-)
But Gen-3 isn't here yet either. We only have marketing material. Remember Devin? Where is it after the huge hype? We should be much more careful with these AI-related announcements. Let's wait until it's really here and then talk about it.
Absolutely. It's so easy to get sucked into the hype around white papers and test footage, but I agree, they're not usable tools until they're released. That said, I think it's important to keep one eye on the horizon for an early sign of what may be coming.
Great comparison... Runway will be better than Luma; they have a great model for any style and control of all parameters... Luma loves lots of camera movement but has no control over the camera and movement parameters. Hope they fix it soon.
True. Sometimes Runway's motion brush is the only way I can get a decent animation that doesn't go overboard.
I'd LOVE a tool that gives me guaranteed camera movements, but that would probably require a gaming engine, enhanced with AI.
@@HaydnRushworth-Filmmaker Runway has zooming, panning, and rotating sliders... I haven't used them much because whenever I've tried, the results were berserk. But it's supposed to work.
❤ Hare Krišna 🍀🍂🧡🎉
Gen 3 is out now!!!
When I finished editing this video I could feel it in my bones that we'd end up with access to Gen-3 sooner rather than later. Have you used it? So far it seems to have some incredible upsides, coupled with some costly stings in the tail.
I ran ‘an astronaut running through an alley in Rio de Janeiro’ through Runway 3 times and none were anywhere near as usable as the one showcased. Man, I'm so tired of so many AI image creators showcasing the best of the best as if those results are standard. Never even close when I run it.
They're cherry-picking, which is pissing me off
Agree entirely 100%. AI is still like an impressive wild horse that looks amazing but is incredibly difficult to control.
I suspect you're right.
Kling is coming to beat them all, like Midjourney did
Let's see. The samples look impressive, but I think we're all starting to get used to the impressive first results being a little less impressive when we try to use them in real life.
Yeah, agree. Where's video to video? AI doesn't understand text well, so we need video to video
Absolutely. Have you seen this video yet? I've started using Domo.ai for video to video because the results are really impressive (at least, as impressive as anything AI ever is at this point).
ua-cam.com/video/mWimvugAmNg/v-deo.html
@@HaydnRushworth-Filmmaker Yeah, I mean it's clearly still unusable. But as soon as the Nvidia Blackwell cards are in everyone's hands, with 8x the compute for AI, video will be feasible.
Until they publish GEN3 and SORA this is all just empty promises in my book. Why not let us try it if it's so good.
I agree. At least LUMA is allowing people to get hands-on.
This is good for short music videos 😊😊😊
Couldn't agree more. If you use it carefully and imaginatively, you could really bring a whole new level of dazzle to a music video!
I can't wait for the entire movie industry to be replaced by AI models that will create whatever I want, whenever I want, and with much better writing, "acting", and quality.
That's certainly the perspective that terrifies lots of actors and writers, but in my experience so far with AI tools, we're looking at a future where Hollywood, actors, writers, directors, filmmakers etc are ENHANCED by AI tools rather than REPLACED by them. That said, I think AI should allow content to be created for smaller, niche audiences, so that we all get to see more of what "we" want, rather than what Hollywood wants to present to us.
I don't
And Kling?
I should have given one of the AI King robots a "Kling" badge too because they're definitely another potential "Sora-killer"
Kaiber doesn't have a text to video generator
We may be using different versions of Kaiber to each other. All the examples from Kaiber in my video were created with text to video.
Any idea when Runway Gen 3 will be available?
Haven’t heard anything on the grapevine yet.
We might never see Sora at all, at least for now, because they have a huge IP problem and you can’t copyright the end results. I’m explaining this to you because you’re from the industry, and unlike most YouTubers who don’t understand how the film industry works, you do understand it.
I’ve been paid several times by several AI companies who were training their models on footage I have on Pond5, Getty and Shutterstock. One of them was OpenAI; however, I never gave them permission to use our footage (we just received a letter with a long explanation and nothing more). So imagine if they did this with a mega corporation... like YouTube... That’s why it is not being released, and I think lately there’s been a growing awareness of IP and copyright matters.
People don’t understand that to own copyright you must have done or contributed at least 70% of the final product, and prompting a shot, a full scene or an entire film is less than 1% of the end result. However, with img2img, things even up to 50%, which is also not enough to get your copyright papers and be able to license your film in the markets. This is why OpenAI failed when they came to LA to show us Sora. We can’t use tools that can’t produce footage acceptable to the copyright office.
We had a product that we presented last year, and it took 6 months for the copyright office in LA to study our case and finally decline us the copyright... This is serious sh*t, but because AI is handled by amateurs and YouTubers who never worked in the industry, nobody seems to care about it. For example, you can’t use songs made by AI because you can’t copyright them. ASCAP is becoming increasingly insistent about this, addressing it over and over, and they are giving free support for musicians to understand the problem. It will continue this way until either the Supreme Court or Congress passes a bill outlining in extreme detail the commercial usage of AI... Typos by iOS.
That's an incredibly important insight! You're the first and only person I've come across who has actually received royalties from an AI company for stock library footage. You're absolutely right about the copyright issue, it's not at all widely known, and I didn't realise OpenAI was received badly in Hollywood recently when they went to charm industry heavyweights with the potential of Sora. It's an incredible challenge for me trying to move my film project forward as a creative, whilst also researching and staying on top of AI news at the same time, so I haven't kept an eye on, say, Hollywood Reporter for updates about the Sora meetings. Really appreciate your perspective.
So, where does that leave video-to-video? Presumably the copyright situation is a little different if you have human actors creating the movement performance and voiceover performance?
I was expecting you to do the bare minimum and actually try these products before comparing them.
I uploaded this video days before Gen3 was released to the public and only had the sample prompts to work from. I’ve realised I’m going to need to start dating videos so people have an idea about whether the information is out of date or not. It’s one of the challenges of an industry that is changing on almost a weekly basis.
@@HaydnRushworth-Filmmaker Alright, apologies.
It's kinda like the top one is Coke and the bottom ones are Pepsi. 🤔
Brilliant analogy 😂
In Luma, you can see the generated model before you buy, but Runway doesn't have this feature. So I don't want to buy this pig in a poke.
I'm afraid I can't comment on poking pigs, but I agree entirely, Gen-3 (which I've started using since I made this video) is extremely risky and expensive to use. Presumably Runway will introduce some kind of preview at some point, but you're right, at the moment it stings to use Gen-3.
It's Twitter, not X
Tbh I rarely use it any more, but I can tell that you're struggling with the avalanche of content calling it X.
@@HaydnRushworth-Filmmaker The only thing I struggle with is understanding why any sane person would still call it X.
LUMA gives me better results every time
That seems to be a growing trend. I keep seeing really impressive results from LUMA
Did you actually try those prompts again on Runway, instead of using Runway's example of the prompt? I did, and there's no way I could get Runway to replicate its own example. Runway made beautiful rubbish. It does not prompt well. Try it.
Krea AI, Dreamina AI, Kling AI, Sora AI
Thanks, I’ll add them to the list 😁
Why do you want video to video?
I would imagine that makes lip-syncing very easy, for example. You could model angles you wanted for shots quite easily. Could have a lot of practical applications.
Video to AI Video has the potential for authentic, human movement and the ability to direct human actors. Check out this video where I compare several video to video tools using a test scene that I shot with a human actress:
ua-cam.com/video/FpRDLyXc7ZY/v-deo.html
Absolutely. I think video to video will always be more "human" and realistic. I think it has enormous potential shooting music videos where you can enhance the performers rather than try to replace them.
DO NOT listen to this old man.... I THOUGHT he was telling the truth but it's a LIE. RUNWAY GEN3 is shit, and lots of people on other videos think the same. The "image to video" is only zoom and got crappy results... You almost use up all your credits to maybe get only one useful video... but the other 20 are wasted.
I agree with you, actually. I don’t think Runway gives the best value for money at the moment.
Runway really looks cool
In fairness, it really does look cool, overall. Of course, I was right: days after I uploaded this video, it was released to the rest of us, and even though the results are mixed, on balance they're still impressive.
@@HaydnRushworth-Filmmaker I enjoyed using Midjourney and NightCafe for image generation. I will definitely try out Luma and Runway