Viewer from the future. If you think this is amazing, wait until you see what you can do in two months with a LoRA-trained or PuLID Flux character-generated image and Runway's Act One.
@@HaydnRushworth-Filmmaker I suspect that mocap action performances will be viable, a la an "Act One Plus" so to speak, and I suspect you'll be able to separate facial and body performance soon too. Likely you'll be able to assign a performance to a character, and it will all be image-based, as in traditional CGI. I mean, CGI artists know the tech they want, and it's the same tech filmmakers want. And, as with AI image generation, it will probably advance to become even more photorealistic; maybe in a year or two backgrounds won't have the errors they have now.
You are being paid for this, aren't you? Anyway, I'd much prefer this technology for storyboarding only, not for commercial use, because the image quality is really bad. It looks like a bad video game cutscene from 2005 if you ask me. Plus, the fact that this whole AI video generation field is built on stolen content is beyond horrible and will be restricted by courts soon. Please tell your AI overlords to stop making this weird software and tell them to make a better version of Photoshop or Adobe Premiere, because that stuff is actually needed in the market.
As it happens, Runway didn't pay me to share handy hints on getting better results for conversational over-the-shoulder shots, but that would definitely be something I'd be up for. In fact, I'd really love to have a conversation with somebody on their development or management team, because I have a bucketload of questions for them. It's true that image quality isn't great with AI, but that'll get better. The biggest issues I have with generative AI are the lack of real character control, along with all the continuity challenges. In terms of Photoshop improvements, I'm afraid I have no contact with Adobe at all (Adobe makes Photoshop as well as Premiere Pro), so I can't pass that part of the message along, but I'm sure they'd love your feedback.
This is one of the best tutorials I've seen for generative AI video. You are correct that there doesn't seem to be anyone with narrative storytelling experience training these things. I caught hell trying to get an OTS shot from all of these models! You seem to have cracked the code. I've said several times, I'm not impressed that AI can create crazy surreal images; I need normal images. I don't need a teddy bear and a cat wearing top hats eating noodles, I need a person walking into a room and sitting down at a table.
EXACTLY!!!!! Anybody who has read a real screenplay, or even paid attention whilst watching ANY movie or TV show, could see there's a bucketload of "mundane" that glues everything else together.
Oh, hey, and thanks for the compliment :-)
@@HaydnRushworth-Filmmaker We are leaving the phase of miracles and entering, little by little, the phase of maturation and consolidation of these tools. The fact that there is no minimum basic set of features expected in these video tools shows this need. Every tool should ship with style reference, character reference, inpainting and so on... By the way, I haven't seen anyone on YouTube listing this kind of thing either.
What magic! This tutorial is fantastic for filmmakers. Please make more tutorial videos like this; we need them to make our videos more accurate and better.
Hey! Thanks very much for the encouragement, I really appreciate the positive feedback 😁
I like AI video generation more and more. I think in 2 or 3 years each of us will be able to create incredible videos and share our creativity with others)
100%… hopefully 😬
Great Job! This is what makes your channel unique!
Thanks very much indeed, I really appreciate the feedback and encouragement 😁😁😁
Thanks for making this. I'm an aspiring filmmaker from India, and your videos cover the topic with such nuance and the right questions. Always looking forward to the content you make.
Heyyyyyy!!! Thanks very much indeed 😁😁😁
I often wrestle with the dilemma of how to connect with narrative filmmakers and screenwriters who are interested in exploring AI as an up and coming production tool, so I really appreciate the feedback. It’s great to hear that I’m managing to connect with the intended audience 😁
Over the shoulder with truly consistent characters would be a game changer.
Character consistency is absolutely critical, but it's only the tip of the iceberg. The real goal we should all be discussing is CONTINUITY, and that's a much bigger challenge.
Just stumbled across your channel and feel quite inspired by what you have shared. I made a music video last year (well, my brother shot and edited it, to be honest) to a song I'd written, but this year my budget is quite restricted, so I wanted to see if I could do it using AI, and after watching your videos I feel that it's totally possible. I signed up to a course on Udemy to learn Midjourney, and I'm hoping to learn about Runway next. The big barrier for me in making a video would be lip sync: being able to match lip movements to the song I'm recording. The more I can learn about that side of things, the better equipped I will be when I finally (at some stage) start to put something together. Keep up the great work!
Thank you!
Hey, not at all, thank you for watching :-)
BTW, perhaps keep a list or spreadsheet which details all of the things you want to be able to do (lip sync, camera position, consistent characters, etc.) and indicates whether it's currently possible and which tech to use. It would be super useful if you ended each of your videos with a current update of the list.
Thanks very much, I think you’re absolutely right, so I actually started work on that list this week. I did have something like that in mind, actually, but you helped get my focus back to it, so thank you, I really appreciate the nudge :-)
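For anyone who wants to start that tracker in a machine-readable form, here is a minimal sketch. The capabilities, statuses, and tool names below are illustrative placeholders, not verified claims about any product:

```python
import csv
import io

# A tiny starting point for the capability-tracking list discussed above.
# Every row is an example entry, to be replaced with your own findings.
capabilities = [
    {"capability": "lip sync",                "possible": "partial", "tool": "Runway / Hedra"},
    {"capability": "camera position control", "possible": "partial", "tool": "Gen-3 prompting"},
    {"capability": "consistent characters",   "possible": "no",      "tool": "-"},
]

# Serialise to CSV so it can be pasted straight into a spreadsheet.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["capability", "possible", "tool"])
writer.writeheader()
writer.writerows(capabilities)
print(buf.getvalue())
```

A CSV like this is easy to keep updated as the tools evolve, and each video could end with the latest version of the table.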
Uploading an image and a sound file at the same time would already be a great leap forward... Great video; let's make you the filmmaker at Runway :)
Cheers, mate! Wouldn't that be a terrific role to have 😃
The new Gen-3 is wild. I also used it on my photos. Just wild!
I have to admit, if I had the budget for unlimited, I’d put a lot more dedicated time into Runway.
This is very helpful information, love your channel! Thanks for sharing what you are testing and discovering on Runway gen 3 alpha
Hey! Thanks very much, I really appreciate the encouragement 😁😁😁
Maybe I need to try again. It's been a bit frustrating for me. Interested to hear your input.
Glad to help. Have you previously tried to bring one of your stories to life with AI and then got a little discouraged and given up?
@@HaydnRushworth-Filmmaker More that I'm still trying to feel the tools out to see what my guardrails are. I have a lot of old D&D adventures I'd like to bring to life. Baby steps. Can't believe I didn't think about reading the docs…
Great video! I need to try this with our AI vids!
Cosplayers? You guys have the AI world at your feet and you should absolutely embrace it!!!! I’m a big fan of the idea that the best results will always end up being when you use AI tools to enhance human performances rather than replace them.
I tried using Runway. While I liked the visual results, the cost per attempt was a bit daunting. Plus, I still haven't found a way to generate consistent characters needed to produce a movie with Runway. After dabbling with a half dozen AI video packages during the past couple of months, the two that I used to create 4 shorts are LTX Studio and Vidu. I use LTX to develop the characters and dialog, along with Vidu to provide some visual punch at a price that doesn't break the bank. While the results aren't yet perfect, at least I can tell a complete story that doesn't take weeks to craft.
Absolutely love your approach. 100% endorse taking that kind of route. Well done for getting to the point of completed projects. Further down the line there’s going to be a time and place for spending bigger money for better tools, but for now, workflow productivity is vital.
@@HaydnRushworth-Filmmaker It may not be perfect, but the technology is good enough to craft some riveting yarns. I've been in contact with the crew at LTX and they've been taking my feedback seriously. As you pointed out in your video, what the developers of AI video products need is advice from filmmakers. I'll keep you posted.
*Wow, thank you for sharing this technique! I'll definitely be giving this method a try 🤗😏*
Hey, not at all, hope it works for you too. Like everything else in AI, nothing is guaranteed to work every time, so the best we can hope for is to improve the chances of success.
Thanks for sharing your knowledge
Hey, no worries. Thanks for watching :-)
Great video! Was having trouble with conversations, will try on my next!
Fantastic. If you get results that you’re pleased with, please post a link here in the comments so I can take a look 😁
Exciting stuff, indeed! Great tips! 🎉
Thanks so much, Tasha 😁😁😁
These are really great tips, thank you! 🙂
Hey, no problem, glad to help.
...and thanks for watching :-)
Congratulations! You've cracked a couple of things and pushed the tech a little further. Still a long way to go, but that's impressive. I haven't used these tools as extensively as you and wasn't aware of the lip-sync capabilities, although only a couple of days ago I too tried writing text for a character to say... and, like you, found it doesn't really work as yet.
Thanks very much 😁
No, we’re not at that point yet, but wouldn’t that be a great leap forward? 😃
This is why I'm not starting official production of my movie series yet. I'm waiting for the tools to advance from their current state, and thankfully we've seen good progress in the past few months. It's starting to look promising.
It really is looking promising, but my worry is whether the AI development companies will settle for a customer base of advert-creators and social content creators instead of creating tools for long-form content creators. I’m planning the next video and looking at this whole subject.
@@HaydnRushworth-Filmmaker Looking forward to your coverage! We'll keep our fingers crossed.
Good work, keep it up. Followed!
Thanks very much, great to have you on board 😁😁
Great video bro, do you like Runway Gen-3? How would you rate it from 1 to 10?
Thanks very much. As it happens, I think Runway is terrific. I'd say, between Kling, Luma and Gen-3, at the moment Runway has the better overall quality, but in terms of usability and value for money, Kling is definitely King (for now). Runway is great, but incredibly expensive if you're still exploring AI. If you're creating AI content professionally, and a client is paying the bills, then Runway's unlimited plan would be ideal.
For me, for now, it's touch and go whether I keep paying for Runway. I just cancelled it, and then re-started the plan when image-to-video came out.
When real-time generation is achieved (and I don't think it will be much longer before this is possible) and every character is able to be imbued with a set of character traits, it should theoretically be possible to give more general direction to generated characters. They'll 'act like actors' taking direction in real-time.
I'm right there with you on this. I'm a great believer in the potential of video-to-video AI as the most feasible way to achieve the most authentic, human-like performances possible.
Keep writing your story folks. By the time you’re done, AI might be ready. This is exciting!
Every week it seems there are new developments that make AI tools even more fun to use.
I literally found this video AS I was struggling with getting Runway to do this exact shot. It kept rotating the camera and showing the woman's VERY off-model face. Still struggling with it but fingers crossed!
Fantastic, glad it helped. As an extra tip, for the reverse view (in case it crops up for you), I found I needed to specify the background of the location in the prompt, otherwise both characters ended up with the same skyline balcony view behind them.
Hang in there, good luck, and if you figure out any handy hints whilst you're tackling the shot, drop them here in the comments :-)
@@HaydnRushworth-Filmmaker Thanks for the vote of confidence! Amazingly, writing "-neg camera movement" (without quotes) helped stabilize the camera. Also, using Runway Gen-3's example prompts, I noticed the phrase "The camera trails behind them", and while that still had movement, it did not completely reveal the face, so it was still usable.
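For anyone scripting their experiments, the two tricks above (a negative-prompt suffix plus a camera phrase) can be captured in a tiny helper. The `-neg` suffix and the sample phrases come straight from the comment; whether Gen-3 formally parses that syntax is unverified, so treat this as a sketch of the workflow, not documented API behaviour:

```python
def build_prompt(base, camera=None, negatives=None):
    """Assemble a text-to-video prompt string.

    The trailing '-neg ...' suffix mirrors the stabilisation trick
    described above; treat it as folk knowledge, not documented syntax.
    """
    parts = [base]
    if camera:
        parts.append(camera)
    prompt = " ".join(parts)
    if negatives:
        prompt += " -neg " + ", ".join(negatives)
    return prompt

print(build_prompt(
    "Over-the-shoulder shot of a woman talking at a cafe table",
    camera="The camera trails behind them",
    negatives=["camera movement"],
))
```

Keeping prompts assembled from reusable pieces like this makes it much easier to change one variable at a time between generations.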
All this AI video stuff is very new and look at all the things we can do already.
The good stuff will be here soon, it's just a matter of time.
That's what I'm hoping too. :-)
Thank you!
No problem, thanks for watching :-)
I wish there were an option to keep the same face for multiple scenes
Absolutely. My best workaround so far has been to pick two public figures and use them in the prompt with the words "blended with" in between: "Margot Robbie blended with Kate Beckinsale, age 23, long blonde highlighted hair". I find that gives me reasonably consistent results and avoids directly copying one person's face. It's a method that works reasonably well with most tools, except for the ones that don't allow celebrity names in the prompt.
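The "blended with" workaround is essentially a prompt template, so anyone generating lots of character descriptions could capture it in a few lines. The names and wording below are just the example from the comment, not endorsements of any particular blend:

```python
def blended_character(face_a, face_b, age, extras=""):
    """Build a character description using the 'blended with' trick:
    two public figures merged into one face for rough consistency
    without directly copying either person."""
    desc = f"{face_a} blended with {face_b}, age {age}"
    if extras:
        desc += f", {extras}"
    return desc

print(blended_character("Margot Robbie", "Kate Beckinsale", 23,
                        "long blonde highlighted hair"))
```

Reusing the exact same description string across every shot is what gives the (rough) consistency; any drift in wording tends to drift the face too.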
Could you use Hedra to get her lips saying something specific?
I'd forgotten about Hedra tbh, and since you mentioned it I realised I still had the Hedra website open in my "Next video research" tab group. Thanks for the reminder, it's a great looking tool.
@@HaydnRushworth-Filmmaker Yeah, I haven't used it, but it does look cool. I uploaded an image that was too cartoony and it couldn't detect a face. (It was a realistic 3D render of a simple jelly with eyes and a mouth.) I then contacted them via Discord, and they replied saying that supporting different art styles is something they're looking at in future, but for now it looks great for real faces and realistic imagery. BTW, really appreciating your prompting tips and experiments to see what gets the best results.
It is difficult to learn Gen-3 if for $35 you get only 22 videos :)
I agree entirely. On one of their training pages they say don’t be afraid to experiment… easy for them to say when they make money from our experiments 😁😆
If you take the $100 subscription you get unlimited generations. ;)
They need to do something like Haiper, where 2- and 4-second videos are always free with a watermark.
How long is each video, and is that the monthly cost? I estimate about 3-5 grand for a 2-hour+ film.
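That "3-5 grand for a feature" figure is easy to sanity-check with back-of-envelope arithmetic. Every number below is an assumption for illustration (clip length, retry rate, per-generation cost), not a quoted price from any provider:

```python
# Rough cost model for a feature-length AI-generated film.
film_minutes = 120              # target running time
seconds_per_clip = 10           # assumed max clip length per generation
takes_per_usable_clip = 5       # assumed retries before a keeper
cost_per_generation_usd = 0.50  # assumed blended cost per attempt

clips_needed = film_minutes * 60 // seconds_per_clip
total_generations = clips_needed * takes_per_usable_clip
total_cost = total_generations * cost_per_generation_usd
print(f"{clips_needed} clips, {total_generations} generations, ${total_cost:,.0f}")
```

With these assumptions it lands under $2k; double the retry rate or the per-clip cost and you are quickly in the $3-5k range the comment estimates. The retry rate is the number to watch, since it swings wildly between tools and shot types.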
ComfyUI. That's the solution.
I think at this point, you have the ultimate pre-viz/ storyboard tool. Why submit a paper script to a producer when you can send a full rough draft of the movie? All narrative aspects of the movie can be worked out for time and story coherence before a dime is ever dropped on production. No need for expensive reshoots after it is screened for executives or limited test audiences.
I think you're absolutely right that the strongest argument for generative AI in its current state is as a selling tool or a pre-viz tool. There's definitely a real need for seasoned professionals to try to use these tools and get feedback to the companies who build them, otherwise they'll remain novelty services for content creators until the novelty wears off, and then progress will grind to a halt.
How do you perceive the cost of using different video generation tools?
It's an ongoing challenge for all of us. Trying to find the sweet spot between investing in your own skills development versus pouring money down the drain. I think we're all having to get used to the idea that we need to pay monthly fees of some kind or another to AI service providers, but the real trick is figuring out which ones are most valuable for you.
Great video. Got a sub from me.
Hey, thanks very much :-)
Yes, that indeed would be brilliant. I've been using Runway lip-sync for a couple of months or more. I'm lucky if it works at all if a scene isn't fairly straight on or there's much at all going on in the video. It works better going from image to lip-sync, but then you have someone's mouth moving and not much else. I can't say I've ever got it to work to my satisfaction; it's always been clunky and hard to integrate. My [shadow-banned] "Pulp Fiction" used it rather extensively. There's some hope of Hedra giving more realistic results, but you got one of the best results I've seen. Ah, and until now, if you had two people in the same shot, they'd both mouth all the same words.

Hot tip: Kling has the best quality animations for the buck, by far, though I believe their half-off sale ends in a number of hours. My Kling animations (if I use the 35-credit "professional" option) stand out against Runway and Luma. Haiper is no longer able to compete. New kid on the block VIDU does the fastest motion I've seen, but at a miserable 360p.
Thanks for the heads-up re VIDU, good to know.
Agree entirely about the wooden performances from most AI image-to-lip-sync videos, and yes, I've had trouble with two characters mouthing the same lines :-)
Still experimenting, but I'm hoping this "traditional" over-the-shoulder shot will make it easier to get better results because it focuses on just one person by default.
Your intuition is telling you: "No tool fits the script."
I think you're right; we're a looooong way from any single tool being capable of creating a finished feature film.
@@HaydnRushworth-Filmmaker Do you believe in God?
Dude, the AI tool you are asking for already exists... HEDRA.
I looked at Hedra and honestly don’t remember where I left it. I should go back and take a closer look 😁
It's not perfect yet, but give it a mere few years, if that, and we will be able to create fully controllable scenes with better lip sync and direct camera control, and it will all be amazing!
I think you’re right, and I’m genuinely excited about the journey 😁
I can already hear the film crews complaining. lol
😆😆… time will reveal that they have little to worry about… at least, not yet.
It's amazing but so expensive😢
100% agree. The good news is that there are enough free tools around for us to develop our skills whilst figuring out ways to monetise.
Hey bud, I love reading comments from the community and sharing some of my own, but your latest video has comments turned off.
No way!?!? Thanks for the heads up, I didn’t realise.
Fixed! Thanks very much 😅
@@HaydnRushworth-Filmmaker I figured that, you seem to be one who values community and vice versa !
Here is how I feel about Runway Gen-3...we are literally paying them to train their machines. We shouldn't have to pay for some of the horrible results they give back. It's robbery.
That’s a really valid point. I hadn’t thought about that. Tbh one thing I really like about Midjourney is the opportunity to get a free hour of processing time whenever I spend 10-15 mins rating images. It would be great if Runway offered the same feature.
@@HaydnRushworth-Filmmaker That would be nice. Or a system in place when you reject a video, you get your credits back.
Sir, can you suggest some people who do a lot of experimenting with AI video tools and post about it on the internet? I want to watch their videos. They're the best source of knowledge for me, as they've already used these tools so much that they know a lot about what they can do. Can you suggest some people to follow?
If you can't figure it out yourself you're lost. It's not that hard to figure out.
@@Bartetmedia How ?
I think you've already landed in a great place watching this video :-)
In reality, as soon as you begin to watch any video on generative AI, YouTube should start to present more for you to watch. I'd just follow your curiosity, or search for specific subjects.
For a good start, though, check out Tao Prompts:
www.youtube.com/@taoprompts
@@HaydnRushworth-Filmmaker Thanks ( I know him )
3:50, she has 13 fingers 😅😅😅
😆😆😆, yeah, I was so thrilled at her facial expressions that I just let that part slide 😁
What are you waiting for?? Make your dream movie, man!?!! Keep going!
Or are you waiting for Sora??? Or for 20-minute movie generation?😊
I'm waiting to see the movie that's in your head... peace from the other side of the world 🎉
Hey! Thanks so much for the encouragement and support. I think you’re right, I need to really crack on with this :-)
Viewer from the future. If you think this is amazing wait until you see what you can do in two months with a LORA trained or Pulid Flux character generated image and Runway's Act One.
Touche! Agree entirely. Imagine if we could see what the state of AI will be like in six months from now!
@@HaydnRushworth-Filmmaker I suspect that mocap action performances will be viable, "Act One Plus" so to speak, and I suspect you will soon be able to split facial and body performance too. Likely you will be able to assign a performance to a character, and it will all be image-based, like in traditional CGI. I mean, CGI artists know the tech they want, and it's the same tech filmmakers want. And as with AI image generation, it will probably advance to be even more photorealistic, and maybe in a year or two backgrounds will not have the errors they have now.
You are being paid for this, aren't you? Anyway, I'd much prefer this technology for storyboarding only, not for commercial use, because the image quality is really bad. It looks like a bad video game cutscene from 2005 if you ask me. Plus, the fact that this whole AI video generation is built on stolen content is beyond horrible and will be restricted by courts soon. Please tell your AI overlords to stop making this weird software and tell them to make a better version of Photoshop or Adobe Premiere, because that stuff is actually needed in the market.
As it happens, Runway didn’t pay me to share handy hints on getting better results for conversational over the shoulder shots, but that would definitely be something I’d be up for. In fact, I’d really love to have a conversation with somebody in their development or management team because I have a bucketload of questions for them.
It’s true that image quality isn’t great with AI, but that’ll get better. The biggest issues I have with generative AI are the lack of real character control, along with all the continuity challenges.
In terms of Photoshop improvements, I’m afraid I have no contact with Adobe at all (Adobe makes Photoshop as well as Premiere Pro), so I can’t pass that part of the message along, but I’m sure they’d love your feedback.
This is completely rubbish.
Why, thank you, I appreciate the feedback :-)
I'll work on making the next video less rubbish.
Very cool.
Thanks very much :-)
In a few years we will have the tools to make a feature film from your bedroom, which will be insane.
Not good for the film industry though lol.
AI is definitely producing some dazzling results already 😁