Midjourney V6.1 - Photorealistic Cinematic AI Photography Style Guide:
cihanunur.gumroad.com/l/midjourneyV6-style-guide
How do I find a Runway ML promo code? Can anyone help me, please?
AWESOME! THANK YOU SO MUCH!
Damn my guy you managed to do what Runway hasn't even done with their tutorials. Great work here!
Your tutorials on AI images and video production are the best I've seen so far, keep up the good work, you deserve a wider audience.
Thanks!
Tip: Please refrain from putting your text prompts at the bottom or top of the video. When pausing or rewinding to study the text prompts, YouTube covers them with its own overlays. Make the video full screen, turn on subtitles, pause it, and take note of where the overlays cover your text.
This was the most inspirational opening to a tutorial video I've ever seen. I was hooked before you even taught anything. :)
💚
You're awesome bro. Love your tutorials. Easy to follow, well explained.
This is EXACTLY the type of video tutorial I have been yearning for.....lol. Everything was explained in a detailed, easy to understand manner. You earned yourself a subscribe buddy! Keep up the good work.
We use Gen-3 Alpha all the time in our videos, and now we also finally have access to Adobe Firefly! ✨
They look wack as hell :D maybe choose another hobby
Thank you, you made me check out Runway ML after being away for a year or longer. This stuff you showcased is mind-blowing!
So important to cover the cinematic camera movements and how to apply them in prompting. Thank you again. I am still struggling with face distortions in Runway and character consistency in image to video. Will go through all your tutorials to see if I can get tips for this 🙏
Very helpful. Another great video from you, thanks!
You have gained a follower, from the way you have explained. I want to learn more, thank you. ❤
💚
Such a cool video.Awesome work!
The best of the kind and the last words of the video are inspiring.
💚
Excellently presented and beautiful, thank you for sharing
💚
What a wonderful lesson... WOW!!!
This is fantastic, best tutorial ive seen.❤
Thank you for this video, great and very, very useful info. Big fan of what you bring to the masses. Thanks!
💚
First 🎉 Please make more of these! Thank you 🙏
Stunning mate. Very great tutorial 👏👏👏
This video is to review many times. Thank you🎉
💚
And the script was written in part by AI too. It's interesting how one can pick up on AI-generated phrasing, like "with a certain quality to the image", which tells you it's AI-generated. I think those of us from the pre-AI era will always detect these nuances.
AWESOME! THANK YOU SO MUCH!
I am happy to find this channel
💚
Thanks for sharing. Awesome work!
💚
Nice one. To use some of these tools, one needs to learn some basic filmmaking techniques.
bro your tutorials are amazing! can't thank you enough for the work you do!
💚
Thank you for these tutorials... most people won't share their methods!
Marvellous...always to be kept close in the fourth monitor!!!(+1)
These videos have given me interesting ideas for my next project. Thank you.
Great work, congratulations, and thank you very much. It helped me experiment, and I liked the result.
Good job!
Great video, very interesting and clear, thank you very much. A question: in Runway Gen-3, can you modify shots taken with your own camera by importing them into the program, or ask it to use them as a model for the development of other scenes? And if so, is it useful to specify the shooting parameters so that they stay consistent with the initial footage? Thanks!
amazing! thanks !!!!
Amazing!! Thank you so much!!!!!! wouaaaaaaaaaa
Thanks! Great work!!!
Make a video on AI commercials and AI music videos, like this one. It would be of tremendous value.
More of these please
All SO Amazing!! 🎉✨👏🏼💯
💚
wow i need this thank you
Thank you ! Great work !
Very instructive video, thanks! I think this might help me improve the prompts generated by Melies AI !
Apart from the good tutorial, you presented the models in a fully glamorous/attractive way.
thanks for great tutorial!
I have SO MANY questions..
So, are these shots being achieved through a specific subscription package?
Are there establishing images to aid the animation or just text to video?
I know there's a method for prompt input but how are these images being accomplished? These are some of the best I've ever seen!
Now if someone can figure out how to do a dolly zoom shot in Runway, made famous in films like Vertigo and Jaws where the background appears to zoom away from the subject, that would be great. I've tried and tried with no luck. My other "wish list" item would be to control the time it takes for Runway to zoom in on something rather than zooming continuously to the end of the clip. If there's a way to do that, I haven't found it.
Minimax can do the vertigo effect pretty well.
I was just thinking of this!!
Great images! Did you make all of these with text-to-video in Runway, or did you generate the images first? What did you use for the images?
Thank you! These are all image to video. I generated all images using Flux Realism model on Freepik 👍🏼
You are great bro thanks
19:01 what would you call this shot where the camera flies over the walkers and focuses on the building in the distance? That was a beautiful shot!
It's impossible to do a simple STATIC shot.
thanks - great video:)
great video thx
You're the best bro. You're the man.
Thank you very much
Is there a user-friendly website that can generate an exact, consistent cinematic character from one reference image?
Take a look at this video: FLUX + LORA and Kling AI (Consistent Characters & AI Videos with Your Face)
ua-cam.com/video/mUR8CUmDbo0/v-deo.html
Masterpiece
💚
Thank you very much!
thanks brava!
Great 👍 .
I love the video; however, I am more interested in how to generate these videos, prompts and all.
You can use an editor to zoom... that means less complicated prompts for the AI.
Thank you for such a detailed tutorial. I am just curious how many failed video generations it took to create the beautiful video scenes that we see in this video?
Good question! On average I had 2-3 tries per piece of footage.
@@cyberjungle Your precise answer is even better than my initial question 👍 Thank you 🙂 Now I know what to expect.
What is a good plan to get on Runway and Midjourney for creating films 🎥
Thanks for your effort ❤
Great video! One thing is not clear to me:
what are the steps before using this prompt?
Do you always start by generating an image on Midjourney and then you animate it via Runway?
Yes, I always start with image generation first because I can set the initial frame of the video, which is very important. Also, I can keep the style and character consistent. These days I'm using FLUX more than Midjourney.
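A minimal sketch of that image-first workflow, with placeholder functions standing in for whichever image and video tools are in use (the names below are hypothetical, not a real FLUX, Midjourney, or Runway SDK):

```python
# Hypothetical two-step pipeline: lock the first frame with a still image, then animate it.

def generate_still(prompt: str) -> str:
    # Placeholder for the image tool (FLUX, Midjourney, ...): pretend it returns a file path.
    print(f"[image tool] generating still for: {prompt}")
    return "warrior_still.png"

def animate_still(image_path: str, motion_prompt: str) -> str:
    # Placeholder for the video tool (e.g. Runway image-to-video): pretend it returns a clip path.
    print(f"[video tool] animating {image_path} with: {motion_prompt}")
    return "warrior_clip.mp4"

# Step 1: the still fixes the initial frame, the style, and the character.
still = generate_still("cinematic portrait of a medieval warrior, 35mm film look, golden hour")

# Step 2: when animating, describe only the camera motion or the event.
clip = animate_still(still, "slow dolly-in on the warrior's face, shallow depth of field")
print("final clip:", clip)
```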
Thank you!
Thank you!
One question: when you create a prompt for a scene, do you specify the camera shot in the image-generation prompt? Or do you describe the scene for the image generation and then, by looking at that image, decide which camera motion to use?
The unlimited plan includes unlimited generations of Gen-1, Gen-2, Gen-3 Alpha, and Gen-3 Alpha Turbo in Explore Mode at a relaxed rate. What does that mean? How long will I have to wait for my completed video clips?
Sir, I've been in this space for 2 or 3 weeks now. After starting with motion on Leonardo, I'm now highly interested in starting to work on cinematics and filmmaking with AI. What is your current favorite of all the competitors? I lean toward Runway, but I'm pretty unsure, as every tool is quite costly. Which one is your current recommendation?
These days I mostly use Runway and Minimax, but it's difficult to predict the future. Maybe I will use a different tool in 6 months because the landscape is changing so fast.
Nice video...
In your opinion, which is better for image to video between Kling AI and Runway in terms of facial consistency?
Runway
I second that. Kling is really good for action scenes, but Runway is much better for facial consistency and all-around clarity of video.
@@cyberjungle Sorry to say, I subscribed to Runway yesterday, unlimited. My conclusion: Runway is very bad for image to video. The face comes out really different, and the scene I want never happens. I hate that everything is slow motion, which is not interesting. Very uninteresting. I tried again with the same prompt in Kling, and Kling gave much better results. Runway is very overrated. I am very disappointed and regret subscribing.
@@Holoview7 Sorry to say, I subscribed to Runway yesterday, unlimited. My conclusion: Runway is very bad for image to video. The face comes out really different, and the scene I want never happens. I hate that everything is slow motion, which is not interesting. Very uninteresting. I tried again with the same prompt in Kling, and Kling gave much better results. Runway is very overrated. I am very disappointed and regret subscribing.
@@TukoCcc Yeah, I felt the same way as you did, until I actually figured out how to use it. If you follow this guy's tutorials on Runway and actually implement them, you will get MUCH MUCH better results than anything you would have expected.
I also use Minimax and Kling. Although they are great for fast action scenes, they warp and distort more than I'd like, plus they take a long time to render. Runway is VERY fast, making your workload 10X faster.
Believe me, if you just started with Runway, you haven't even scratched the surface. Its clarity and speed still make it better than anything out there. You just have to know how to prompt and utilize camera angles and motion prompts correctly.
How do you view this tutorial now that Runway has added camera control to the interface? Is it still useful to write these camera instructions, or does using Runway with the camera controls require different prompting?
Hey, Camera Controls don't work with Gen-3 (they only work with Gen-3 Turbo, which is their inferior model), and they're pretty basic. Their official recommendation is also to write a prompt if you want complex camera motion 👍🏻
@@cyberjungle Well, from all the testing I've seen, Turbo is just as good as, and sometimes better than, Alpha.
And those basic camera control sliders provide the exact same camera movements you instructed on in this video (pan, dolly, crane, tilt, etc.). And by combining them you can almost create complex movements like an arc.
I guess that some complex movements need to be prompted (and then you need to pray that Runway will actually understand and follow the prompt), but 95% of the camera movements described in your video and used in videos can be made far more reliably using the camera controls.
My actual concern, and I need to test this further, is whether using the camera controls affects how well the characters inside the scene move. I thought you might have more experience than me on this, but I'm not sure you do. Which is fine. The camera controls are pretty new to everybody.
@ Yeah, if you think "Turbo is as good as Alpha, sometimes even better," then I guess we have a huge gap between our quality standards. I just repeated what the Runway team officially wrote in the user interface of the Camera Controls page. They say if you want a complex scene, please write a prompt. Camera Controls are for simple actions. This isn't my personal opinion. It's the official recommendation of the Runway team. Good luck 👍🏼
I have only one question: do you initially animate Midjourney images (image to video), or do you work with text prompt to video? Which of these ways is used in all those beautiful examples in your video?
Image to Video (animated images)
@@cyberjungle thank you so much!!!
Can I use my own video and have it add things like flowers blooming or grass growing in my video, without changing the subjects in my video?
Are the prompts written down anywhere?
You really have nailed many things, but I want to know how I can create a consistent character for my music video... I am struggling a lot with that...
ua-cam.com/video/mUR8CUmDbo0/v-deo.htmlsi=3HHZM2Cff5kEdnkJ
🔎Thanks for cool video and tutorials! How do you structure your prompts for subjects and camera angles? Do you describe the Subject 1st and then prompt how the camera shot should be? Example: A Medieval Character, warrior outfit, on a horse riding into battle...camera angle front then panoramic view of army... Do you include camera angle in every scene or will AI figure it out? Thanks.🧐
Hey, in Runway you don't need to describe what's in the image (if you're using Image to Video). It's enough to describe the camera motion or the event.
In text to video, the prompt structure you mention is totally fine 👍🏼
If you don't include camera motion, Runway will zoom in on the character's face most of the time. So it's helpful to describe the camera motion.
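As a loose illustration of that split, here is a small string-templating sketch (plain Python, no real Runway API calls; the example prompts and function names are made up):

```python
# Hypothetical prompt templates: plain string composition, no real API calls.

def text_to_video_prompt(subject: str, camera_motion: str) -> str:
    # Text-to-video: describe the subject/scene first, then the camera move.
    return f"{subject}. {camera_motion}."

def image_to_video_prompt(camera_motion: str) -> str:
    # Image-to-video: the uploaded still already defines the scene,
    # so the prompt only needs the camera motion or the event.
    return f"{camera_motion}."

print(text_to_video_prompt(
    "A medieval warrior in full armor, riding a horse into battle",
    "Low-angle tracking shot, then a slow crane up to reveal the army",
))
print(image_to_video_prompt("Slow dolly-in toward the character, shallow depth of field"))
```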
@@cyberjungle Thanks! I've been looking at how it's structured and others include camera angles in the prompts.
I constantly get flagged by Runway, as if I am trying to produce objectionable content, for normal prompts.
I couldn't access the Runway Gen-3 webpage on my Mac… it doesn't open. Please, how do I access it?
app.runwayml.com/login
I haven't been able to make anything like this
Why not?
@cyberjungle It just doesn't come out this real. I even try Midjourney first, then upscale and turn it into videos. Good, but not realistic.
Just wondering what will happen to real actors.
Well where can we get these stories that people are telling... I wanna check em out.
Google Gen:48 or any other AI film festival showcase. You will see plenty of great personal stories (and a lot of sci-fi stuff)
How can we prove/trust/believe that all the shots and images in this video are 100% AI? Or are they based on real video to some percentage???
Hey everyone... I am about to pay $730 for Kling AI... is it the right move? What would you recommend?
NOOOOOOOOOO....
The worst thing to do.
So many new vid gens are about to drop. Meta looks amazing. Sora might drop. Seaweed...
Imagine what vid gen will look like in 1 year. There might be a new company who have cracked it.
No
Ok, so like, you are serious? 😢 😅😂 BTW please contact me ASAP. I have a money-making experience that you won't believe. To get started, just send me a check for $1000 😅😅
@@arthritic your comment got removed. Chuck me 50% of that 1000 bucks and I'll show u hack to not get censored... 😘
Nope nope nope
One strange thing: all AI videos look like slow motion...
♥♥♥
💚
1:27 I have a lot of step-sister stories to tell.
😂
are these your generations?
Yupp
😍
Can you send link please
runwayml.com/product
What about MONETISATION?❤
The dude playing the violin is doing it with 6 fingers; the dude shooting a gun is doing it with 5, then one disappears.
🥰🥰🥰🥰🥰🥰🥰🥰
These AI-generated videos always look cool in tutorials, but when you actually try to work with them, it's a nightmare. Especially when you want to show movement: many times, the AI just can't handle it, and you end up with anomalies as a result. Often, it's nearly impossible to create two short clips that flow seamlessly back to back, and the quality is poor. It's no surprise that almost no one works with them actively right now. Of course, in the future, things will be different.
You're correct. There is certainly room for improvement. As you said, these tools are first gen. Future gens will be 100x better within the next 5 years.
How would one prompt to get the camera to circle around a subject, landscape, or object?
Try: We orbit around the {subject}, hyper dynamic movement in orbiting motion, subject in focus, cinematic style
Is it free?
Hey, I really like your content, thanks for all the advice, but I can't help but notice that you're using the term 'filmmaker'. Maybe 'movie maker' might be more accurate, since there's no actual film involved in this medium. 🤓
If this accent isn't a Turkish accent, then I don't know anything :)
That's right :)
@@cyberjungle Nice work, a very nice channel...
Armenian accent?
Kling is far better in my experience.
For a film, you need consistency of characters, styles, and context... you have none of this in Runway. On my channel I use Runway, but you can't make a film out of 10-second videos.