Thank you very much! Please share more Runway tips and tricks.
Sure thing, I've got plenty more videos planned 👍
You are as reliable as the sun, thank you!
😎🌞
Best teacher ever ♡
Thank you! That means a lot to me
This is terrific, thank you Tao! I did notice that Runway responded differently to the plural forms too. So interesting. Thank you again!!
It's always interesting how different the behavior is even with subtle word changes 👍
@@taoprompts I agree! I posted a couple of new vids, if you have time. :):) No pressure!
Thank you so much. Please, keep making videos about Runway prompts. Cheers from Brazil
Of course! Runway is a good tool. And hello to Brazil 👋
Thank you for your efforts !!
I appreciate that, thank you 🙏
I learned so much from this video. Thank you !!!
That's great! Gen3 relies on the prompts a lot to get good videos.
This is so helpful. Sincerely thank you Tao
Great to know this was helpful, Runway works really well once you know how to prompt in it
Useful information! Thank you very much!
Gen3 can do a lot with the right prompts and images 👍
@@taoprompts The only bad thing is that when you try something several times and it turns out poorly, credits from the paid subscription are still deducted. Sometimes you have to make several attempts to get a good result, and if it turns out badly, the credits go to waste. I even bought the cheapest subscription for testing, and the results were poor, while 1000 credits run out quickly.
The one issue with these videos is that they are short! Keep up the amazing work! You're such an inspiration! I love to learn from you! ♥
Thanks, I really appreciate that 👍. A lot of these platforms have talked about potentially creating up to 3-5 minute videos but based on what I've seen, it's really hard to keep the video consistent for that long.
@@taoprompts thanks for your reply ✨ although I was talking about the duration of your videos 😆
Sir, you always give the best content, you are the best
Thanks man!
Great tips as always! Gen-3 definitely has issues with changing the contrast and color saturation of your image, which is a total pain when you've edited the image in Photoshop only to have Gen-3 destroy it. The tips you covered here make a big difference. Another tip is to always keep in mind that your image is either the first or last frame of the clip, so make sure you're asking Gen-3 to do something that actually makes sense with the image, otherwise you're likely to get the dreaded "zoom in to nowhere" result.
The color change was one of the most frustrating parts about Gen3. I'm surprised they haven't addressed how to deal with that on their socials, it's probably discouraged a lot of people from using their tool.
And great tip about keeping a descriptive prompt!
My guy is a genius
Awesome video! Just started using Runway this past Sunday. Some shots come out great, but they don't adhere to what you ask for even with the simplest prompts (like asking for a character to "ascend," which is even a keyword in their prompting guide). Most of the other times you get nothing like what you are looking for. Going to try out some of your suggestions! But hands down, Kling AI and Hailuo have given me the best renders with basic prompting. I think I will end up throwing Luma in the trash, lol 😂 It's been the worst out of the four I am using so far
Hailuo AI is amazing at following prompt directions! Maybe the most responsive right now.
Luma was great a couple months ago, but they just haven't been able to keep up 😭
Good stuff! It's promising to see where it's headed despite the hallucinations. Being trained on tons of Hollywood films would speed this process up, but it would also kill an entire industry much faster lol. Still impressive either way. Even a year from now I expect it to understand the language of film much better.
If they keep up the pace of improvement, in a year from now the quality will be pretty insane. Even now a lot of these clips look super realistic
Thanks for your good videos. They are helpful
I appreciate that, it's always good to know these videos are helping out 👍
Long way to go with AI video until it starts to look really believable in my opinion, but the fast improvements made me believe it's going to happen much sooner than I think.
They keep rolling out updates, I'm looking forward to what comes out in December/January
Thanks, wonderful❤
Super helpful! Thanks a lot! I often face issues with portraits where the camera zooms in and the face stays so "static/frozen". I've tried adding blinking to get rid of the "stare look" but haven't been really successful. Maybe I need to move it to the front of the prompt, you gave me some new ideas! Thanks man!
Changing the word does help a lot of the time, things closer to the front seem to get "weighted" more heavily.
Thanks for another great tutorial!
Thank you! Your music videos are awesome btw
@@taoprompts thanks bro!
Tao you are killing it! You're where I want to be right now. So kudos to you!
Hey Evan, I've followed your content on AI comics. Great work man!
Genius. Thank you.
I appreciate that Luna!
excellent tips bro...keep going
Thanks man 👊, I appreciate that
Wonderful video, thanks
Great as always. Thanks
Glad these tips helped Sean 👍
I really like your videos, thank you! :)
That's great to hear 🙏, I hope some of these tips helped you out
Amazing!
👍🏾👍🏾You are Awesome, my Brother!! Many Thanks for all that you do, create and share with us, my Friend. Best Wishes to you and your Family. New Subscriber here. Cheers
Thanks for the sub man 👍. I got plenty more guides like this on the way!
@@taoprompts Very welcome, Bro!! Awesome, looking forward to more. 🙏🏾
Great video!
The biggest tip when using Runway Gen-3 is to subscribe to unlimited. Unfortunately, many prompts don't work as we want and the videos can glitch. However, you can also get great, natural results. But without unlimited you can go bankrupt before you create something epic
For sure, without the unlimited plan I keep finding myself having to buy credits over and over again.
Tao the man
🙏I appreciate that bro
In AI’s light, Tao shows the way,
Runway Gen-3, prompts hold sway.
A woman floats, colors must align,
Desaturated hues keep the scene fine.
Camera moves, Tao's wisdom bright,
"Static shot," controls the flight.
Eyes zoom in, face stays clear,
Tao refines, prompts sincere.
Reference frames, end the scene,
Muted colors, a subtle sheen.
Tao's guidance, prompts precise,
Crafting visuals, pure and nice.
Through Tao’s tips, our visions play,
In cinematic wonders, prompts hold sway.
ChatGPT works wonders man 🔥
@@taoprompts aww yee
thanks
Can you do a Midjourney Alpha consistent character video? A lot of us newbies never used Discord.
I plan to do a workshop style video on that this week or the next 👍
There needs to be some tech that understands the uploaded image and then suggests a few prompts to try. That would be so awesome.
That would be great, maybe a function that roughly describes the image from the AI's perspective
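If anyone wants to experiment with that idea themselves, here's a rough sketch of how it could work today using an off-the-shelf image captioning model (BLIP through the Hugging Face transformers library). The prompt template at the end is just my assumption of a reasonable starter, not anything built into Runway:

```python
# Hypothetical sketch: caption a reference image and turn it into a starter prompt.
# Assumes the transformers, torch, and Pillow packages are installed; the prompt
# template is an illustration only, not a Runway feature.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def suggest_prompt(image_path: str) -> str:
    """Describe the image, then wrap the caption in a simple Gen-3 style prompt."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=30)
    caption = processor.decode(output_ids[0], skip_special_tokens=True)
    # Keep the suggestion short: rough description + muted colors + subtle motion.
    return f"{caption}, muted colors, subtle motion, static camera"

print(suggest_prompt("reference_frame.png"))
```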
Can I ask you, do Luma and/or Kling allow for lip syncing? I looked and didn't see anything.
I don't think that's possible with Luma or Kling atm, I think they can only generate videos right now
@@taoprompts You're right, sadly, though. Thank you!
I want to do an animation cartoon type video...what program do you recommend for that...like is Kling better than Luma for cartoon/Pixar type? Or Pika? Thank you!
If you mean 3D Pixar/game engine style, I found Luma works really well. Kling also works pretty well for game engine style vids. For 2D cartoon animations I haven't seen any platforms that can do good animations.
@@taoprompts okay thank you!!! Yes, Bread was done with Pika, I agree it was so dark and blurry. 😳😳
@@taoprompts Thank you very much Tao!
Hello! I just tried to sign up but when they tried to do the security verification on me and asked me to move the puzzle piece to complete the picture, it did not let me grab the piece. Would you have any guess?
You have to drag the puzzle handle slider beneath the actual image.
Thanks, this is awesome.
Glad to know this was useful 👍
Can Runway Gen-3 output 4K videos? Or what's the max resolution output?
Right now it's 1280 x 768
I wonder how I could use AI to make a simple bookstagram other than generating pictures to prepare my own mockups. A mockup with an iPad is easy, but it's harder to get the lighting right for a mockup with real books. Do you think there is an AI that could do something better than this with an uploaded cover?
I think Midjourney should be able to generate book covers pretty easily, you could even start with an illustration you like and then use the editor feature to zoom out into a book cover.
It's expensive for trial and error. A simple prompt like "woman smiling" in image to video resulted in the woman smiling and then deforming into a completely different person, and I watched 50 credits just vanish. Another YouTuber recommended "camera on tripod" so the video doesn't move and deform into something else. This time, I got the woman to smile and then her head turned into......a camera on a tripod, there go another 50 credits.
Yep that's AI
Damn! That sucks that it has to be so expensive!
It's useless unless you get an unlimited membership
It’s way more expensive to shoot in real life.
Those sound like extreme cases, I haven't seen any deformations where someone's head turns into a camera. Typically the smaller the face/head area is in the video, the more deformation will occur. If the face/head takes up a large portion of the image, gen3 has worked well for me.
Can't I increase the length of the moving image created in Runway using an editing program?? Currently, the length doesn't increase, so I'm using hundreds of them... The video ends up feeling choppy.
The longest you can do in Runway atm is a 20s video using an image reference with the first and last frame feature
Tao, I'm just getting started with Kling and made a couple of car videos on a different channel, using MJ image to video in Kling. If the car is on a straight road it gens OK, but on a bend in the road it tends to warp out of shape. Any suggestions?
You could try using the negative prompt and put in words like "blur, distort, deform"
But tbh I haven't seen anyone generate cars on bends, they typically are driving straight on the road, that may be a difficult case for the AI to handle
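If it helps to see it laid out, here's how I'd organize that before pasting it into Kling's prompt and negative prompt boxes. This is just an illustration; the field names below are my own labels, not an actual Kling API:

```python
# Illustrative only: positive vs. negative prompt for an image-to-video car shot.
# The dictionary keys are hypothetical labels, not real Kling API fields.
car_shot = {
    "prompt": (
        "a sports car drives smoothly around a bend in the road, "
        "camera tracking shot, muted colors"
    ),
    "negative_prompt": "blur, distort, deform, warped body panels, morphing",
}

for field, text in car_shot.items():
    print(f"{field}: {text}")
```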
@@taoprompts That's why you haven't seen any gens of cars on bends, it just doesn't work sadly, hopefully AI 2.0 can do it one day. Thanks, will try those negative prompts.
Okay yes, what's with Runway aging the subjects 😂 but it does help sometimes when I add "young woman" instead of just "woman"
I find that sometimes you need to roughly describe the reference image itself to keep the AI video consistent
Can you please do more face prompt videos?
For sure, I have a lot more guides like this planned soon
To zoom out, I don't know why, but I use « full shot », it works every time
Ohhh that's a good one thank you
Thanks for the tip, I'll definitely use that
💓💓💓💓💓💓💓💓
Sir, I also use Runway ML Gen-3 Alpha Turbo to turn images into animation, and for the animation I take prompts from ChatGPT, but the results are bad. Sir, please give some advice.
I wouldn't try prompts from ChatGPT unless you have a plugin specifically designed for Gen3. Start simple with prompts like "muted colors, subtle motion, [camera movement], [subject movement]". Small, simple movements work best.
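For anyone scripting out a shot list, here's a minimal sketch of that prompt structure as a reusable template. The helper function and the example values are just illustrations, nothing from Runway itself:

```python
# Minimal sketch of the prompt structure above; the helper is hypothetical.
# The two arguments fill the [camera movement] and [subject movement] slots.
def build_gen3_prompt(camera_movement: str, subject_movement: str) -> str:
    parts = ["muted colors", "subtle motion", camera_movement, subject_movement]
    return ", ".join(part for part in parts if part)

# Small, simple movements tend to work best.
print(build_gen3_prompt("static shot", "the woman slowly smiles and blinks"))
# -> muted colors, subtle motion, static shot, the woman slowly smiles and blinks
```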
@@taoprompts Sir, I am making a cinematic short movie on YouTube. For that I want realistic scenes with hand movement, body movement, motion, and facial expressions. For example, I create images with Leonardo AI and use Runway ML to turn the images into animation. How do I get the best results from Runway ML?
Are you Jason Mendoza? 🤔😁😁
I might be 👊
So I HAVE to waste credits in order to improve the outcome? As far as I know, you have about 625 in the basic plan, and they run out fast. The pricing strategy seems kind of ridiculous. Also, as a free user you have a few starting credits and that's it. Luma Dream Machine's free version takes forever, but you can generate 30 clips a month for free. I know it's computationally intensive, but I'm still a bit baffled by the pricing plans of AI generative programs like Runway. I almost never encounter anyone talking about the economics either.
It is expensive to use; AI video is costly to generate from a computational perspective. Runway has taken some steps to reduce the cost with Gen3 Alpha Turbo, but it is still one of the most expensive platforms.
I typed "camera pans to the left and revealing.... " And it just kept static
It won't work for every image reference, I find it works most consistently for landscape style videos
I bought it, where do I get my login info?
Use the same account you created when you bought the subscription
Where do I download it ?
runwayml.com/
Due to cost, do you think Pika is better, since it has a decent price for unlimited generations in a chill mode, and then you can use the credits to refine?
I haven't had a lot of luck with Pika, I think it's one of the weaker AI video platforms atm
The pricing for it is terrible for the credits you get. And it doesn't help that it wastes credits by not listening to simple prompts.
It is fairly expensive unless you get the unlimited plans. Try using Gen3 Alpha Turbo mode, which consumes fewer credits than Gen3 Alpha.
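To put the savings in perspective, here's a quick back-of-the-envelope calculation. The per-second rates are my assumptions based on Runway's pricing around the time of this video (roughly 10 credits per second for Gen3 Alpha and 5 for Turbo), so check the current pricing page before relying on them:

```python
# Rough credit budgeting sketch; per-second rates are assumptions, verify on
# Runway's pricing page. Plan size matches the ~625 credits mentioned above.
CREDITS_PER_SECOND = {"gen3_alpha": 10, "gen3_alpha_turbo": 5}
PLAN_CREDITS = 625          # monthly credits on the basic paid plan
CLIP_LENGTH_SECONDS = 5     # a typical short generation

for mode, rate in CREDITS_PER_SECOND.items():
    cost_per_clip = rate * CLIP_LENGTH_SECONDS
    clips_per_month = PLAN_CREDITS // cost_per_clip
    print(f"{mode}: {cost_per_clip} credits per {CLIP_LENGTH_SECONDS}s clip, "
          f"about {clips_per_month} clips on {PLAN_CREDITS} credits")
# gen3_alpha: 50 credits per clip, about 12 clips
# gen3_alpha_turbo: 25 credits per clip, about 25 clips
```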