And so it begins
what?
’
@@lypsyrobotti4326 it
@@lypsyrobotti4326 the beginning of the beginning of the beginninging
@@jameelbarnes3458 from beginning
is it really possible now to create professional video using AI?
Thanks man. Had no idea that turning off interpolate and upscale made such a big difference. Currently re-rendering all my shots with this format
Your explanation of adjusting settings such as interpolate and upscale was incredibly helpful in helping me grasp how to achieve optimal results.
Thanks for the kind words! I am so glad it was helpful (:
Right?! So well presented!
Your attention to detail for explanations is much appreciated for beginners like me.
Glad it was helpful!
DUDE😮 Immediate subscribe. Such valuable content, so clearly presented-and loving your non-lazy use of adjectives ! 😃 Thanks! 🙏🏼
Thank you for making the point about "not replacing but elevating." Embrace the tech folks, embrace the tech! ❤
Exactly!
It's incredible! Imagine how games will have graphics like this in a few years, generating everything in real time.
Is... is this a joke?
@@The.Sky.Driver Why would it be? Can't you abstract from its current state and extrapolate its development? Take Wolfenstein 3D from 1992 as a base and extrapolate the advancement of technology to Assassin's Creed, et voilà: it was not a joke, only unimaginative people could have thought so in 1992.
@@The.Sky.Driver It was just introduced at Nvidia's event: ua-cam.com/video/3qSQjRaseos/v-deo.html
At the BYD section. It still doesn't run on ordinary GPUs; it still requires hundreds of teraflops and a huge amount of RAM, but they did it with a car commercial video in real time. Give it time, and it will run on your desktop PC.
Honestly, I prefer the current look of video games (like The Last of Us). It could become very disturbing if it gets this real.
I have the same mindset as merion, and so do many researchers (there are papers talking about these possibilities). Glory to machines, glory to the Universe. @@The.Sky.Driver
"This is not about replacing you, it's about elevating you..."
Unless you are an actor, set designer, set manufacturer, prop director, motorcycle manufacturer, costume creator, lighting engineer, gaffer, stage manager, makeup artist, boom operator, etc. 😂
But honestly, great video and great explanation of the Topaz settings
This genuinely terrifies me
I doubt it’ll replace those people. It isn’t good enough yet by a long shot. It opens up possibilities for hobbyists who can’t afford to pay for a crew for simple projects though.
@@farmersuiticles It's currently cutting into concept art pretty hard. Quite a few of the folks I know who made a living doing concept for pre-pro and pitches have seen a decent amount of that dry up, with clients saying they've swapped to MJ or similar. Other guys are saying they're being handed MJ images and asked to "tweak" for a fraction of the time they would normally work/be paid for. I think costume design isn't too far behind that; set design will take a minute longer just because it's so specific to the stage/action that it's hard to find enough data to support design efforts for crews/directors that make sense. What I think is more likely is that audiences start seeing film as a boutique art form (films like Barbie and Oppenheimer over your average rom-com or comedy fare) and most folks do more of what's occurring now: seeking entertainment from games and creator content online. Source: I'm a set designer and art director in film, TV, and theme parks.
His last line is "...homemaker."
But my condolences to the jobs you mentioned.
It's crazy to see how far we have come. Looking forward to see the next steps!
Completely agree!
I just made one too, if you can take a look, thanks
Not "we", just corporations that can make videos without any people.
I've been playing with Runway and blasted through about 500 seconds' worth of credits. I wasn't using photorealistic images of people, but about 50-60% of the rendered videos had really poor results: a lot of collapsing into nothing, or stretching. I've also had some awesome results, though. Point is, it's a bit of a gamble, so if you don't have money to blow on attempts and brainstorming, pick your images carefully.
Currently writing a paper on AI, your video was super helpful in breaking down things, and I like your take as well. Thank you.
Can't wait to read it!
"It's not about replacing you, it's about elevating you as a creative". Well said
im not coping im not coping AHHHHHHHHHHHH
what a time to be alive
I remember struggling with the first non-linear editing software, way, way back when I was doing my Masters in Film. In the end we had to give up because it crashed so much! After having a long, long break from ambitious filmmaking, I love that you make this look so easy. I'm excited to start playing with AI!!
Thanks for watching! We're excited to see what you create!
This video deserves more attention, all the best insh'Allah (very modest by the way 👊👊👊)
Thank you! Hope it was helpful. :)
I am taking his course and it's ABSOLUTELY FANTASTIC!!! 😍😍🥰
Thank you so much! So glad you dig it. ❤️
You guys are the best! Thank you for this great content Caleb!
No problem. I'm glad you dig it!
Some of the outputs I have seen have a dreamlike quality to them. I feel like there could be a style choice to use the footage as is for some situations.
Just wanted to thank you all for these tips. It made creating my first "Movie trailer" a smoother experience 💙
Great to hear!
stumbled across this and must say THANK YOU!...this will unbottle a couple of ideas...#sub
10:13 For a quick storyboard/prototype to get the idea across, it's pretty good.
Woo, this really blew my mind when I saw it.
Thanks a lot, very helpful with the settings!
Glad it helped!
A great cinematic piece, love to see a fellow AI video creator work magic.
Thank you so much :)
Thanks for the good side information!
You bet!
Next class I intend to be in, for this is the future of things to come
Sounds awesome. Can’t wait to have you in the session. Cheers!
Max quality is the goal, and kudos if you have figured it out, because hours of work using it have looked like garbage, despite the super high quality of the starting images.
This is incredible
Glad you like it! :)
Have you tried processing the video with Runway's super-slow-motion plugin and then speed it up in your video editor? I found that it helps with the choppiness as well (though not with the upscaling) and might be an initial solution for those who just want to play around a bit and get slightly better results than the raw image-to-video output - without paying an extra 300 for Topaz. But yeah, for professional purposes, your process certainly is the way to go.
That’s a great tip. I’d also be concerned with frame rate jitters using that method but would love to see results to compare.
Are you referring to the changing shape of the object in Runway ML? My issue is that when I tried to create a video showing a huge disposable cup flying, the texture of the cup changes as it flies... really annoying. By the way, I tried it in the Runway free option countless times and it did not come out right. I even tried Pika, and the quality of Pika is just not right.
@@Shhvisualcontent I think you are referring to artefacts of sorts. Sometimes a random color or shape takes on a life of its own and is treated as an object when it is not. If that is what you are referring to, the above is not related to this issue. If you are starting with an image rather than a prompt, it may help to crop the image a bit. Sometimes that is enough for the algorithm to treat the content differently, but it does not always work.
@@andreasfalke2 thanks
I like your attitude. Subscribed.
Great pace, really good teacher.
Thank you very much for this class, hugs from Brazil.
I'll check back in 4-5 years.
Wow! This is insane. Really interesting stuff, great walkthrough 👏
Glad you enjoyed!(:
I just made one too, if you can take a look, thanks
Thank you for sharing!
No problem!
Thank you, this looks simpler than the workflow I was attempting.
Thanks so much! It's so hard fighting with the black box that Runway is... I can't comprehend how I didn't start with all the basic settings turned off.
Thanks for watching!
That was awesome - thanks.
Glad you enjoyed it!
Great job. Very professional presentation
Great video. I am wondering how to make a video for Instagram Reels. I can't find the settings to do this...
You can change the ratio in Midjourney by using the "--ar x:x" like "--ar 16:9" or "--ar 9:16" . That will help get everything started (and don't forget to adjust settings in your video editor too) :)
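As an illustration, a full prompt for a vertical Reels frame might look like the line below (the scene description is just a placeholder; only the --ar parameter is actual Midjourney syntax):
/imagine prompt: a lone astronaut walking through a neon-lit market, cinematic lighting --ar 9:16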
I tried it and it wasn't so good, but I'll try it again next year 🙂I imagine it will get a lot better with time.
Yeah there are already improvements even since this video came out.
I would use some film grain and a little color grading... nice
It’s freaking awesome! When movie!
Thanks! Do you know if there is a way to take, for example, a 30-second video and then put an AI "skin" on top of it? I'm doing a sci-fi movie and I'm trying to make the outside world look futuristic, but I want to use real city footage and only put a "skin" on it. Thank you very much
Yep! You have many options, from using Kaiber to using your original image as a reference to generate from in Midjourney.
@@curiousrefuge Do you know of any video about it? And is it only possible with still images, or also video?
One day we will look back and say "wow, remember when you could only generate 4 seconds of AI-generated frames?!" in the same way we look back at a time when a mainstream external storage device held 2.8 MB of data!
Totally, in 3 months we'll look back and laugh.
Awesome video, very informative
Glad it was helpful!
Can I get this model to run locally? I want a pretrained model, can I get it? Please reply if someone knows.
Not this exact model (as it's runway), but you can look up ComfyUI to get started. You will need a powerful machine.
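A rough sketch of getting ComfyUI running locally, assuming a recent Python install and a CUDA-capable GPU (check the project's README for the current, authoritative steps):
git clone https://github.com/comfyanonymous/ComfyUI   # fetch the source
cd ComfyUI
pip install -r requirements.txt                       # install Python dependencies
python main.py                                        # start the local web UI
You then download a model checkpoint separately and load it in the node graph; video generation needs a video-capable model and plenty of VRAM.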
Good use of the tool. How did you keep it from warping the image, and do you have to pay for a sub to get more than 4 seconds?
that's brilliant, man
Thank you for your video. I am just getting into this. Why do you suggest not adding a description to your image, i.e. telling the AI in Runway what you want it to do? How do I give it somewhat specific commands/suggestions for an existing image?
We go through all of this in our course. However, descriptions are vital because the images are based on your description (prompt). This is the mechanism behind nearly all generative AI tools :)
I've been using Runway for the past month or two now. Loving what I am capable of doing with it, but do you happen to have any info on Image + Description?
Some shots I get come out super good and others just poop. Which sucks, because as mentioned these renders are not free. I probably spend roughly $20-$30 per video I've done (which honestly is super cheap when you think about it), but I've been having some issues on the project I'm currently working on, and with the Runway short film contest coming up this weekend, I figured I'd dive in and see whether other users out there could potentially provide some helpful insight. If you have any food for thought, please feel free to respond.
That all said, great video breakdown, and I hope more is to come!
Cheers
Great stuff to be honest 😊
Thanks so much!
Do you know how I can find all the generated videos and images that I've made with Runway? I can only find the ones that I downloaded.
Good question - we'll check! How far back were some of these images?
Thanks for this! Is the process the same now with the new version or do you also have a recent video?
The process is largely the same, but please check out all of our new videos to see how it has all advanced since then!
I wish that not interpolating in the Gen-2 generation reduced the compute costs. Currently I think it costs you the same whether you interpolate or not, which can't be right. Anyway, great vid.
Not long before we see a whole movie.
There is already a South Park episode created 100% with AI! It is also good and has some issues, but those are honestly small. Check it out, it will scare the crap out of you and at the same time totally impress you!
Actually blown away wow
And this was an old demo too!
does the free version of Topaz suck for you as well? I tried it and it was so slow and actually froze my computer. First time I think that's ever happened.
I love the comment of "elevating you as a creative". Don't get me wrong, I'm not anti-AI, I wouldn't be here if I was. But the idea that a process like this somehow elevates you as a creative... it just doesn't. That's just trying to justify what's actually happening here.
As I say, I am not anti-AI, I use it in a lot of my own workflows now, it's why I'm here. But I think people would be arguing (a little) less if we were a little more honest about what's happening and what it all means.
How is this not replacing people? Of course it is, but we should be focusing on how this helps people who would never have been able to create something close to this now realise their own ideas. Especially in smaller personal projects.
Rather than trying to pretend this doesn't somehow replace all the people it would have taken to produce footage like this a year or so ago. That's what gets people's backs up, and rightly so to be honest... (Well, that and the fact that the creative industries are about to change dramatically and creatives, already underpaid, are worried about it... and rightly so.) But times change, as they always have, so it was only a matter of time. Anyway, that's another debate for another day... ;)
Still an incredibly impressive thing, and the footage looks great.
Thanks for the video
Our pleasure!
Wow, you're doing really well 👍 Thanks, obrigado and shukran شكراً
Thank you! 😃 Glad you enjoy it!
I love and am grateful you made this video, cheers for this. I do have one question... how necessary would you say Topaz Video AI is for creating great cinematic videos? You are correct; after checking, it is a little pricey for me at the moment. Are there any alternatives you'd recommend? And if not, do you cover the Topaz software in your course? Again, LOVE your channel. Can't wait to see more.
Hey there! Thanks for the kindness, friend! I would say if quality is your goal, Topaz is 100% necessary, but if you're just playing around with AI you don't have to use it. I also cover Topaz Video AI in the course, but I also provide all the up-rezzed assets, so it isn't 100% necessary.
@@curiousrefuge Thank you Caleb, like so many others have said, I love your work and your demeanor. Looking forward to seeing what you do in the future. Currently saving up for your course. It looks fantastic. I love the work you've showcased on your website. Very inspirational. Cheers again.
In the end, the key is how to make the story interesting.
But "Runway Gen-2" is the best tool.
True, storytelling is the most important aspect of any piece of content!
How did you make all those photos in Midjourney using the same theme and palette?
Man, I can't get anything but weird, blobby stuff. Partially coherent.
Gen3 is now out and should help!
This was very helpful
Glad you enjoyed it(:
Would you recommend this for architecture? My firm has been using Midjourney to help clients visualize features, color schemes, and styles early on. I'm now trying to get a sense of whether Gen-2 is smart enough to do partial rotations. Where Midjourney is unable to think in 3D and we can't pivot and rotate, this seems to have some sense of corners. Tell me more!
Yes, it does do partial rotations. In fact, images with good geometry and leading lines tend to do better in Gen-2. But it will still be about a 50/50 shot to get something usable.
But I would recommend using it.
@@curiousrefuge I'll take more advice if you have it. 3D models of houses are easy to make look great in SketchUp and Revit, but the land always looks cartoony. Any recommended AI tools you have your eye on that take simple geometry and extrapolate?
You are like a young Christoph Waltz, my AI generated friend!
Great video. Question, since I'm a UE5 user: if I render out a video from Unreal, can I use the AI for a better-quality render instead of using path tracing? Please do a tutorial.
excellent tutorial by christoph waltz
Haha I get that all the time.
This is why actors are striking lol
This was exactly the video I was looking for. Thank you so much for making and uploading. Great presentation. Oftentimes I tend to skim over or skip parts because the presentation of the workflow is boring. I did not do so with your video. You've gained a subscriber. Well done, friend.
Glad it was helpful! Glad to have you a part of the community (:
Waiting for the Gen 3. 😏😏😏😏😏😏😄😄😄😄😄
Great sharing.❤
What's your experience with the seed?
How can I make a consistent character in Runway? For example, I am putting together a movie about a person; how can I use the same character in different roles or scenes?
In our course we teach how to get consistent characters. It largely comes down to the types of prompts you use when generating your images. Feel free to check us out at our website! www.curiousrefuge.com
Thank you for all of the information/explanation.
Can I actually write a script and make an animated video with dialogue using this? Or a combination of other AI tools???
I would like to make cooking recipes, is it possible to use Runway?
You could use Runway to generate imagery/video of the food - but not the recipe list itself. That would be more like ChatGPT or even better - your own special recipe! :)
So without a text prompt, Gen-2 still changes my initial image into a totally different one. How do I have it actually animate what I put in, versus using it solely as a prompt?
Hi all, does anyone know what kind of music this is? Can you please tag me?
Hey! We use artlist.io to source music for the majority of our videos!
Anyone have any advice on what prompts are best for making a character look as if they're speaking? I've tried just about everything I can think of, to no avail.
There are lipsync apps that specifically warp the mouth to fit audio.
Is it best to upload a photo with a solid background, even green screen?
Thanks for all 💎👍👍👍🔔💓
Great video. How much of an improvement do you see? It is hard to tell without a side-by-side comparison. It would be good to do a side-by-side comparison of upscaling with Topaz versus the upscaling done by Gen-2.
Topaz Video AI is $300, so not a trivial cost unless one plans on doing a lot of video processing.
Solution: Press 5:13 & 9:51
It's on sale for $249 until August 4th
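For anyone who wants to build their own side-by-side test, here is a simple sketch with ffmpeg (filenames are placeholders; both clips should share the same height and duration):
ffmpeg -i topaz_upscale.mp4 -i gen2_upscale.mp4 -filter_complex hstack -an comparison.mp4
hstack places the two clips next to each other in a single frame so the upscaling differences are easy to eyeball.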
Topaz Video AI has been very useful for upscaling and cleaning up video for me. I prefer the one-time cost to the subscription model so many others are using. Topaz updates the software often, like every other week, it feels like.
this is amazing
Can you animate public domain images? I’d like to see some examples of animations created from photos of landscapes, water, skies, fog, wind, different weather conditions, fires, avalanches, volcanic eruptions, earthquakes, etc. I’d also like to see animations of cooking/food photos.
It is going to get very interesting very quickly!
Easy to follow, cuts to the chase - thank you!
Our pleasure!
Wow... this is super awesome... imagine a one-man project that would usually need hundreds of people. This is a great tool for newbies.
Thank you very much! So glad it was helpful(:
Hey, need some help with Runway basics... I was creating a video of a disposable cup flying, but the texture of the cup changes as the cup moves... any solution?
This is very common, you may need to dial down your motion strength!
@@curiousrefuge What do you mean? I did not get you... thank you @curiousrefuge
@@curiousrefuge you mean slow motion? Please let me know ... would be a great help 🥹
@@curiousrefuge can you please let me know ..whether you mean using slow motion? 🥹
How do you get consistent source frames for each shot?
Awesome!!!!
Thanks.
You're welcome!
How can you add voices?
There are a few different options... D-ID and HeyGen do pretty well.
What's your text style at 0:24?
The font?
Yes please @@curiousrefuge
I guess it's Eastman style @@curiousrefuge
Can you create videos longer than 4s?
Yes! There are tricks to extend videos but now Runway has a native video extender tool :)
Hey, can we make action scenes with this?? I tried, but they don't show any action.
It's difficult but it is possible!
Actors and Writers: We're gonna strike!
Studios: Laughs in AI
ℹ SYNOPSIS:
1. Explains how to generate videos from images via Runway Gen-2.
- PRO: You can generate lots of images (no need to wait for each video to render).
- CON: Removing the watermark requires a paid plan.
2. Explains how to remove small jumps/stutters from those videos with Topaz Video AI (paid).
What speed did you run it at?
It would be great to have a video about the commands you need to write in Runway to get the best results. In the video/filmmaking I'm actually doing now, the Runway results are sometimes good/really awesome and sometimes very bad ("morphing" and "human anatomy" problems are the biggest and most repetitive ones). For instance, I upload a picture with a human and write a command like "eyes moving, lips moving", and the human starts turning into something incredible, changing form, etc. After that I have to either rewrite the command with simpler words (but how could it be MORE simple?!...) or upload a similar picture with fewer objects so that the algorithm can understand which "eyes" and which "lips" should be moving... So if you could make a video describing HOW the intelligence behind the Runway algorithm works, it would be great for better understanding and improving video quality. Thanks for reading my short essay ;)
It's true - sometimes it feels like Runway has a mind of its own. We try our best within our course to give the best tips on how to make these tools work for AI filmmakers. One other thing we would recommend is to check out the Runway social channels, where they provide some "Runway Academy" tips. I believe they even have some tips on facial expressions using the motion brush :). We appreciate the comment and will consider making a public video in the future :)
@@curiousrefuge Well, I checked with the Runway Support Team and the answer was that Runway has many possibilities/tools to "bring" emotions to characters... need more experimenting 🤷♂