Runway Gen-2 Ultimate Tutorial: Everything You Need To Know!

  • Published 26 Sep 2024

COMMENTS • 107

  • @AIDreamsbySkylar
    @AIDreamsbySkylar 1 year ago +9

    Just got their Ultimate plan... yes, $100 a month... but I think it'll be well worth it. I've always made some cool-looking videos and can't wait to create more with Gen2! Great tutorial.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      Thank you! I also did a video featuring the new update here: ua-cam.com/video/k5CC_vg4Jqo/v-deo.html I think that will be $100 well spent!

    • @Pccu16
      @Pccu16 9 months ago

      Please, how can I make a consistent character in Runway? I already generated a character of a man, and I want to use the face of that character in other scenes of the movie.

  • @richardglady3009
    @richardglady3009 1 year ago +7

    Another great video. Thank you. Thanks for showing not just your prompts, but the layout for us to use.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      100% Thank you for watching! And yeah, I always like to showcase the logic behind a prompt. Teach a man to fish, and all that...
      Also, I just realized I'm hungry...

  • @ATLJB86
    @ATLJB86 1 year ago +23

    I’m getting it to produce exactly what I want by first creating an image and uploading it. I use the same prompt for the video as I did for the image. It is adding in my light reflection, ray tracing, and all of that. It listens to my camera angle and pan/zoom settings, but my issue is the duration and quality.

    • @5timesm
      @5timesm 1 year ago +1

      "creating your video first"… where?

    • @Sandeep_Sengupta
      @Sandeep_Sengupta 1 year ago +2

      @@5timesm He said image, not video.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +3

      Yup. I think consistency in prompting helps. Duration will come-- Gen-1 started at 4 seconds before getting bumped up to 15. And quality will follow suit. I actually think there have been improvements to Gen-2 since the early Beta. It already looks a lot better to me.
      (There was a rumor that the Web Version has been upgraded-- no idea if that is true, but there does seem to be a slightly cleaner look)

    • @Rickfort67
      @Rickfort67 1 year ago

      Do you mind sharing a prompt example you used? I’m having trouble getting camera movement and fast character movement.

  • @andythefork
    @andythefork 1 year ago +1

    Appreciate you getting this up today. Why does today have to be my busiest work day? All I want to do is play with the new toy!

  • @TheGhostlyStranger
    @TheGhostlyStranger 1 year ago +1

    I started my journey creating AI content. I've had some fun using pan and zoom videos so far with midjourney, but will eventually try text to video if I can. Thanks for your videos. Take care!

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      It's a great time right now! I'm going to do a Gen-2 follow up next week. In the meantime, you might want to check out the video I did on Pika: ua-cam.com/video/uLpuuRteU7Y/v-deo.html

    • @TheGhostlyStranger
      @TheGhostlyStranger 1 year ago

      @@TheoreticallyMedia signed up for the waitlist

  • @carlosophia
    @carlosophia 1 year ago +3

    Hello, Tim! I really like your videos, and I'd suggest you consider creating one guiding us through more involved storytelling for an ultra-short -- are you up for 1 min?? :o) I have just started with Runway G2, as I'm still trying to run Automatic 1111 locally, and that is hell. Anyway, are you getting "vanishing in and out" problems when objects move in the same zone? I didn't notice that in the NYC scene, but I didn't watch it closely enough. A typical prompt to get you in trouble would be "cars moving in a busy street"... just fill in the rest as you want. If you figure out a solution, please do a video. I'll leave a note if I get any ideas about that. Thanks for the tip about "styling" with "fix seeds". I'm hopeful about where this goes!!

  • @dmreturns6485
    @dmreturns6485 1 year ago +1

    Quick turnaround on this video :P Well done.
    Good info. Thanks.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +2

      Ha! Thank you! I wasn't actually planning on doing a video today, but some afternoon work cleared up and I was like...ehhhhh...I want to be lazy, but...Gen-2 is so much fun!

  • @ripcurrentstudio
    @ripcurrentstudio 8 months ago +1

    🎯 Key Takeaways for quick navigation:
    00:00 🎥 *Introduction to Gen 2*
    - Overview and tutorial of Runway ML's Gen-2.
    - Web UI version and differences from Discord UI.
    - Introduction to the key interface elements.
    01:22 📝 *Writing Effective Prompts*
    - Formula for writing effective prompts: style, shot, subject, action, setting, and lighting.
    - Tips for using keywords in prompts.
    - Importance of keeping character descriptions simple.
    03:02 🌟 *Prompting and Output Examples*
    - Demonstration of prompts and their results.
    - How locking a seed influences consistency.
    - Exploring different prompts and their outcomes.
    06:11 🖼️ *Using Reference Images*
    - Experimenting with reference images in prompts.
    - Challenges and adjustments needed for specific actions.
    - Incorporating reference images as storyboards.
    08:15 📈 *Upscaling and Output Quality*
    - Comparing image quality between free and upscale versions.
    - Benefits of upscaling for higher resolution outputs.
    - Mentioning differences between Discord and web-based versions.
    10:34 🧑‍💻 *Additional Resources and Patreon*
    - Announcement of Patreon for community support.
    - Encouragement to join a smaller community for collaboration.
    - Future plans for the Patreon community.
    Made with HARPA AI

  • @Dave102693
    @Dave102693 1 year ago +1

    Thanks for this video so much!

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      Oh excellent!! So happy this was helpful to you! Updated version here: ua-cam.com/video/k5CC_vg4Jqo/v-deo.html

  • @sanjeevp
    @sanjeevp 1 year ago +1

    Syntax of prompts at mark 1:34:
    Style
    Shot
    Subject
    Action
    Setting
    Lighting
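
    The six-part formula above can be sketched as a small helper. This is a hypothetical illustration only; the field names follow the formula from the video, but the function and example values are assumptions, not anything from Runway itself:

```python
# Hypothetical helper that assembles a Gen-2 style prompt from the
# six-part formula from the video: style, shot, subject, action,
# setting, lighting.
def build_prompt(style: str, shot: str, subject: str,
                 action: str, setting: str, lighting: str) -> str:
    """Join the six fields into one comma-separated prompt string."""
    parts = [style, shot, subject, action, setting, lighting]
    # Drop any field left empty, then join the rest in formula order.
    return ", ".join(p.strip() for p in parts if p.strip())

prompt = build_prompt(
    style="cinematic",
    shot="wide shot",
    subject="an old fisherman",
    action="casting a net",
    setting="on a misty lake at dawn",
    lighting="soft golden light",
)
print(prompt)
# cinematic, wide shot, an old fisherman, casting a net, on a misty lake at dawn, soft golden light
```

    Keeping the fields in a fixed order like this also makes it easy to reuse the exact same prompt for the image and the video generation, as other commenters here suggest.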

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      Correct. I should have put together a PDF for it. Maybe I’ll do that later and add it to the pinned.

  • @to6941
    @to6941 7 months ago +1

    Good video, however anyone unfamiliar with Discord and what it does may get a little lost. For instance, you make a reference to using Runway on Discord, but when I looked up Discord it didn’t give me any info on this and just appeared to be a platform for creative communities to chat and discuss work. Anyway, I just don’t have enough knowledge to utilise whatever Discord is offering.

    • @TheoreticallyMedia
      @TheoreticallyMedia  7 months ago

      So, that is a pretty old video now (at least in AI terms!), and Runway is no longer on Discord, but rather is a fully fledged website that you can find at runwayml.com/
      I haven't actually done a full tutorial on the site (I should probably do that), but I do cover a lot of the new features, like the Motion Brush: ua-cam.com/video/zVO16lU3AQ4/v-deo.html
      A lot of interesting stuff has improved since this video! I think you'll be quite pleased!

  • @suzevidz
    @suzevidz 1 year ago +1

    Thanks!

  • @robertovitale6719
    @robertovitale6719 6 months ago

    Thx for the vid...very helpful insights :)

  • @DrFrankenskippy
    @DrFrankenskippy 9 months ago

    What I find frustrating with my experiments is the lack of precision on placement when it comes to prompts. E.g., if you want an element or character in your scene to be on fire, it will add the fire quite randomly to the surrounding area, not tight to any given region you try to describe.

  • @BuckBowen
    @BuckBowen 1 year ago +1

    Thank you.

  • @leanderjackiegrogan
    @leanderjackiegrogan 9 months ago

    Thank you. I am a subscriber. My question is: does Runway Gen offer a place for "negative" prompts? I don't want any morphing into a whole new person; I want to keep the same character throughout the video. Thanks

  • @jojohn103
    @jojohn103 1 year ago +1

    I have a doubt: can we add dialogue via prompt and get an output where the actors are talking / a kind of lip sync?

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      So, not quite, but ALMOST. I did a pretty insane workflow in this video- it works, maybe not photorealistically, but you can see that it'll get there: Create Awesome AI Animation with this Workflow: Murf.AI, Midjourney, Gen-2, Kaiber.AI & More
      ua-cam.com/video/eUrtX432KUI/v-deo.html

  • @rachelc212
    @rachelc212 1 year ago +1

    Great content! I just subscribed to your channel and turned on notifications. So I can make videos or shorts with my script or text and Gen2 creates the video itself, correct? If I don't need to find copyright-free photos, and I don't need to take pics to put together to make/edit a video (which I've never done before), I'll pay right away through your affiliate link.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      Awesome! thank you for the sub! Ummm, pop over to the Discord here: discord.gg/6PMENW3k and drop me a line in Project Help. I think that'll be a lot easier than chatting through comments. I have some thoughts for you...

  • @AG_before
    @AG_before 1 year ago +1

    No kung fu? This is ridiculous! Ha!
    Thank you for the video. 👍 I hopped on there the minute I got the email. I can't wait for 6 months from now.
    Appreciate your videos as always.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      Haha, believe me, I TRIED for Kung Fu! And a while back I tried to do a Samurai Jack animated sword fight, but no go there as well.
      But, I think that we'll get there soon. I have a sneaking suspicion that Gen-1 footage is being used to train Gen-2-- so, I'm going to start rolling up some Jackie Chan movies now!

  • @voyage-digital
    @voyage-digital 1 year ago +1

    Thank you very much, things are immediately better with your method 👍. Is there still an advantage to going through Discord or is it the same now?

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      excellent to hear! There's also an updated version here: ua-cam.com/video/k5CC_vg4Jqo/v-deo.html
      I still need to check the Discord generator, but I think that's mostly dead at this point. By the way, have you checked out Pika? I've been having a LOT of fun with that, and it is free: ua-cam.com/video/0NRT7K3YkPI/v-deo.html

    • @voyage-digital
      @voyage-digital 1 year ago

      @@TheoreticallyMedia I'm looking at pika, it's not bad at all 👌😱

  • @CG_Spykid
    @CG_Spykid 1 year ago +2

    Thank you very much! Although I am sad that we can only generate 4-second videos.

    • @BassmeantProductions
      @BassmeantProductions 1 year ago +1

      The free version is 4 seconds. The other version is 15.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      The 15 second one is just Gen-1, I think. But-- it won't be long until that cap is lifted. They already have a slider built in.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      For now. Give it a month or so. One trick I was using was to slow the footage down in a video editor... it's a good trick if you need a longer shot.

    • @vp62ift
      @vp62ift 1 year ago

      @@BassmeantProductions The paid version is 4 seconds; hopefully they change that soon.

  • @PeterGiblin
    @PeterGiblin 1 year ago +1

    So I'm trying to animate an image from Midjourney but Runway just takes it as a reference rather than simply animating the image I'm happy with when I give it a prompt. Is there a way to stop it from generating new imagery and just take the image as is and use the text prompt to inform the action?

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      Check out the latest video, I think you'll be quite interested: ua-cam.com/video/k5CC_vg4Jqo/v-deo.html Short answer: You can!

  • @zemli4603
    @zemli4603 1 year ago

    I love the effect in the city examples. It’s looking choppy. How did you achieve that?

  • @purplepink5630
    @purplepink5630 1 year ago +1

    Hmmm, interesting. I feel the more expensive Kaiber results were better, especially for the skater one you finished your previous video with. Hmm 🤔 not sure if it's too early to use Gen 2 until its results are a bit better.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      To be honest, I think the current best “look” for straight AI video is running Gen-2 through Kaiber. I did a video about that and I really liked the look.
      Personally, I feel like there are interesting things you can do with text to video, but the really interesting part is video to video. I’ve seen some interesting results lately out of Gen1, and I’d love to take that on a head-to-head with Kaiber.
      As far as the super trippy stuff? Can’t beat Kaiber on that! It looks so good!

    • @purplepink5630
      @purplepink5630 1 year ago

      @@TheoreticallyMedia Nice insights, friend... That would be a great video, to see how current Gen 1 fares against Kaiber in the text-to-video realm. I seriously looked into Kaiber last night and am really seeing its worth, especially as it improves... It's like MJ vs Leo... And Gen 2 is still unavailable in mass, so it would be intriguing if we could see the pros of when to use Gen 1 in the meantime. And then, there's that part of me that looks at the editing/workflow time investment and ponders just using unique stock videos and strategic transitions in Filmora 11, with no funky hands/eyes lol. Also... your thoughts on D-ID for avatar animation vs what you had to do here for animating? It's quite expensive for commercial use though. Thanks for your efforts and comprehensive tutorials. ☺️

  • @TheBlackClockOfTime
    @TheBlackClockOfTime 1 year ago +2

    Goddamn, this shit's going to get out of hand real soon. In June 2024 we'll look back at this moment with endearment: "remember when AI videos weren't yet absolutely perfect and you couldn't make full movies with a short sentence?"

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      I know, right? I keep thinking that the whole "wonky" look is going to be looked back on with nostalgia at some point. Like, someone will make a movie in 2040, they'll have a flashback scene to 2023, and everything will look like Gen-2!
      Actually, that's an awesome idea. Remind me in 2040!!

  • @romance27
    @romance27 9 months ago

    I need to learn how to make the characters move. I have hyper-realistic characters for a music video reel I am trying to create, but it's hard to give the characters human-like movement: blinking, walking, smiling, etc. Any tips? Thank you, guys.

  • @splicegraph
    @splicegraph 1 year ago +1

    My first attempt was a travesty: I tried doing a 16mm 1980s family home movie... don't know what those kids were doing, but that's not how you eat ice cream! Second attempt I used an image reference from MJv5.1 renders and a similar prompt and got very near exact results. The hand anatomy is messed up like old MJ. The video quality is terrible, but hey, this is very promising.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      totally. And agreed-- I think the overall video quality is around MJ v2? But...I mean, it won't be long until we're at v4 video and beyond.
      Currently, I think of Gen2 as a fun exploratory tool, where you can make some fun short films. 2 years from now? The mind boggles.

  • @stijill
    @stijill 7 months ago +1

    Does the color wheel in the corner get removed if I get the upgrade?

    • @TheoreticallyMedia
      @TheoreticallyMedia  7 months ago +1

      It does!

    • @stijill
      @stijill 7 months ago

      Posting before thinking never works for me. I forgot I can read.

  • @ATLJB86
    @ATLJB86 1 year ago +3

    I found a work-around for the low 4 seconds you get. I just create a new DaVinci Resolve project and set it to 16 fps. Then I import the video, activate retime control, and drop it to 50%. My work-around for low quality is to import the video into DaVinci Resolve, then export each frame as a .png, then upscale it with chaiNNer. Put the upscaled .png files back in DaVinci, then create the video. Problem is, that's a lot of work… Not worth paying at this point, IMO.
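
    The arithmetic behind that slowdown work-around can be checked with a quick sketch. This is a hypothetical illustration: the 4-second clip length and the 16 fps timeline come from the comment above, but the functions themselves are assumptions, not part of any tool:

```python
# Sketch of the arithmetic behind the slowdown work-around described
# above: retiming a clip to 50% speed doubles its on-screen duration.
def retimed_duration(clip_seconds: float, speed_percent: float) -> float:
    """Duration after retiming: playing at 50% speed takes twice as long."""
    return clip_seconds * (100.0 / speed_percent)

def frame_count(clip_seconds: float, fps: int) -> int:
    """How many frames you'd export for per-frame upscaling."""
    return int(clip_seconds * fps)

# A 4-second Gen-2 clip retimed to 50% plays for 8 seconds...
print(retimed_duration(4.0, 50.0))   # 8.0
# ...and exporting it on a 16 fps timeline means 64 individual .png
# frames to push through an upscaler and reimport.
print(frame_count(4.0, 16))          # 64
```

    The frame count is also why the commenter calls this "a lot of work": every extra second of footage adds another 16 files to the upscale-and-reimport round trip.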

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +2

      That's brilliant! But yeah, a lot of work too. Sometime over the weekend I'm going to plug some Gen-2 footage into Topaz and upscale it to see what happens.

    • @user-pc7ef5sb6x
      @user-pc7ef5sb6x 1 year ago +2

      You can also pass it through the Stable Diffusion upscaler with denoising strength below 50.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      @@user-pc7ef5sb6x taking Gen-2 footage and then post processing it with other tools is what I’m really interested in right now. Yup, upscale it and stabilize, then hand it to an after effects wizard to see what they can do!
      (I am an AE Caveman, sadly)

    • @ATLJB86
      @ATLJB86 1 year ago

      @@user-pc7ef5sb6x Problem is, I'm making a short film and that would take forever. But yes, many options.

  • @imperator21
    @imperator21 11 months ago +1

    Did Gen 2 change their model or something? Remember those weird A.I. commercials like pizza nuggets? I'm not getting any of those results. 😮

    • @TheoreticallyMedia
      @TheoreticallyMedia  11 months ago

      Yeah. They’ve slowly updated their model and now a lot of those super surreal results have been slowly vanishing.
      I said it a lot back then: we were eventually going to hit a point where the weird was hammered out, and I was going to miss it.
      Looks like we’re getting there now.

  • @borrowedtruths6955
    @borrowedtruths6955 1 year ago +1

    Any ideas on what to type to make the characters look as if they're speaking? I've tried "man talking," "woman speaking," "people having a conversation," and can't get their mouths to move.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      Yeah, that's a tough one-- You can try the new update they just pushed out by uploading a reference image and it seems like they start "talking" then. I think in general, AI actors start to motormouth if they try to talk. I've seen some WEIRD outputs with speaking AI characters.

    • @borrowedtruths6955
      @borrowedtruths6955 1 year ago

      @@TheoreticallyMedia Appreciate the answer, I'll keep trying. Thanks again.

  • @maxkitaev3524
    @maxkitaev3524 1 year ago

    All my generations are slow. Is there a way to speed 'em up? Adding 'fast' doesn't help.

  • @eri7-11
    @eri7-11 1 year ago +1

    omg you look like the James Bond character!!!!

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      Haha, maybe one of the villains! I also look super silly in a Tuxedo! (or at least feel super silly!)

  • @AuthoredProject
    @AuthoredProject 1 year ago +1

    I saw a post where a person seemed to use Gen2 to edit their own videos. But I don't see a way to do that. Is that possible yet?

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      I don’t think so? Runway has an online video editor as a separate product, so maybe they switched over to that?
      It’d be smart if Runway’s video editor automatically had access to your Gen2 videos, but I don’t think that’s happened. Then again, I can’t fully say, since I don’t use it.
      Are you looking for video editing software?

    • @MagdalenRose
      @MagdalenRose 1 year ago

      @@TheoreticallyMedia (my bad other channel reply lol) I’ve been using the desktop version. But maybe he was using Gen 1? It seemed way too good for Gen 1 so I assumed it was 2.

  • @bbmm996
    @bbmm996 1 year ago +1

    Can you change the aspect ratio?

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      Not yet, as far as I know, but I’m sure they’re working on vertical. Seems like a no brainer for IG/TT and YTShorts

  • @GaryJr530
    @GaryJr530 1 year ago +1

    hey dude on the video..guess what?

  • @choppergirl
    @choppergirl 9 months ago +1

    Gen-2 is awful.
    Most of what it churns out based on the image you prompt it with... turns out to be total garbage.
    Maybe 1 out of 20 results is usable for anything at all.
    I'm canceling after my first month runs out. It's not useful for anything at this stage. I even trained a model with 15 images and it won't use the model when I put it in the prompt.

    • @TheoreticallyMedia
      @TheoreticallyMedia  9 months ago

      Are you on the Pika waitlist? You may want to try that out. I put up a video on the 1.0 update today. I think you might like it.
      The models all do different things. I still say the real gem of Runway is Gen1. That’s the thing everyone is sleeping on.

    • @choppergirl
      @choppergirl 9 months ago

      @@TheoreticallyMedia Rolls eyes... what can any of this garbage be used for... I don't know... it can't even get a simple pan right... lol
      ua-cam.com/video/qFC0qdAUTgU/v-deo.html

  • @boyboy168
    @boyboy168 1 year ago +1

    Just wasted time. The generated video makes my pretty girl images very ugly! Disappointed with this app.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      Yeah, it's early in. It'll improve as time goes on. Your pretty girl will one day shine in full motion!

  • @NuvoVision
    @NuvoVision 1 year ago +1

    Set lighting is a tad low, bud... let's see that movie star face! 😬

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      Ha! Shall do! I’m always aiming for that “moody movie lighting” but, you’re totally right, this is YT.
      I’m not going to use the stupid ring light though. I hate that circular reflection as an eye catch light. It kinda creeps me out!

    • @NuvoVision
      @NuvoVision 1 year ago

      @TheoreticallyMedia That's fair... and I agree, moody/cinematic is great. But it's best to start with too much light and bring it down in post... you can still get the look, but with more detail. Get some light on at least half of ya for a shadow... it'll look great. Take care 👊

  • @nahiddotai
    @nahiddotai 1 year ago +11

    You know what's crazy? I was playing with Gen 2 this morning by uploading an AI avatar of myself in Pixar style to Gen 2 as an image prompt, and I luckily used a very similar prompt structure to the one you mentioned - style, shot, subject, action, setting. I didn't include the lighting though, and this was my first ever attempt at using Gen 2, and I was actually impressed with the result. I mean, it had deformed hands and all, but I was generally happy with what Gen 2 gave me. I compare this to early versions of Midjourney - I bet in a year (or less) we're going to get some pretty fantastic results. Thanks again for your awesome video and tips ❤

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +2

      Excellent to hear! And although I didn't really go over it here, I do think Gen2 does particularly well with animated looks. I think our brains accept the video a little better when you have a stylized look, as opposed to the "rubber people" that are fairly common with Gen2 in its current state.
      I don't know if you caught it, but I had a pretty fun workflow with taking Gen2's output and popping it into Kaiber (plus lip sync for dialogue!): ua-cam.com/video/eUrtX432KUI/v-deo.html

    • @AG_before
      @AG_before 1 year ago +1

      Agree! 💯 A year from now will be 🤯 And we're discussing making videos from words. Ridiculous!

    • @nahiddotai
      @nahiddotai 1 year ago

      @@TheoreticallyMedia I've got it saved to watch asap, after watching this video. Not sure how I missed it earlier!

    • @jojohn103
      @jojohn103 1 year ago

      I have a doubt: can we add dialogue via prompt and get an output where the actors are talking / a kind of lip sync?

  • @schumanncombo
    @schumanncombo 1 year ago +1

    Anyone know how to create video loops with this, that can be "seamless" in time?

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      There’s a trick where if you screenshot your last frame of video and then feed it back in as an image prompt, you’ll get a continuation. But, the video quality will degrade.
      Unless you were talking about a boomerang type effect?

  • @SelfcareSofie
    @SelfcareSofie 11 months ago +1

    Are there ways of combining elements from these generated videos with actual films? Would masking work? Essentially turning video parts into pngs and pasting them onto others, then editing lighting as though it is all part of the same video.

    • @TheoreticallyMedia
      @TheoreticallyMedia  11 months ago

      Oh, 100%. You’d be in the land of After Effects, or some other compositing software, but 100% doable, and I think the results would be amazing.

    • @romance27
      @romance27 9 months ago

      Is this the only way to bring my characters alive? Make them walk, smile? For a music video reel of about 30 seconds.

  • @bySterling
    @bySterling 1 year ago +1

    Awesome vid! Great how much smoother the vids are now 🎉

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      It's crazy how much better it is getting. Did you see the video I did on Kaiber? In that one I used a source video I previously used for a Gen-1 test. I mean, obviously Kaiber blew it out of the water-- but I wanted to check to see when I posted that original video.
      It was 4 months ago. Gen-1 was four months ago.
      I...was shocked. It feels like that was an eon ago.

  • @VideoNash
    @VideoNash 10 months ago

    thanks