This Will Change Animation Forever. NEW Gen1 AI Animation Tutorial.

  • Published 15 Jul 2024
  • -Mom, can we have Corridor Crew? -We got Corridor Crew at home, son.
    One hour. That's all you need for cool AI animations. I'll show you Runway's new Gen-1 and how to create cool-looking AI animations with it.
    research.runwayml.com/gen1
    FREE Prompt styles here:
    / sebs-hilis-79649068
    Flowframes:
    nmkd.itch.io/flowframes
    Support me on Patreon to get access to unique perks! / sebastiankamph
    Chat with me in our community discord: / discord
    Control Lights in Stable Diffusion
    • Control Light in AI Im...
    LIVE Pose in Stable Diffusion
    • LIVE Pose in Stable Di...
    My workflow to Perfect Images
    • Revealing my Workflow ...
    ControlNet tutorial and install guide
    • NEW ControlNet for Sta...
    Ultimate Stable diffusion guide
    • Stable diffusion tutor...
    The Rise of AI Art: A Creative Revolution
    • The Rise of AI Art - A...
    7 Secrets to writing with ChatGPT (Don't tell your boss!)
    • 7 Secrets in ChatGPT (...
    Ultimate Animation guide in Stable diffusion
    • Stable diffusion anima...
    Dreambooth tutorial for Stable diffusion
    • Dreambooth tutorial fo...
    5 tricks you're not using
    • Top 5 Stable diffusion...
    Avoid these 7 mistakes
    • Don't make these 7 mis...
    How to ChatGPT. ChatGPT explained:
    • How to ChatGPT? Chat G...
    How to fix live render preview:
    • Stable diffusion gui m...

COMMENTS • 127

  • @optimoos
    @optimoos A year ago +3

    wow. game changer! thanks for keeping us in the loop with the latest, Sebastian!

    • @sebastiankamph
      @sebastiankamph A year ago +1

      You bet! Glad to have you around. Keep working on those animations 🌟

  • @ixiTimmyixi
    @ixiTimmyixi A year ago +15

    Processing foreground and background separately will work even better. (I haven't finished the video yet, so you may have already done that lol)

    • @sebastiankamph
      @sebastiankamph A year ago +4

      That surely sounds like the best way. I need to test it further to see if it can be improved.

    • @ixiTimmyixi
      @ixiTimmyixi A year ago +2

      @@sebastiankamph Batch processing your masks is insanely powerful. You can work with 4K video and retain the resolution without upscaling. It gives the AI many more pixels to work with, providing impressive amounts of detail. Let me know if you need help. I can't find time to make a guide. You can if you want.
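
      A minimal sketch of the mask-based foreground/background split described above, in Python with OpenCV. It assumes you have already exported per-frame PNGs for a stylised foreground, a background, and a matching alpha mask; all file names are placeholders.

        # Composite a stylised foreground over a separately processed background
        # using an alpha mask (hypothetical file names).
        import cv2
        import numpy as np

        fg = cv2.imread("fg_stylised_0001.png").astype(np.float32)
        bg = cv2.imread("bg_0001.png").astype(np.float32)
        mask = cv2.imread("mask_0001.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

        # Match sizes, then alpha-blend: out = fg*mask + bg*(1-mask)
        bg = cv2.resize(bg, (fg.shape[1], fg.shape[0]))
        mask = cv2.resize(mask, (fg.shape[1], fg.shape[0]))[..., None]
        out = fg * mask + bg * (1.0 - mask)
        cv2.imwrite("composited_0001.png", out.astype(np.uint8))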

  • @ixiTimmyixi
    @ixiTimmyixi A year ago +11

    I was just watching another YouTuber play with Gen-1 and I kept saying to my wife, "Why isn't anyone styling frames from the input video? That would give much more consistent results" lol. Leave it to Sebastian. Thanks for sharing, I can't wait to get access for myself. I wonder if they're using the Alt IMG2IMG method for their processing. It's a HUGE game changer for SD animations.

    • @sebastiankamph
      @sebastiankamph A year ago +5

      Hah, great minds think alike. I first tried to style each video input as well, having 5 style frames. But I didn't notice much improvement in the little testing I did.

  • @skattyopt
    @skattyopt A year ago +4

    I have just got access to Runway, can't wait to have a go, so this video has come at the right time for me.

  • @DannyRndm
    @DannyRndm A year ago +1

    1:53 No need to screenshot. You can export the frame as a JPG or PNG file.
    In the Program Monitor, click the Export Frame button on the lower right.
    In the Export Frame dialog, choose the desired filename, still-image format, and path, clicking the Browse button to open the Browse for Folder dialog.
    NOTE: In Windows, you can export to the BMP, DPX, GIF, JPEG, PNG, TGA, and TIFF formats. On the Mac, you can export to the DPX, JPG, PNG, TGA, and TIFF formats.
    Click OK to export the frame.
    Great content. Keep up the good work!
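
    If you would rather script the frame export than use Premiere's Export Frame button, a single frame can also be grabbed with ffmpeg. This is a minimal sketch, assuming ffmpeg is installed and on PATH; the timestamp and file names are placeholders.

      # Export one frame at 1:53 as a PNG via ffmpeg.
      import subprocess

      subprocess.run([
          "ffmpeg",
          "-ss", "00:01:53",      # seek to the frame you want
          "-i", "input.mp4",
          "-frames:v", "1",       # write exactly one frame
          "frame.png",
      ], check=True)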

  • @Gerard_V
    @Gerard_V A year ago

    How cool! Let's see if I have access soon! I would like to try it!

  • @blackvx
    @blackvx A year ago

    Amazing work! The ski slope is the best. Thanks!

  • @ShawnFumo
    @ShawnFumo A year ago +3

    Clever way around the limit! Another option might be to leave a bit of overlap in the videos sent to Gen-1 and then crossfade them.
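
    The overlap-and-crossfade idea above can be scripted as well. A minimal sketch using ffmpeg's xfade filter, assuming both clips share the same resolution and frame rate; file names and the offset are placeholders.

      # Crossfade two overlapping Gen-1 clips: start a 1-second fade 3 seconds
      # into clip_a.mp4 (the offset depends on how much overlap you left).
      import subprocess

      subprocess.run([
          "ffmpeg",
          "-i", "clip_a.mp4",
          "-i", "clip_b.mp4",
          "-filter_complex", "xfade=transition=fade:duration=1:offset=3",
          "crossfaded.mp4",
      ], check=True)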

  • @TheAiConqueror
    @TheAiConqueror A year ago

    Really cool workflow! 😬👍

    • @sebastiankamph
      @sebastiankamph A year ago

      Thanks! Just waiting for you to make something cool now once you get in 🌟

  • @Modioman69
    @Modioman69 A year ago +4

    Nice job bypassing the current limitations with a nice workflow. I can foresee background renders composited with green-screen actors, each passed separately through Gen-1, becoming a common workflow once consistency is better. You might not even need the green screen if the AI can ever differentiate between subjects in real time.
    Now what is really going to blow people's minds is when Imagen text-to-video gets fully ironed out. Then the sky is the limit for content creation. Make your own movies in any style, and everyone then uploads to an Imagenhub full of user content. I'm just spitballing, but time will tell. Keep up the great content, Sebastian. 👍

    • @sebastiankamph
      @sebastiankamph A year ago +2

      I think you're on to something there. User generated content will be the future more than it is today. Thank you! 😊🌟

  • @AvantiMusicNJ
    @AvantiMusicNJ A year ago +2

    Damn, getting consistency like this is insane. Doing it on a green screen would be clutch.

  • @goll4m
    @goll4m A year ago +57

    It MIGHT change animation as soon as AI can fix the frame-to-frame variation and keep very good consistency.

    • @roverdrammen3977
      @roverdrammen3977 A year ago +1

      Hahahahahahaha!

    • @yoongichi9557
      @yoongichi9557 A year ago

      Even then it wouldn't look like traditional animation… Traditional animation has those deformed-looking in-between frames that don't occur in reality, but they add that extra punch and oomph to the resulting animation. An AI would have to be trained on all the various types of contexts for super-deformed comedic moments, chibi moments, crazy foreshortening during motion, etc.

  • @EmanueleDelFio
    @EmanueleDelFio A year ago

    Uhhh, this stuff is fire! Thanks, Seb!

    • @sebastiankamph
      @sebastiankamph A year ago

      I'm expecting some new stuff with this on your TikTok soon! Don't forget where you saw it first 🔥

  • @jodus
    @jodus A year ago

    that's really cool

  • @ekmaukaKEJRIWAL
    @ekmaukaKEJRIWAL A year ago

    This is crazyyy

  • @MAKARUTV
    @MAKARUTV A year ago

    Thank you. 🎉🎉🎉🎉🎉

  • @creativeleodaily
    @creativeleodaily A year ago

    This is so cool. The YouTube series I am working on can definitely use this 😮😮😮

    • @sebastiankamph
      @sebastiankamph A year ago

      Go for it! Show me the results when you're finished 🌟

  • @dekompose
    @dekompose A year ago

    Amazing

  • @innovanimations
    @innovanimations A year ago +2

    I have been transforming videos into cartoons for almost a year, but this AI has changed it all.

  • @thegoldensmith
    @thegoldensmith A year ago +17

    Amazing stuff! I'm blown away with how fast this tech is progressing. Crazy to think Stable Diffusion has only been available for 6 months. This next year is going to be incredible!

  • @peterxyz3541
    @peterxyz3541 A year ago +1

    I've seen the Corridor Digital attempt at this, "anime rock paper scissors". I'm working on doing something similar.

    • @sebastiankamph
      @sebastiankamph A year ago +1

      Very cool! They've got a whole team working on it, but with the right tools you can get pretty far solo as well.

  • @human_shaped
    @human_shaped A year ago +3

    We need this as an extension to Stable Diffusion.

  • @user-hg2je1kh6e
    @user-hg2je1kh6e A year ago +1

    This is amazing, but could you make a tutorial explaining in more detail how to do the edits in Premiere so that laypeople like us can do it too?

  • @DanielPartzsch
    @DanielPartzsch A year ago

    Thanks. I'd be very curious whether this also works for driving realistic character images with an input video...? Did you maybe have a chance to test this or see something like this somewhere?

    • @sebastiankamph
      @sebastiankamph A year ago +1

      I haven't seen any good examples yet. But I'm sure someone will manage it!

  • @evil1knight
    @evil1knight A year ago +1

    Let's hope an open-source paper comes out for this so it can be added to Automatic1111; I feel like this is very rudimentary.

  • @madcolors4013
    @madcolors4013 A year ago

    What about a green screen? So you can generate the background and/or the video separately, so it won't change as much?

  • @WoWKeila
    @WoWKeila A year ago

    Hi! Do you think reducing the video frame rate and asking Gen-1 to do something with it could be a viable solution? I mean, you could switch to 10 FPS for example, generate your result, and use Flowframes to make it smoother in the end. This would allow you to generate 3 times more images with the same exact seed/background/whatever. Thank you in advance for your response.

    • @sebastiankamph
      @sebastiankamph A year ago

      Hey, probably yes. I played around a bit with it, trying different settings.
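
      As a rough sketch of that approach, the source can be dropped to 10 fps with ffmpeg before sending it to Gen-1, then interpolated back up with Flowframes afterwards. Assumes ffmpeg is installed; file names are placeholders.

        # Reduce the input to 10 fps so the same clip length needs a third of
        # the frames, then smooth the Gen-1 output later with Flowframes/RIFE.
        import subprocess

        subprocess.run([
            "ffmpeg",
            "-i", "source.mp4",
            "-vf", "fps=10",
            "source_10fps.mp4",
        ], check=True)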

  • @federicogonzalezgalicia3041

    Hi, I get this every time I try to generate an image from text:
    RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
    Any solutions?
    (I have an iMac with 64GB RAM, btw)
    Thank you very much
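
    That error usually means half-precision (fp16) layers are being run on a CPU/Apple backend that does not support them. If this is the AUTOMATIC1111 web UI (an assumption), a common workaround is to launch with half precision disabled, for example:

      # Launch the web UI with fp16 disabled (assumes AUTOMATIC1111's launch.py).
      import subprocess

      subprocess.run(
          ["python", "launch.py", "--no-half", "--precision", "full"],
          check=True,
      )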

  • @scetchmonkey007
    @scetchmonkey007 A year ago +1

    It's not changing animation unless you can do it from scratch; this is just another version of motion capture.

  • @judgeworks3687
    @judgeworks3687 A year ago

    I wonder, if you ran the Gen-1 videos through with 'Only_foreground: True', could you render just the figure, then either composite the AI-animated figure with a still of the SD bricks, or do two renders of the Gen-1 videos, where the first run is only foreground and the second group of renders is only background?

    • @sebastiankamph
      @sebastiankamph A year ago

      That'd be something! I tried to AI mask the character out and put him in front of a scene. It was kinda meh, but with more work it could be really cool.

    • @judgeworks3687
      @judgeworks3687 A year ago

      @@sebastiankamph Btw, your tuts are great. And you have a very calm delivery style, which is refreshing, especially as AI is somewhat frenetic in that it updates almost every day 🙂 Thanks so much for your generous tutorials and walk-throughs.

  • @MonologueMusicals
    @MonologueMusicals A year ago

    I've searched for the aspect ratio extension and don't see anything. Anyone got a link?

  • @timandersen8030
    @timandersen8030 A year ago +2

    Where is the open source version of Gen 1 ???

  • @nikhilprasanth829
    @nikhilprasanth829 A year ago

  • @bernhard.design
    @bernhard.design A year ago +1

    yeah. Just let us know when an unlimited release goes public. I guess that’s about it. Creative people can start producing their own cool cartoon or anime shows. Thx for sharing, cheers 🍻

  • @niatro
    @niatro A year ago

    Hi, how can I join the Runway Discord? Is it free?

  • @arwatayeb6386
    @arwatayeb6386 A year ago

    To add the anime effects, is it free??

  • @mischamim744
    @mischamim744 A year ago

    I wonder if Runway can be pumped into Stable Diffusion? Is this also a model, or? This is already real animation that anyone can use. Thank you for the video.

    • @sebastiankamph
      @sebastiankamph A year ago

      I don't see a way yet, I think their API is closed. But maybe in time.

  • @ireincarnatedasacontentcreator

    May I know the download source of your checkpoint and style, please?

  • @michaelli7000
    @michaelli7000 A year ago

    I find that when I try to combine different clips together into a long one, there are a couple of frames missing in each Gen-1 output clip, which makes the clip transitions misaligned. How do you solve this missing-frames problem? Thanks a lot.
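
    One way to pin down where frames go missing is to count the decoded frames in each Gen-1 output clip before stitching. A minimal sketch, assuming ffprobe (part of ffmpeg) is installed; clip names are placeholders.

      # Print the number of video frames in each clip to spot dropped frames.
      import subprocess

      def count_frames(path: str) -> int:
          result = subprocess.run(
              ["ffprobe", "-v", "error", "-count_frames",
               "-select_streams", "v:0",
               "-show_entries", "stream=nb_read_frames",
               "-of", "csv=p=0", path],
              capture_output=True, text=True, check=True,
          )
          return int(result.stdout.strip())

      for clip in ["gen1_part1.mp4", "gen1_part2.mp4"]:
          print(clip, count_frames(clip))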

  • @rayamc607
    @rayamc607 A year ago

    Great tech, but good for some things and not others. It is basically a rotoscoping filter: it changes video, but it still looks like video with a filter. Great for the animation toolbox or a quick shot like you used. Corridor Crew's animation is epic but still required an epic amount of work and man-hours to put together... and still looks like rotoscoping... Would be good to see any animation work you have done apart from playing around with AI. I mean, do you understand the topic you are talking about? Not being funny, just interested to know :-)

  • @user-nc2hs4rp7l
    @user-nc2hs4rp7l A year ago

    Amazing!! Beautiful, awesome, god

  • @bigdaveproduction168
    @bigdaveproduction168 A year ago

    And do you have an idea of how to keep the same background?

  • @GameManCZ2000
    @GameManCZ2000 A year ago

    How do I get the private Discord?

  • @asterlofts1565
    @asterlofts1565 A year ago

    And this is only Gen-1... and now ModelScope is out!

  • @JohnDoe-cb9ep
    @JohnDoe-cb9ep A year ago +1

    I hate it when you can't do something locally on your PC; I don't want to register, get access, share data, etc.
    That's why Stable Diffusion blows my mind, with its features available at any time, without registration and without limitation.

    • @sebastiankamph
      @sebastiankamph A year ago

      Agreed! But I'll take what I can get with new tech 😊

  • @BenCaesar
    @BenCaesar A year ago

    Guys, this is rotoscoping with a filter.
    Let's not disrespect animators or rotoscoping.

  • @satishpillaigamedev
    @satishpillaigamedev A year ago

    Each generation using a different seed might be a reason for the background change?

    • @sebastiankamph
      @sebastiankamph A year ago +1

      I used the same seed for this tutorial, but I'm sure there are ways around it by working with foreground and background separately.

    • @satishpillaigamedev
      @satishpillaigamedev A year ago

      @@sebastiankamph Ya, that should be possible with ControlNet and a plain bg.

  • @ChrisPiciullo
    @ChrisPiciullo A year ago +1

    This is amazing, considering Corridor Crew just did it by hand the hard way using Stable Diffusion. One weird thing though... why'd it make the actor "white"? Is there a way to keep it closer to the source material?

    • @sebastiankamph
      @sebastiankamph A year ago +3

      With a team their size, they have the luxury to make it really detailed by hand 😅 The change came from Stable Diffusion's style prompt. Since I wrote "martial arts master" and had lots of anime prompts, it generally turned towards the region of Asia. I could just as easily have used another style (like any of the thumbnail images).

    • @ChrisPiciullo
      @ChrisPiciullo A year ago

      @@sebastiankamph Oh, totally. That was my point about Corridor. This makes it so that anyone can do essentially what they did, which is so amazing. These tools are advancing so far, so fast. Regarding the prompts, I honestly hadn't considered that they would be smart enough to do that without specifics. Maybe it's the "old person" in me, but I'm continuously amazed by what these tools can do. Thanks for clarifying!

  • @classichungamatv7012
    @classichungamatv7012 A year ago

    How do you get access????

  • @11305205219
    @11305205219 A year ago

    I use macOS. How can I get Flowframes?

  • @JohnVanderbeck
    @JohnVanderbeck A year ago

    Let me know when it is something I can do offline on my own machines.

    • @sebastiankamph
      @sebastiankamph A year ago

      You can do what Corridor Crew did on your own machines, but that takes a loooooot more effort. This will have to wait a bit 😊

  • @ishirnmehta1753
    @ishirnmehta1753 A year ago

    Any idea how to upscale the 512x512 footage to 4K?

    • @sebastiankamph
      @sebastiankamph A year ago

      You could try an AI video upscaler. Check out Topaz, for example. It's not free, however. Thank you very much for your continued support 🌟🤩
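
      As a quick, free baseline (not an AI upscaler like Topaz), ffmpeg can at least resample the 512x512 output to a larger frame with Lanczos filtering. A minimal sketch, assuming ffmpeg is installed; file names are placeholders and the detail will be far below a dedicated AI upscaler.

        # Resample a 512x512 clip to a 2160x2160 (4K-height) square with Lanczos.
        import subprocess

        subprocess.run([
            "ffmpeg",
            "-i", "gen1_512.mp4",
            "-vf", "scale=2160:2160:flags=lanczos",
            "-c:v", "libx264", "-crf", "18",
            "gen1_upscaled.mp4",
        ], check=True)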

  • @vidalsomohano2414
    @vidalsomohano2414 A year ago

    I would like it in Spanish. You do wonders. I'm trying to adapt.

    • @sebastiankamph
      @sebastiankamph A year ago

      I don't know Spanish; I hope YouTube translates the subtitles, is that okay?

  • @DD-nk4fb
    @DD-nk4fb A year ago

    There's a huge difference between animation and rotoscoping. Please do understand these fundamental principles.

  • @studio85amsterdam
    @studio85amsterdam A year ago

    Runway just announced something for 20/03. Likely text to video. 😮😮😮

  • @bilrockstar80
    @bilrockstar80 A year ago

    my future is at risk

  • @Rscapeextreme447
    @Rscapeextreme447 A year ago

    Is Gen-1 free and unlimited?

    • @sasbe1852
      @sasbe1852 A year ago

      Now it's free. They announced it on Discord 1.5 hours ago.

  • @oussamael7304
    @oussamael7304 A year ago

    Their Discord doesn't work anymore.

  • @Zombitopia
    @Zombitopia A year ago

    Can 4GB of RAM handle that work?

    • @sebastiankamph
      @sebastiankamph A year ago

      Yes, Gen-1 handles all the calculations, so no special hardware is needed on your end.

  • @timmygilbert4102
    @timmygilbert4102 A year ago

    I have the same problem where the AI is turning all my black characters white 😂

  • @androidgamerxc
    @androidgamerxc A year ago

    Not only can we not afford it, we can't even get to play with it.

    • @sebastiankamph
      @sebastiankamph A year ago

      Soon! I see new testers get invited every day.

  • @KDawg5000
    @KDawg5000 A year ago

    This technology will obsolete a lot of the work that Corridor Crew did for their anime video.

    • @sebastiankamph
      @sebastiankamph A year ago

      For sure! It's still very rough, but it's getting there.

  • @dingdongchingchong8659
    @dingdongchingchong8659 A year ago

    Where's the chick?

  • @TheAiConqueror
    @TheAiConqueror A year ago +1

    🫡

    • @sebastiankamph
      @sebastiankamph A year ago

      What is this, a tip for ants!? Just kidding my friend, thank you! Biggest supporter as always 😘🌟

    • @TheAiConqueror
      @TheAiConqueror A year ago

      @@sebastiankamph Many ants also give a fortune at some point 🐜🐜🐜💰 Then they can carry your sacks full of money 😜👍

  • @new-bp6ix
    @new-bp6ix A year ago +1

    XD This type of animation has existed since the 1940s.
    All people are doing now is pushing companies to produce bad-quality animation.

  • @alexgold1700
    @alexgold1700 A year ago +1

    The film "The Little Mermaid" will be run through the neural network and then the film will have a chance!

  • @desertwolfskin2148
    @desertwolfskin2148 A year ago

    This is essentially just a filter. Not animation.

  • @Framehacker
    @Framehacker A year ago

    Meh, more Discord bot sh!t... guess I have to wait for a Stable Diffusion version of this.

  • @ilovemagenta
    @ilovemagenta A year ago

    dude got whitewashed

  • @vincentchambin
    @vincentchambin A year ago +1

    As you show us very well, the result generated by an AI is really ugly, very far from what a real artist who has spent years acquiring solid skills could bring. Even if you spend hundreds of hours there, it won't change anything. And unfortunately we risk seeing more and more "artists" like you appearing, trying to make it easy with these new tools (which can be very useful for certain tasks, I'm not against that at all, such as coupling ChatGPT with 3D software to do motion design). But to make beautiful animation, you should already try to be less lazy and get to work to develop your artistic eye. So for now there is still a bright future for REAL artists.

    • @sebastiankamph
      @sebastiankamph A year ago

      Thank you, I am a designer and animator by trade, and have been working for 20 years.

    • @vincentchambin
      @vincentchambin A year ago

      @@sebastiankamph 🤣

  • @theultimateartist4153
    @theultimateartist4153 A year ago

    Why is that character's face so bloated? I highly doubt anyone wants that.

  • @blender_wiki
    @blender_wiki A year ago

    I love your channel, but sorry to say, this time this is a poor example. Old info + low creativity level + poor workflow.

  • @emmasnow29
    @emmasnow29 A year ago +1

    Having to use Discord is a shame.

  • @bikurifacebook4553
    @bikurifacebook4553 A year ago

    free?

    • @sebastiankamph
      @sebastiankamph A year ago

      Free, at least for now. (But beta access only for now)

  • @KJIgravitypwns
    @KJIgravitypwns A year ago

    Hey, kinda unrelated, but what is the extension at the top of your txt2img tab that allows quick switching of VAEs and LoRAs?

    • @sebastiankamph
      @sebastiankamph A year ago

      It's just a setting in settings. Come ask in Discord for details.
