AI Animation in Stable Diffusion

  • Published 25 Nov 2023
  • The method I use to get consistent animated characters with Stable Diffusion. BYO video and it's good to go!
    Want to advance your AI animation skills? Check out my Patreon: / sebastiantorresvfx
    For the companion PDF with all the links and the ComfyUI workflow:
    www.sebastiantorresvfx.com/dow...
    Add LCM to Automatic 1111
    github.com/light-and-ray/sd-w...
    You're awesome! Thanks for hanging out with me!

COMMENTS • 159

  • @UtopiaTimes · 2 months ago +2

    For the first time in 6 decades, we see exactly what we want to achieve in 3D cartoon animation. We are watching closely and learning. We thank you for sharing

  • @themightyflog · 8 months ago +3

    I like how you talked about "occlusion", I think. It's like making a comic book page with bleed on it. Nice to know we have to have bleed on it.

  • @TheAgeofAI_film · 8 months ago +1

    Thanks for the tutorial, I have subscribed, this is really useful for our AI film development

  • @USBEN. · 8 months ago

    We're getting there; the consistency of the new Stable Video model is way better than any competition.

  • @pogiman · 8 months ago +1

    amazing workflow thank you for sharing

  • @user-jl4ps7qw4p · 8 months ago +2

    Your animations have great style! Thanks for sharing your know-how.

  • @vendacious · 6 months ago

    You say "Finally after a year" we have animation, but that's not fair to Deforum, which has been around for nearly a year now. Anyways, the way you solved the helmet problem was super smart and shows a deep understanding of the reasons the face screws up when half of it is occluded. This also works when using Roop and other face-swap tools (which fail if both eyes and mouth are not showing in a frame), as well as in Deforum and AnimateDiff.

  • @Onur.Koeroglu · 8 months ago +1

    Yeeeesss... Thank you for sharing this Tutorial 💪🏻🤗😎

  • @vegacosphoto · 8 months ago +1

    Thanks for the tutorial, never used those ControlNet units before, been trying with Canny and OpenPose. This has been very useful. Any idea how we can deflicker the animation without DaVinci? Either something free or cheap. Thanks in advance.

  • @dreamzdziner8484 · 8 months ago +8

    So beautiful! From day one I am more interested in SD animations than image generations. As I have tried many experiments I can honestly say this one looks super clean. There is flickering and the expressions tend to vary from the original vid but still it looks great. Thanks for the tutorial my friend.👍

    • @sebastiantorresvfx · 8 months ago +4

      Yes… let's blame the expressions on the AI and not my lack of experience with facial animation 😂 I'm cool with us going that route lmao.
      That was literally my biggest concern lol. I'm like holy crap my 3D animation skills are rigid. She's barely moving her eyes.
      Thank you for the feedback though, I appreciate it 😁

    • @dreamzdziner8484 · 8 months ago

      @sebastiantorresvfx 😁 I will always be blaming the AI coz I understand the pain we take to get the exact expressions and still the AI simply ignores whatever prompt or settings we feed in. We will definitely get beyond that soon.

    • @sebastiantorresvfx · 8 months ago +1

      In saying that you got me thinking perhaps I should integrate some face tracking software into the pipeline so I don’t have to hand animate it like I did this time around. Could possibly add some life to the expressions. I definitely need to train my own model with more examples of expressions though.

  • @digitalbase9396 · 6 months ago +1

    Awesome images, what a great method.

  • @scratched11 · 7 months ago

    Thanks for the workflow. What model did you use to get the outline shading on Tom Cruise and the Matrix?

  • @coloryvr · 8 months ago +1

    Oh wow! It's very impressive how SD continues to develop!
    BIG FAT FANX for that video!

  • @MisterWealth · 7 months ago +1

    I cannot get these types of results at all on mine, but I use the same exact settings and LoRA as well. It just makes it look like my face has a weird filter on it. It won't make my guy cartoony at all.

  • @leosmi1 · 8 months ago +1

    It's getting wild

  • @OmriDaxia · 3 months ago

    how come when I start doing batch processing after getting a single image right it looks completely different? I'm using all the same settings and same seed, just adding the input and output directories and I'm getting a completely different looking result. (It's consistently different too. The single image one is always in a blue room and the batch ones are always in a forest for some reason.)

  • @steventapia_motiondesigner · 8 months ago +1

    Man! So cool. Thanks for the breakdown of your workflow. I'm going to try this LCM LoRA I keep hearing about. Also, I usually get blurry images when I set the denoising multiplier to 0. Am I doing something wrong?

    • @sebastiantorresvfx · 8 months ago +1

      The noise multiplier shouldn’t be giving you blurred images, could be something else under the hood that’s causing that.

    • @steventapia_motiondesigner · 8 months ago

      Thanks for the reply! @sebastiantorresvfx

  • @shitpost_xxx · 8 months ago

    Nice! Can you make a tutorial on going from Cascadeur to Stable Diffusion?

  • @blnk__studio · 8 months ago +1

    awesome!

  • @user-xy9bg3gq9v · 8 months ago +1

    good job bro 🤟❤‍🔥

  • @AI_mazing01 · 8 months ago +2

    I get an error when trying to change those .py files. Also, there might be an error in the instructions ("Add this code at line 5, it should take up lines 5 to 18 once pasted."); when I paste this code I get more lines, 5-19.

    • @sebastiantorresvfx · 8 months ago

      Sorry to hear that, check the new link in the description, I linked it to the original reddit post where I got the code from. Hope it works for you.

  • @yiyun8336 · 8 months ago +1

    Awesome results! How come putting the denoising strength to 1 kept the same image? I've been trying to follow what you did, but having the denoising strength at 1 gives me totally different images; not sure if I missed something.

    • @sebastiantorresvfx · 8 months ago

      Sorry to hear that, make sure to check the command prompt to see if your controlnet is activating properly. Had a similar issue.

    • @yiyun8336 · 8 months ago

      @sebastiantorresvfx Actually the problem seems to be the checkpoint model I was using... it seems like the ControlNet didn't work with it. I downloaded the same model as you and now it works just fine! Any idea why it doesn't work properly with the other checkpoint? Do some checkpoint models simply not work with ControlNet?

    • @sebastiantorresvfx · 8 months ago

      Do you mind me asking what checkpoint that was? Could be that it just wasn't trained with enough examples. But it's hard to say without having a play with the ckpt itself.

  • @aleksandrasignatavicius6772 · 8 months ago +1

    great job

  • @Skitskl33 · 8 months ago +3

    Reminds me of the movie A Scanner Darkly, which used interpolated rotoscoping.

    • @sebastiantorresvfx · 8 months ago

      Loved that movie; had to rewatch it recently because I’d forgotten the whole storyline 😂

    • @lordmeep · 8 months ago

      came here to say just that!

  • @ToonstoryTV-vs6vf · 8 months ago

    Very nice, but how can I get this page on the web, whether on computer or phone, because I am new to that

  • @morpheusnotes · 8 months ago +7

    this is really amazing!!! How did you even come up with this? I guess, now we'll see a lot of animated vids. Thanks for sharing

    • @sebastiantorresvfx · 8 months ago +2

      Technically the method's been around for a while, but most are forcing SD to create something out of nothing, and so it struggles to keep a consistent image from frame to frame.

  • @colehiggins111 · 8 months ago +4

    love this, would love to see a tutorial about how you input the video and batch rendered the whole thing to match the style you created.

  • @ProzacgodAI · 8 months ago +1

    I've never published anything, but I got some decent temporal stability, at a lower resolution, with ControlNet and CharTurner + an inpainting style.
    Especially for your helmet scene, with all of the various Blender cutouts...
    You generate your character face; then for the second frame you'd have the whole previously generated frame on the left and, on the right, the frame you need to generate now.
    Using inpainting masks I focused on that right side, using the previous frame, or a master frame, for the left-side control.
    Sometimes using ControlNet, sometimes without, but CharTurner worked a treat.

    • @sebastiantorresvfx · 8 months ago

      Interesting, I have used char turner in the past but didn’t think to integrate that with this. Thanks for the tip that’s awesome.

    • @ProzacgodAI · 8 months ago

      @sebastiantorresvfx Thanks, I used ChatGPT to create the Python script that created the images; it's a fairly basic/simple tool. I hope it can help you and someone can actually push this idea to its end. I've become exhausted with the amount of effort some of this stuff takes and I'm kinda just happy to see someone else walk with it.

  • @lithium534 · 8 months ago +2

    You mentioned that you would share the model as it's not on civitai.
    I can't find any link.

    • @sebastiantorresvfx · 8 months ago +1

      Yep it’s in the companion PDF, you’ll find the link to that in the description. The PDF has all the links you’ll need

  • @DanielMPaes · 8 months ago +1

    I'm happy to know that one day I'll be able to make a remake of the Shield Hero anime.

  • @ekke7995 · 8 months ago +5

    It's amazing.
    How possible is it to use SDXL together with a trained model over cheap greenscreen footage?
    I want to create a cartoon style video with absolutely the minimum time effort and money.
    You know that dream we all have...😂
    🎉 amazing work!

    • @sebastiantorresvfx · 8 months ago +3

      100% possible. Make sure to download the green screen LoRAs from civitai so your green screen doesn't get washed out in the process.
      The video with the guy jumping over the car was green screen but a 3D character. So it's completely doable with a cheap green screen too; actually I cut out the video of me turned into a cartoon with a green screen behind me last minute 😂
      Thank you 😊

    • @sebastiantorresvfx · 8 months ago +1

      I just looked into it; the SDXL models might not be as responsive as the 1.5 ones when it comes to the ControlNet settings. Hope this changes because I've been training everything in SDXL lately.

  • @NirdeshakRao · 8 months ago +1

    Brilliant 🤗🥳

  • @cam6996 · 8 months ago +1

    AMAZING!
    did you make all the intro clips?

    • @sebastiantorresvfx · 8 months ago

      Thank you.
      They’re movie scenes I ran through the pipeline to show what it’s capable of.

  • @twilightfilms9436 · 8 months ago +1

    The noise or moiré in the hair and eyes is because of a bug in the 1.4x ControlNet. I've been struggling with that ever since A1111 1.6 was released... nice video!

    • @sebastiantorresvfx · 8 months ago

      Controlnet has been driving me nuts since that update. But it definitely works better than SDXL controlnet for the time being. Which sucks because if I can bust out images like that with 1.5, the SDXL alternative would be so much better.

  • @armondtanz · 8 months ago +1

    This looks awesome. Where do you learn about LoRAs & VAEs? I've heard them mentioned but have no clue.

    • @sebastiantorresvfx · 8 months ago

      VAE models are used to fix problems generated by the main checkpoint you're using. Each checkpoint has a specific VAE you should use. In some cases they're baked into the checkpoint so you don't need to use one. Some also help improve the colors of the final generation. You can add it to your interface by going to Settings > User Interface > Quicksettings list, typing in sd_vae, then Apply and restart the UI.
      LoRAs are smaller models trained on one or more specific subjects or styles. You apply them to your prompt to activate them. You just download them to your models/Lora folder, usually from a site like civitai.
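      A minimal example of that LoRA prompt syntax (the LoRA name here is hypothetical): adding <lora:myCartoonStyle:0.8> to the positive prompt loads models/Lora/myCartoonStyle.safetensors at a weight of 0.8.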

    • @armondtanz · 8 months ago +1

      @sebastiantorresvfx Wow, thanks, hope you can do a noobs guide with all these quick insights; think it would be great without having to sift through a lot of information.
      Most tutorials just say 'add this, add that' but don't say why...
      Once again thanks

  • @THEJABTHEJAB · 8 months ago +3

    Looking good.

    • @sebastiantorresvfx · 8 months ago

      😳 Thank You! I’ve been following your work for months!

    • @THEJABTHEJAB · 8 months ago +1

      @sebastiantorresvfx This has lots of potential and a totally different approach. Keep up the good work.
      Can't wait to see what people do with Video Diffusion too when they start tinkering.

    • @sebastiantorresvfx · 8 months ago

      I agree totally, we've only started threading the needle on this behemoth.
      I downloaded the SVD model and had to refrain from using it while working on this video lol. Far too tempting. That's this week's experiment. The only thing that has me hesitating is the limitations on number of frames and resolution. This current method has no limit on frame count and, unlike AnimateDiff, I can pull the plug at any moment since I'm seeing full-size frames as they finish.

    • @THEJABTHEJAB · 8 months ago +1

      @sebastiantorresvfx I'm exactly the same, have to work on a paid job so I can't even touch VideoDiff or I will rabbit-hole it and get lost

  • @rapzombiesent · 8 months ago

    How can I find the link to the EthernalDark safetensors?

  • @binyaminbass · 6 months ago +2

    Can you show us how to do this in ComfyUI? I decided to learn that instead of A1111 since it seems faster and more flexible. But I'm still a noob at it.

  • @rockstarstudiosmm11 · 4 months ago

    Hi, which seed and prompt did you use for the Thor scene? Please respond.

    • @sebastiantorresvfx · 4 months ago

      Not sure how much this will help but...
      Positive: white man, blonde, red face paint, blue metal, red cape, flat colors, simple colors,
      Negative prompt: blurred, photograph, deformed, glitch, noisy, realistic, stock photo,
      Steps: 6, Sampler: Euler a, CFG scale: 1, Seed: 27846563
      Have fun :)

  • @AlexWinkler · 8 months ago +1

    Wow this is next level art

  • @evokeaiemotion · 6 months ago +1

    So do you have to have DaVinci to do this or what? It's not really clear from the vid.

    • @sebastiantorresvfx · 5 months ago

      You can use any video editing software you like. I just use Davinci because if you’re just starting out there’s a free version that’s pretty much a fully working editor.

  • @rilentles7134 · 8 months ago +2

    I can not find diff_control_sd15_temporalnet_fp16 for the ControlNet; why is that?

    • @sebastiantorresvfx · 8 months ago

      Go to my newsletter and you’ll get a PDF file that goes with this video. It’ll have the link to the temporal net controlnet file.

  • @LawsOnJoystick · 8 months ago +1

    Are you running a series of images through Stable Diffusion, then piecing them back together later?

    • @sebastiantorresvfx · 8 months ago

      That's right, Automatic1111 doesn't have the ability to do it, so I put the images together using a video editor like Resolve.
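      A minimal sketch of that reassembly step in Python with OpenCV, as an alternative to an editor like Resolve (the folder, file pattern, and frame rate are assumptions, not the author's exact setup):

      import glob
      import cv2  # pip install opencv-python

      frames = sorted(glob.glob("output/frame_*.png"))   # hypothetical img2img batch output folder
      h, w = cv2.imread(frames[0]).shape[:2]             # size taken from the first rendered frame
      writer = cv2.VideoWriter("animation.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 24, (w, h))
      for path in frames:                                # write frames in order to rebuild the clip
          writer.write(cv2.imread(path))                 # assumes every frame matches the first one's size
      writer.release()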

    • @LawsOnJoystick · 8 months ago +1

      Nice! Thanks for the info :)
      @sebastiantorresvfx

  • @BassmeantProductions · 8 months ago

    Sooooo close to what I need

  • @SirChucklenutsTM · 8 months ago +1

    Hoo boy... when's the first AInime coming out?

  • @Stick3x · 8 months ago +1

    I am not seeing the green screen Loras on Civit.

    • @sebastiantorresvfx · 8 months ago +1

      Don't use the civitai search, it doesn't work right. Google exactly this: "green screen Lora model". The civitai one should be the very first result.

  • @davidcraciunexplorator · 7 months ago +1

    Sorry if this is a silly question, but I am new to Stable Diffusion. How do I access the UI you are using in this video? Is it only possible through a local install?

    • @sebastiantorresvfx · 7 months ago

      Not silly at all; if you go into my videos you'll find one on how to install Automatic1111 locally.

  • @LucidFirAI · 7 months ago

    I think AnimateDiff makes less flickery results? I'm also up against a brick wall with a model I want to use refusing to play ball and remain high quality. Some models work well with AnimateDiff, and some models are ruined or at least quality-reduced by it :/ I know not what to do.

    • @sebastiantorresvfx · 7 months ago

      Check out the animatediff video I made using comfyUI. Much better results, without the flickering.

  • @art3112 · 8 months ago +2

    Very good tutorial. Thanks. More tutorials on A1111 and video/animation are most welcome. My only slight criticism is some of it felt a bit rushed to me. A little more, and slower, explanation might help in parts. I will check back on your channel though as very helpful. Keep up the great work!

  • @omegablast2002 · 8 months ago +2

    You didn't tell us where to get Eternal Dark.

    • @sebastiantorresvfx · 8 months ago

      The link to it is in my newsletter in the description

  • @plush_unicorn · 6 months ago +1

    Cool!

  • @jonjoni518 · 7 months ago +1

    Thanks for the work 🤩🤩🤩. I have a doubt: in ControlNet you use diff_control_sd15_temporalnet_fp16.safetensors, but in your PDF, when you click on the ControlNet model in your link, it downloads diffusion_pytorch_model.fp16.safetensors. My question is which model to use, diff_control_sd15_temporalnet_fp16.safetensors or diffusion_pytorch_model.fp16.safetensors?

    • @sebastiantorresvfx · 7 months ago +1

      Actually, you'll find that a lot of the specially made ControlNet models outside of the originals are called diffusion_pytorch_model.fp16. Not sure why they've done that, but you'll need to rename the file to whatever the actual ControlNet is. Otherwise you'll quickly end up with
      diffusion_pytorch_model.fp16(1)
      diffusion_pytorch_model.fp16(2)
      diffusion_pytorch_model.fp16(3)
      At which point you'll have a lot of fun trying to distinguish which is which 😂
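      A minimal sketch of that rename in Python (the ControlNet models folder path is an assumption; adjust it to your install):

      from pathlib import Path

      models_dir = Path("stable-diffusion-webui/models/ControlNet")          # assumed A1111 ControlNet folder
      src = models_dir / "diffusion_pytorch_model.fp16.safetensors"          # name as downloaded
      dst = models_dir / "diff_control_sd15_temporalnet_fp16.safetensors"    # name the video refers to
      src.rename(dst)  # rename so the model is identifiable in the ControlNet dropdown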

    • @jonjoni518 · 7 months ago

      @sebastiantorresvfx Thanks a lot, now I see it haha. I love your workflows; I will share mine when I finish with these tests. Greetings from Spain.

  • @Sinistar74 · 8 months ago +1

    I don't see a "join an email list" anywhere on your website.

  • @Dalin_B · 7 months ago +1

    Does anyone else not have a clue what he's using, where he got it from?

    • @sebastiantorresvfx · 7 months ago

      I have no idea what he’s using. Probably something skynet related.

  • @themightyflog · 8 months ago +1

    I don't see where and how to do the LCM install. I think you left a few things out.

    • @sebastiantorresvfx · 8 months ago

      Link for that is in the description. Copy the link and install it as an extension.

  • @tiffloo5457 · 8 months ago +1

    niceee

  • @sownheard · 8 months ago +4

    model link?

    • @sebastiantorresvfx · 8 months ago +1

      Check out my website for the pdf version with all the model links.

  • @amkkart · 8 months ago

    And where to find the VAE??? You make nice videos but you should provide all the links so that we can follow your steps.

  • @themightyflog · 7 months ago +2

    I tried the tutorial but mine just wasn't as consistent as yours. Hmmmmm..... too much flicker.

    • @MisterWealth · 7 months ago

      Same. Same models and checkpoints too. The image doesn't come out nearly as clearly

    • @sebastiantorresvfx · 7 months ago

      This is partially why I moved on to the ComfyUI method and using 3D assets to drive the animation. The animation length using A1111 was somewhat limiting, and the flickering was heavily dependent upon the source video used; it would drastically influence the amount of flickering.
      Whereas in ComfyUI I'm getting some flicker but it's very minimal. The last video I put out goes into it. But I'm putting out another one soon and I'll touch on it again there.

  • @hurricanepirate · 8 months ago +1

    Where's the ethernaldarkmix_goldenmax model?

    • @sebastiantorresvfx · 8 months ago

      I have all the links from this video in the pdf companion in the description

  • @rockstarstudiosmm11 · 4 months ago

    I am not getting the exact results; mine is a different art style.

  • @michail_777 · 8 months ago +3

    Hi, I'm also trying to do animation. You've done well. But it's a simple coloring. Honestly, no big changes. If I had a video card with the ability to generate 1920 by 1440, I would try SVD with the input video. It does make a difference.
    Good luck with the generation.

    • @sebastiantorresvfx · 8 months ago

      Yeah it’s a shame SVD is capped at 1024px and 25 frames. If those two things weren’t in its way it would be a real game changer.

    • @michail_777 · 8 months ago +1

      @sebastiantorresvfx I've seen these values, but it generates 60 frames.

    • @Galactu5 · 8 months ago +3

      There is no pleasing people these days. Do something cool and people complain it doesn't look like a miracle.

    • @kanall103 · 8 months ago +1

      Show us your "simple coloring"

  • @rilentles7134 · 8 months ago

    I also don't know how to install LCM.

    • @sebastiantorresvfx · 8 months ago

      The link in the description is an extension for Automatic1111: go to Extensions > Install from URL, paste the link from that GitHub repository, and once that finishes go to Installed, then Apply and restart the UI. You'll see the new sampler in the sampler drop-down list. 😀

  • @typingcat · 8 months ago

    Is it? I just tested a famous online AI image-to-video site, and the results were terrible. For example, I uploaded a still cut from a Japanese animation where a boy and a girl were on a slope. I generated two videos, and in both videos weird things happen, like their fronts turning into backs. It was unusable.

    • @sebastiantorresvfx · 8 months ago

      What site did you use?

    • @typingcat · 8 months ago +1

      @sebastiantorresvfx Runway.

    • @sebastiantorresvfx · 8 months ago

      Weird, I thought Runway had fixed those kinds of glitches. I've only used it a few times; visually it's really impressive but reminds me of the Vine 6-second videos 😂. People just stitching together randomness. But whatever works for them I suppose.

    • @typingcat · 8 months ago +1

      @sebastiantorresvfx Ah, and if you think this is because I used an animation picture, I had first tried with a real human picture. An actress's upper torso. When I tried panning the picture up, her face melted down as if some acid was poured on her head in a horror movie. After trying these, I thought "Damn, this technology is not ready." and gave it up.

    • @sebastiantorresvfx · 8 months ago

      Yeah even stable diffusion struggles to recognise animated faces with face detect sometimes. Might be why that happened.

  • @kanall103 · 8 months ago +1

    LMS test to LMS karras? best tutorial...no xiti talking

  • @commanderdante3185 · 8 months ago +1

    Wait wait wait. You're telling me you could have your character face forward and generate textures for their face?

    • @sebastiantorresvfx · 8 months ago

      Forward, sideways, whatever. As long as those angles are trained into the model.

  • @mhitomorales4497 · 8 months ago +2

    As an animator, I don't know if this is scary or an advantage.

    • @sebastiantorresvfx · 8 months ago +1

      I suppose everyone will take it however they want. As a VFX artist I'm seeing this as a huge advantage. There are some things cooking at the moment that will let me scale the kinds of projects I make going forward. Sure there are downsides, but like when CGI was introduced for Jurassic Park and stop-motion animators went out of business, this is just another advancement in the industry.

  • @angloland4539 · 8 months ago +1

  • @stableArtAI · 6 months ago

    So basically, SD still does not do animation; you used other apps to render a video animation. If I followed correctly, that was Blender. And if I understand, you are just mapping a texture over an animated character you created and rendered outside of SD. Which, if I'm following, means SD still doesn't do the animation.

    • @sebastiantorresvfx · 6 months ago

      If you’re after a one click button that’ll make your animations without any external effort, I’m afraid you’ll be waiting a long while 😉

    • @stableArtAI · 6 months ago

      @sebastiantorresvfx Neophyte to SD, but here is our first animated character featuring SD for the base: ua-cam.com/video/QjxoY_opAGc/v-deo.html

  • @azee6591 · 8 months ago +3

    Prediction: As convoluted as this process seems now, in the next 60-90 days stable diffusion will have text description to animation as regular LLM models, no different than image LLM models today

    • @sebastiantorresvfx · 8 months ago +2

      Doesn't sound convoluted at all to be honest. This space is moving so quickly I don't know that we have any idea how far it'll go in 90 days, let alone in the next year.

  • @razorshark2146 · 8 months ago +11

    AI feels more like simply morphing one image into another than actually pushing the feeling of motion that professional artists put into their work/art. AI creates perfect volume per drawing and then tries to cover it up using some kind of wiggly lines to make it feel a bit more hand-drawn or something. The outcome is better than what most badly done content looks like, but it will never fully replace properly done animation by artists who have actually put in the effort of mastering the required skills. It will always be a tool that steals from these artists to generate something that gets close but not quite there yet, as it has for years now... At least this particular craft seems safe from being taken over. It will just end up being another style/preference of animation; to untrained eyes it looks amazing. : )

    • @AIWarper · 8 months ago +1

      It’s been a year lol you have to think on a larger time scale
      A year ago we couldn’t even generate an image of a face….

    • @razorshark2146 · 8 months ago

      @AIWarper I am, it's just that they will always face the issue of having an unstable nature of training processes to work with. It's what that whole framework designed in 2014 is built upon.

    • @sebastiantorresvfx · 8 months ago +5

      At this point it's pointless to assume we know where this technology will be in six months, let alone a year.
      As for it being a tool only for stealing from artists, that comes down to the individual user of the tool. Because if an artist considered the potential of training a model on their own work for their own use, they could exponentially increase their productivity. If we only ever see it as a criminal tool, we won't look at the positives it could have for artists. Instead we're training artists to fear technology.

    • @sluggy6074 · 8 months ago +1

      Animators on life support.
      We're gonna need some more copium. Stat.

    • @razorshark2146 · 7 months ago +1

      @sluggy6074 lol, what I have seen AI squirt out so far is: yeah, that looks really cute. Whenever they finish programming an actual soul for AI to put into its generated art, we can talk again about AI being a replacement for artists lol

  • @santitabnavascues8673 · 8 months ago

    I prefer traditional 3D animation; it's evolved through time long enough to provide cohesive animation from one frame to the next through simulation. I mean... the point of this is reinventing over complete results. Feels redundant. Curious experiment though.

  • @spacekitt.n · 8 months ago +3

    still looks janky but oh so close.

    • @sebastiantorresvfx · 8 months ago

      Exactly why I put this video out. Gotta get to the point where there’s no jank left 😁 can’t do that alone.

    • @spacekitt.n · 8 months ago +1

      @sebastiantorresvfx I really appreciate the work the people who are doing this are putting in. Once perfected it will be world-changing.

  • @AtomkeySinclair · 8 months ago +2

    Eh - it looks like rotoscoping. Think Heavy Metal scenes, Wizards, and Sleeping Beauty. It's too real for fake and too fake for real.

    • @primeryai · 8 months ago +1

      "too real for fake" 🤔

  • @its4anime · 8 months ago

    I need to upgrade my rtx 2070. Generating with that high pixel took only minutes 😭

  • @ailearningskill · 7 months ago +1

    amazing workflow thank you for sharing

  • @thankspastme · 8 months ago +1

    awesome!