AI Animation With ComfyUI

  • Published Sep 9, 2024

COMMENTS • 87

  • @davidmouser596
    @davidmouser596 2 months ago +1

    You: I was not going to get into ComfyUI!
    ComfyUI: JOIN US

  • @boulimermoz9111
    @boulimermoz9111 9 months ago +3

    INCREDIBLE, I'm working hard on this kind of animation and you saved me a LOOOOT of time, thanks

  • @jacekb4057
    @jacekb4057 9 months ago +2

    Amazing! I subscribed to your newsletter. Thanks for the great content

  • @manimartinez1232
    @manimartinez1232 9 months ago +2

    Love what you do and the fact you share it.
    Thanks a lot!

  • @wholeness
    @wholeness 9 months ago +1

    My man Sebastian for the win!

  • @LoneBagels
    @LoneBagels 6 months ago +1

    Holy cow! This looks amazing! 😮

  • @TeddyLeppard
    @TeddyLeppard 9 months ago +2

    Should be possible to re-render and clean up entire movies using techniques similar to this in the very near future. Just create super high-resolution facsimiles of everything in the frame and then re-render it. No more individual frame cleanup.

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago +1

      I think the important part of what you said is "near future." At the speed we're seeing the advancements, it's quite likely to be next year.
      But yes, I agree it's going to change our process completely. Wondering how long till Blender integrates Stable Diffusion as an actual render engine.

    • @tetsuooshima832
      @tetsuooshima832 8 months ago +1

      Why would you lose time on super high resolution if you're gonna use AI to re-render it anyway? I don't see the point. Do you know how time-consuming 3D CGI can be? x)

    • @sebastiantorresvfx
      @sebastiantorresvfx  8 months ago

      I wouldn't call HD and lower 'super high resolution'. Plus, when I'm exporting from Blender I drop the specs enough that it's able to churn out frames in a fraction of the time it normally would. If it was taking minutes or more per frame then I wouldn't bother. But 10-20 seconds I can live with.

  • @USBEN.
    @USBEN. 9 months ago +1

    Impressive workflow.

  • @timpruitt7270
    @timpruitt7270 9 months ago +6

    Maybe I missed it. What's the link for the workflow?

    • @MickySarge
      @MickySarge 9 months ago +1

      stick around till the end

  • @aminshallwani9369
    @aminshallwani9369 9 months ago +1

    Amazing! Thanks for sharing

  • @hashir
    @hashir 8 months ago +1

    In Comfy you don't have to put the LoRA in the prompt. It's just all done and controlled in the node itself.

    • @sebastiantorresvfx
      @sebastiantorresvfx  8 months ago

      Thank you! I was waiting for someone to let me know lol. Only took a week. Much appreciated 😁
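
  For anyone more used to A1111-style <lora:name:weight> prompt tags, the point in this thread is that in ComfyUI the LoRA is wired in with a LoraLoader node and its strength inputs rather than prompt text. As a rough non-ComfyUI parallel, here is a minimal sketch using the diffusers library; the base model id and LoRA file path are placeholders, not files from the video.

      # Minimal sketch: the LoRA is attached to the pipeline in code,
      # not referenced inside the prompt text (placeholder model/LoRA names).
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")
      pipe.load_lora_weights("path/to/your_lora.safetensors")  # hypothetical LoRA file

      image = pipe(
          "portrait of a dancer, cinematic lighting",    # no <lora:...> tag needed
          num_inference_steps=20,
          cross_attention_kwargs={"scale": 0.8},         # LoRA strength, like the node's slider
      ).images[0]
      image.save("lora_test.png")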

  • @imtaha964
    @imtaha964 9 months ago +1

    you save my life bro
    love u so much😍😍🥰🥰

    • @imtaha964
      @imtaha964 9 months ago

      Please make that video.

  • @SapiensVirtus
    @SapiensVirtus 3 months ago

    Hi! Beginner's question. So if I run software like ComfyUI locally, does that mean that all AI art, music, and works that I generate will be free to use for commercial purposes? Or am I violating copyright terms? I'm searching for more info about this but I get confused, thanks in advance

  • @Ekopop
    @Ekopop 9 months ago +1

    Got yourself a new subscriber, keep up the juicy content, it's awesome!
    What are your specs? I'm afraid my 16GB won't be enough, I'm already struggling going over 15 steps of denoising, but I see you are using 12 with a good result.

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago

      Thank you 😊.
      That's because I'm using the LCM LoRA and sampler, so I can go as low as 4-6 steps with great results. I go into more detail about using LCM in my previous two videos. Definitely worth playing with if you have lower VRAM. Also try lower resolutions and frame rates (interpolated) followed by an upscale after the fact (see the sketch after this thread).

    • @Ekopop
      @Ekopop 9 months ago

      @@sebastiantorresvfx I absolutely will, and fingers crossed my computer won't blow up ahah

    • @Ekopop
      @Ekopop 9 months ago

      @@sebastiantorresvfx upscale results are... meeh
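
  For context on the LCM advice above: the LCM LoRA plus an LCM sampler is what lets the step count drop to roughly 4-8 at very low CFG, which is why it helps on lower-VRAM cards. A minimal sketch of the same idea outside ComfyUI, using the diffusers library (the model and LoRA ids below are the commonly used public ones, assumed here rather than taken from the video):

      # Minimal LCM-LoRA sketch: few sampling steps, very low CFG, small
      # resolution, with the intent of upscaling the result afterwards.
      import torch
      from diffusers import StableDiffusionPipeline, LCMScheduler

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")
      pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)  # LCM sampler
      pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")      # LCM LoRA

      image = pipe(
          "a dancer spinning in a neon-lit alley",
          num_inference_steps=6,   # 4-8 steps is typical with LCM
          guidance_scale=1.5,      # LCM expects CFG around 1-2
          height=432, width=768,   # render small, upscale after the fact
      ).images[0]
      image.save("lcm_frame.png")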

  • @Dave102693
    @Dave102693 9 months ago +2

    I'm wondering when Pika and Runway will use Blender and Unreal Engine to make their videos a lot more believable.

    • @HistoryIsAbsurd
      @HistoryIsAbsurd 8 months ago

      Realistically it will probably go the other way around: Unreal and Blender will start their own video generation.

  • @AIartIsrael
    @AIartIsrael 9 months ago +2

    Can you please give the link to download the AnimateDiff ControlNet model in your PDF? Both the OpenPose one and this one are the same file

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago

      Thanks for the heads up, check the link in your email again, I've updated the file.

    • @AIartIsrael
      @AIartIsrael 9 months ago +1

      @@sebastiantorresvfx thank you 🙏

  • @Hysteriamute
    @Hysteriamute 9 months ago +1

    Hey, thanks, very interesting! Where can I find more info about the Audrey Hepburn ComfyUI OpenPose clip @0:21, please?

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago

      you want to see a tutorial about what everyone’s calling the Netflix shot? 😆

  • @themightyflog
    @themightyflog 8 months ago +1

    I wonder if you use Human Generator + Metatailor for clothing options

    • @sebastiantorresvfx
      @sebastiantorresvfx  8 months ago +1

      I haven't tried MetaTailor yet, is it like Marvelous Designer?

  • @the_one_and_carpool
    @the_one_and_carpool 9 months ago +1

    Mine does none of the prompt and comes out blurry. Where are the VAE and the ControlNet model? All I found on your thing was a ControlNet checkpoint, no controlnet_gif

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago

      If you download the AnimateDiff ControlNet specified in the downloads, use that in place of the controlGIF.

  • @eddiej.l.christian6754
    @eddiej.l.christian6754 9 months ago +1

    Hmm, Advanced CLIP Text Encode and Derfuu ComfyUI ModdedNodes refuse to install using the ComfyUI Manager.

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago

      If they're giving you trouble installing that way, you can install them the same way you installed the Manager: Google them and clone their GitHub repositories into the custom_nodes folder, then restart ComfyUI.
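
  For anyone stuck at this step, the manual install described in the reply above amounts to cloning each node pack's repository into ComfyUI's custom_nodes folder and restarting. A small sketch of that, assuming a default install location; the repository URLs are the ones these node packs are commonly hosted at and should be verified against their GitHub pages:

      # Sketch of a manual custom-node install: clone each repo into
      # ComfyUI/custom_nodes, then restart ComfyUI. Paths/URLs are assumptions.
      import subprocess
      from pathlib import Path

      custom_nodes = Path.home() / "ComfyUI" / "custom_nodes"   # adjust to your install

      repos = [
          "https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb",    # Advanced CLIP Text Encode (verify)
          "https://github.com/Derfuu/Derfuu_ComfyUI_ModdedNodes",   # Derfuu ModdedNodes (verify)
      ]

      for url in repos:
          target = custom_nodes / url.rstrip("/").split("/")[-1]
          if not target.exists():
              subprocess.run(["git", "clone", url, str(target)], check=True)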

  • @fusedsf
    @fusedsf 5 months ago

    Hey, can't seem to find the workflow after joining the newsletter and clicking downloads. Or is it in the LCM animations PDF companion?

    • @sebastiantorresvfx
      @sebastiantorresvfx  5 months ago

      Hi Rob, that's correct, get the PDF; it'll have the link to the workflow and links to any other models I used in this video. 🙂

  • @noobplayer-jc9hy
    @noobplayer-jc9hy 8 months ago +2

    Can I do anime style with it?? ❤❤

    • @sebastiantorresvfx
      @sebastiantorresvfx  8 months ago +1

      Certainly; I’ll show you how in the next video.

    • @noobplayer-jc9hy
      @noobplayer-jc9hy 8 months ago +1

      @@sebastiantorresvfx when are we getting it, dear??

  • @FudduSawal
    @FudduSawal 8 months ago +1

    That's incredible, subscribed to you. 🌟🌟
    Where can I download the workflow?

  • @alishkaBey
    @alishkaBey 8 months ago +1

    great! I would like to see that with IPAdapters :D

  • @sandeepm809
    @sandeepm809 2 months ago

    SDXL version??

  • @noobplayer-jc9hy
    @noobplayer-jc9hy 8 months ago +2

    How to do anime style ??

    • @sebastiantorresvfx
      @sebastiantorresvfx  8 months ago +1

      I’m working on that actually. Stay tuned I’ll come out with something soon.

    • @noobplayer-jc9hy
      @noobplayer-jc9hy 8 months ago

      @@sebastiantorresvfx Thank you so much, shall be waiting ❤️❤️❤️❤️

  • @jank54
    @jank54 9 months ago +1

    ERROR diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 768]' is invalid for input of size 1310720 ... 4 Models are too much for my 4070 ti

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago +1

      Try lower resolutions and lower frame rates and see how you go.

    • @jank54
      @jank54 9 months ago +1

      @sebastiantorresvfx Thank you, I kept going -> the lower resolution and fewer frames perform much faster! It worked!

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago

      Excellent! Glad to hear you kept it up. Resolution doesn't matter; the upscalers that we have and those coming out soon will make it a thing of the past.
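
  An aside on the error quoted at the top of this thread: the numbers themselves hint at a model mix-up rather than (only) a VRAM limit. A to_k weight of shape [640, 768] is what an SD1.5 U-Net expects, while 1,310,720 elements is exactly 640 x 2048, and 2048 is the text-embedding width of SDXL. That reading is an inference, not something confirmed in the thread, but it is easy to check:

      # Sanity-checking the numbers in the quoted shape error.
      expected_shape = (640, 768)    # to_k weight an SD1.5 U-Net expects (768 = SD1.5 text width)
      reported_elements = 1_310_720  # tensor size reported in the error

      print(reported_elements // 640)               # -> 2048, SDXL's text-embedding width
      print(expected_shape[0] * expected_shape[1])  # -> 491520, what SD1.5 would need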

  • @ssj3mohan
    @ssj3mohan 8 months ago

    What kind of PC do you have? And how long did it take to render this video using AI [full process]? Ty so much

  • @lei.1.6
    @lei.1.6 9 months ago +1

    Hey!
    I get the following error when running the workflow with the ControlNets enabled, no error when they are disabled, but yeah... no ControlNet then:
    COMFYUI Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
    Any idea?
    Thank you for the great tutorial!

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago

      Haven't seen that before, did you run the Nvidia or the CPU ComfyUI? And also, do you have an Nvidia GPU?

    • @lei.1.6
      @lei.1.6 9 months ago

      I'm running the GPU ComfyUI. RTX 4090 / 7950X @@sebastiantorresvfx
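
  The "Expected all tensors to be on the same device" message above is a generic PyTorch error: some tensor (often a ControlNet's weights or its conditioning input) is still on the CPU while the rest of the model is on cuda:0. A tiny standalone reproduction and the usual fix, offered as a general illustration rather than a ComfyUI-specific diagnosis:

      # Reproduces the error class, then the usual fix: keep everything on one device.
      import torch

      device = "cuda" if torch.cuda.is_available() else "cpu"

      weight = torch.randn(4, 4, device=device)
      x_cpu = torch.randn(4, 4)          # accidentally left on the CPU

      try:
          _ = weight @ x_cpu             # raises the "same device" RuntimeError on CUDA
      except RuntimeError as err:
          print(err)

      x = x_cpu.to(device)               # the fix: move the stray tensor to the model's device
      print((weight @ x).shape)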

  • @stevietee3878
    @stevietee3878 9 months ago +1

    Excellent work! I've just subscribed to your newsletter. Have you tried using the Stable Video Diffusion (SVD) model yet? Do you know if ControlNet can be used with the SVD model in ComfyUI for more control and consistency?

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago +1

      Not yet unfortunately, I have played around with it but until we get some controlnets or something similar for it, it’s kind of a shot in the dark with every generation.

    • @stevietee3878
      @stevietee3878 9 months ago +1

      @@sebastiantorresvfx yes, that's what I'm finding, I've been experimenting with the settings for a couple of weeks but it is just trial and error at the moment. I'm sure more motion and camera control will arrive soon.

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago +1

      Once it does, I'm just picturing Doc Brown saying, "we're gonna see some serious $hit!" 🤣

  • @mysticacreations3188
    @mysticacreations3188 6 months ago

    Workflow link?

  • @dkamhaji
    @dkamhaji 9 months ago +1

    Hi, can you share a link to that controlGIF ControlNet? Haven't used that one yet.
    Thanks! Was that something you renamed? Is it the TILE ControlNet?

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago

      Search for crishhh/animatediff_controlnet on Hugging Face (see the download sketch after this thread).

    • @alexandrelouvenaz
      @alexandrelouvenaz 9 months ago

      Hello, did you find the controlGIF .ckpt? I'm not sure I have the right one.
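
  If searching the Hub is hit-and-miss, the model mentioned above can also be fetched directly with huggingface_hub. The repo id below comes from the reply above; the filename and destination folder are assumptions to check against the repository's file list:

      # Download the AnimateDiff controlnet ("controlGIF") checkpoint into
      # ComfyUI's controlnet folder. Filename and target path are assumptions.
      from pathlib import Path
      from huggingface_hub import hf_hub_download

      target_dir = Path.home() / "ComfyUI" / "models" / "controlnet"
      target_dir.mkdir(parents=True, exist_ok=True)

      path = hf_hub_download(
          repo_id="crishhh/animatediff_controlnet",    # from the reply above
          filename="controlnet_checkpoint.ckpt",       # check the repo's file list
          local_dir=target_dir,
      )
      print("Saved to:", path)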

  • @ALFTHADRADDAD
    @ALFTHADRADDAD 8 months ago +1

    Fuck yeah

  • @CHARIOTangler
    @CHARIOTangler 9 months ago +1

    Where's the link to the workflow?

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago

      Link is in the description

    • @JonRowlison
      @JonRowlison 9 months ago

      @@sebastiantorresvfx Sort of... the link to how you can sign up for the NEWSLETTER that probably has the link to the workflow is in the description... the link to the workflow isn't in the description. :)

  • @victorhlucas
    @victorhlucas 9 months ago +1

    very impressive stuff. I'd like to subscribe but my anti-virus app says your website is compromised :(

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago

      Weird, haven't heard that before. No stress, I got you. Check out the new link in the description.

    • @victorhlucas
      @victorhlucas 9 months ago +1

      @@sebastiantorresvfx thanks, new link worked fine. Who knows, maybe the anti-virus was being over-conservative

  • @the_one_and_carpool
    @the_one_and_carpool 9 months ago

    Can I combine this with the ComfyUI WarpFusion workflow?

  • @MrDebranjandutta
    @MrDebranjandutta 9 months ago

    Never received the newsletter and the JSON for this, sad

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago

      It says the email has been sent already, check your spam folder 📁.

  • @guruware8612
    @guruware8612 8 months ago

    Maybe nice to play with, but...
    why not do such simple animations just in Blender, or any other DCC app?
    This will be useless in a real job; there are customers and art directors who want exactly what they will pay for, not some randomly generated something.

  • @ALFTHADRADDAD
    @ALFTHADRADDAD 8 months ago

    My ComfyUI keeps tapping out, even at 768x432 resolution. I've got about 12GB of VRAM. The steps are at 8 and the starting step is at 4. Basically it's telling me it's out of memory, unfortunately. Any ideas?

    • @sebastiantorresvfx
      @sebastiantorresvfx  8 months ago

      How much RAM does your PC have?

    • @ALFTHADRADDAD
      @ALFTHADRADDAD 8 months ago

      @@sebastiantorresvfx I think it's ~63GB

    • @ALFTHADRADDAD
      @ALFTHADRADDAD 8 months ago +1

      Hey, I figured it out; basically I just reduced the frames I was going for. Did a much smaller set of 25 frames, at 768x432. I'll be experimenting further but thanks for your great work @@sebastiantorresvfx
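
  A rough way to see why cutting the frame count helped: in AnimateDiff-style workflows the whole batch of frames goes through the U-Net together, so peak memory grows at least linearly with frames x latent pixels (and the attention layers grow faster than that). The sketch below only compares relative load between two batch sizes; the 120-frame figure is a hypothetical, not a number from the thread:

      # Relative activation load ~ frames x (W/8) x (H/8); halving frames or
      # resolution roughly halves peak usage (attention makes savings even bigger).
      def relative_cost(width, height, frames):
          return frames * (width // 8) * (height // 8)

      long_batch = relative_cost(768, 432, 120)   # hypothetical full-length batch
      short_batch = relative_cost(768, 432, 25)   # the 25-frame batch that fit in 12GB

      print(round(short_batch / long_batch, 2))   # -> 0.21, about a fifth of the load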

  • @imCybearPunk
    @imCybearPunk 8 months ago

    What a great video, how do I find you on Insta or Discord?

  • @HO-cj3ut
    @HO-cj3ut 8 months ago

    AnimateDiff or Deforum? For A1111, thanks