3 FREE Ways to Style Videos with Comfy UI in 2024!

  • Published 15 Nov 2024

COMMENTS • 16

  • @mhfx  3 days ago  +1

    Diffutoon (speed) vs FastBlend (balance) vs AnimateDiff (detail) -- which style has the most potential for you?

  • @ian2593  1 day ago  +1

    Nice process. What's the maximum number of frames you can convert with this process? Is it limited to 32 or 48 frames? Thanks.

    • @mhfx  1 day ago  +1

      It depends on your hardware. I think my box crashed on a 13-second clip, but I was able to do up to 7 seconds without any issues. Between 7 and 13 seconds it depended on resolution and settings.

  • @sudabadri7051  2 days ago  +1

    Great work

    • @mhfx  2 days ago  +1

      Really appreciate that, thank you 🙏✌️

  • @ian2593  21 hours ago

    Last question: when you were using the smooth node (second-to-last example), on my system the first Video Combine was sharp but the smoothed one was very blurry. Is there a setting to sharpen it up a bit (or is that just what happens to low-res videos)?

    • @mhfx  18 hours ago  +1

      No problem. This is the main balance you need to find with the smooth video node. It could be that your input is too low-res, or that there's too much of a difference between your KSampler output and your input image, so it appears blurry. It could also just be that you need to tweak the settings in the smooth video node; something with high motion, for example, will need tweaks to the example settings. I think those are good places to start testing first, though.

  • @ian2593  23 hours ago

    Where do you get the specific IPAdapter files from, and where do you place them? Thanks.

    • @mhfx  21 hours ago

      These are just the standard IPAdapter models, nothing special. You can install IPAdapter from the Manager. The GitHub page also has a whole tutorial from Latent Vision specifically on how to use it -- check it out here: github.com/cubiq/ComfyUI_IPAdapter_plus?tab=readme-ov-file
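(For readers following along: installing the node pack without the Manager is just a git clone into custom_nodes. A minimal sketch, assuming a standard ComfyUI folder layout -- the `COMFY` path is an assumption, and the actual clone is left commented out so the snippet is safe to run as-is.)

```shell
# Sketch: manual install of the IPAdapter node pack (assumed standard layout;
# the ComfyUI Manager does the equivalent for you).
COMFY="ComfyUI"                     # adjust to your install path (assumption)
mkdir -p "$COMFY/custom_nodes"
echo "clone target: $COMFY/custom_nodes/ComfyUI_IPAdapter_plus"
# git clone https://github.com/cubiq/ComfyUI_IPAdapter_plus \
#   "$COMFY/custom_nodes/ComfyUI_IPAdapter_plus"   # uncomment to install
```

The IPAdapter model files themselves go under the models folder, as described in the repo's README linked above.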

  • @CHATHK  2 days ago  +1

    What's your VRAM, and how long did it take?

    • @mhfx  2 days ago

      I have an RTX 4090. Diffutoon takes a few minutes, approx 1-2; FastBlend takes about 4-5; and AnimateDiff takes around 10-20 minutes.

  • @tanyasubaBg  2 days ago

    Thank you for the video and all the information, but I had an issue at the start. It showed me that a custom node for DiffSynth-Studio failed to load (IMPORT FAILED). What do I have to do about it? Thank you in advance.

    • @mhfx  2 days ago

      Ah -- it should not be an issue; just continue as normal. I'm not 100% sure why it gives this notice, but it is updating one (possibly more) nodes from the other install. Perhaps there's no actual import, just a code update 🤷‍♂️ Either way it will not affect your workflow.

    • @tanyasubaBg  2 days ago

      @@mhfx Thank you. Can you tell me where I can find that ControlNet? Is controlnet11Models_tileE.safetensors the right version?

    • @tanyasubaBg  2 days ago

      @@mhfx Thank you. Could you help me find the right ControlNet model? I used this one from Hugging Face: ControlNet-v1-1, but I keep having the same issue. It tells me to load the model in ComfyUI_windows_portable\ComfyUI\models\controlnet\, even though the file is already there.

    • @mhfx  2 days ago

      So there are a few ControlNets. Is this for the Diffutoon workflow? You'll need the tile and lineart ControlNets. Make sure you grab the .pth files. If you're downloading from Hugging Face, it should be these: huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
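(A quick sketch of fetching those two .pth files into the folder mentioned earlier in the thread. The filenames are the standard ControlNet v1.1 names from that repo, and the destination assumes a standard ComfyUI layout; the actual download is left commented out so the snippet is safe to run as-is.)

```shell
# Sketch: stage the tile and lineart ControlNet v1.1 models (.pth) into the
# folder ComfyUI expects. Paths assume a standard install; adjust as needed.
DEST="ComfyUI/models/controlnet"
BASE="https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main"
mkdir -p "$DEST"
for f in control_v11f1e_sd15_tile.pth control_v11p_sd15_lineart.pth; do
  echo "$BASE/$f -> $DEST/$f"
  # wget -q -O "$DEST/$f" "$BASE/$f"   # uncomment to actually download
done
```

On a Windows portable install, `$DEST` corresponds to `ComfyUI_windows_portable\ComfyUI\models\controlnet\` as mentioned above.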