ComfyUI Video to Video Animation with Animatediff LCM Lora & LCM Sampler

  • Published 3 Feb 2025
  • Learn how to apply the AnimateLCM Lora process, along with a video-to-video technique using the LCM Sampler in ComfyUI, to quickly and efficiently create visually pleasing animations from videos.
    -----------------------------------------------------------------------------------------------------------------------------------------
    Useful Videos:
    Animatediff LCM Lora in ComfyUI for Superior Results: • Animatediff LCM Lora i...
    Mastering Video to Video in ComfyUI (Without Node Skills): • Mastering Video to Vid...
    (How to Use Detailer) for Better Animation: • Animatediff Comfyui Tu...
    ------------------------------------------------------------------------------------------------------------------------------------------
    Workflow Download + Prompt Styles: goshnii.gumroa...
    Best Music & SFX for Creators: bit.ly/3TdAqIA (get 2 extra months free)
    Final Video: • Animatediff Lcm Lora a...
    -----------------------------------------------------------------------------------------------------------------------------------------
    Animate LCM Page: github.com/dez...
    Animate LCM Models: huggingface.co...
    AnimateDiff Evolved: github.com/Kos...
    RPG Artist Tools Checkpoint Model: civitai.com/mo...
    HelloYoung25d Checkpoint Model: civitai.com/mo...
    Lora Neon: civitai.com/mo...
    Lexica Art: lexica.art/
    ---------------------------------------------------------------------------------------------------------------------------------------------
    Animatediff LCM Tutorials: • AnimateDiff LCM Tutorials
    #stablediffusion #comfyui #animatediff #controlnet #videotovideo #lcm

COMMENTS • 60

  • @information4society
    @information4society 10 months ago +1

    Thanks so much for the video. You have been a huge help as I transition from A1111 to ComfyUI. Keep up the great work, and I hope your channel blows up!

    • @goshniiAI
      @goshniiAI  10 months ago

      Your words of support are really meaningful. I'm glad the videos helped you make the switch to ComfyUI. Let's keep growing together!

    • @information4society
      @information4society 10 months ago

      Definitely. I've been making music videos with Deforum + PARSEQ, but the LCM with AnimateDiff was so much quicker. I've been looking at how to upscale the vids; been watching a guy, Stephan Taul, but I don't have the foundation to understand and follow him yet. You do a great job of not losing a creator at my level of understanding. Thanks @goshniiAI

    • @goshniiAI
      @goshniiAI  10 months ago +1

      @@information4society I'm glad the videos are helpful for creators at all levels of understanding, and I appreciate you taking the time to share your experience. It's very encouraging.

  • @lordmo3416
    @lordmo3416 2 months ago

    Thanks for saving me hours of experimentation... you're most kind

    • @goshniiAI
      @goshniiAI  2 months ago

      You are most welcome, Lord. Thank you for your feedback.

  • @NERDDISCO
    @NERDDISCO 10 months ago

    I'm so looking forward to trying this out once I have some time! Thank you very much!!!

    • @goshniiAI
      @goshniiAI  10 months ago +1

      You are most welcome. Happy creating, and thank you for your support!

  • @M--S
    @M--S 3 months ago

    Very good explanation! Thank you!

    • @goshniiAI
      @goshniiAI  3 months ago

      You are welcome, I appreciate your feedback.

    • @M--S
      @M--S 3 months ago

      @@goshniiAI By the way, I found out how to make it easier for me to follow your explanations: cut the speed down to 50%! 😉 Then you sound like you've drunk half a bottle of whisky (you really must try it - I mean listening to the speed reduction, not the whisky), but the slower pace of your thoughts matches my ability to digest them. 🙃😊🙏

    • @goshniiAI
      @goshniiAI  3 months ago

      ​@@M--S LOL! Good one! I am glad you are sharing your experience of having fun while also learning. :)

  • @bonsai-effect
    @bonsai-effect 10 months ago

    Great tutorial, Goshnii... can't wait to try it.

    • @goshniiAI
      @goshniiAI  10 months ago

      Have fun creating! Thank you for your lovely feedback.

  • @swoodc
    @swoodc 10 months ago

    Your last video was great, thank you for the workflow help since I don't know what I'm doing. I just started watching this vid, so hopefully it's fire too.

    • @goshniiAI
      @goshniiAI  10 months ago

      I appreciate your feedback. It's encouraging to hear the workflow was useful.

  • @codestuff2821
    @codestuff2821 3 months ago

    Concise demonstration

    • @goshniiAI
      @goshniiAI  3 months ago

      Thank you for your encouraging feedback!

  • @voteps
    @voteps 3 months ago

    I've run into an issue:
    VHS_LoadVideo
    cannot allocate array memory
    Are there limits on how long the video can be, or what quality it is?

    • @goshniiAI
      @goshniiAI  3 months ago

      You may be running out of memory; check that the original video matches the batch frame count you are using in ComfyUI.
      You can also lower the resolution of the frame size to save some memory, then use a video upscaler.
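      A quick way to see why this runs out of memory: every decoded frame lives in RAM at once. The sketch below is a back-of-the-envelope estimate only (the exact storage layout VHS uses may differ; float32 per channel is assumed):

```python
# Rough estimate of the RAM needed to hold a batch of decoded video frames.
# VHS_LoadVideo decodes frames into memory, so long or high-resolution clips
# can exhaust it. Figures here are illustrative, not exact.

def frame_memory_gb(width, height, frames, channels=3, bytes_per_value=4):
    """Approximate memory for a frame batch stored as float32 (4 bytes/value)."""
    return width * height * channels * bytes_per_value * frames / (1024 ** 3)

# A 10-second 1080p clip at 30 fps (300 frames):
print(round(frame_memory_gb(1920, 1080, 300), 1))  # ~7.0 GB
# The same clip downscaled to 512x288 before processing:
print(round(frame_memory_gb(512, 288, 300), 1))    # ~0.5 GB
```

      This is why lowering the resolution (or the frame cap) before sampling, then upscaling afterwards, sidesteps the allocation error.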

    • @voteps
      @voteps 3 months ago

      @@goshniiAI The batch frame in the latent noise? I didn't notice it needed to match. Is the batch the number of frames in the video?

    • @goshniiAI
      @goshniiAI  3 months ago

      @@voteps That's great! I'm glad to read that everything is going well.

  • @kattarsisss
    @kattarsisss 10 months ago

    Thank you very much for your videos!!)))

    • @goshniiAI
      @goshniiAI  10 months ago

      I appreciate hearing from you, and you are very welcome.

  • @ValleStutz
    @ValleStutz 10 months ago

    Works very well! Thank you! Any method to get rid of the flicker/morphing?

    • @goshniiAI
      @goshniiAI  10 months ago

      I hope you had some exciting results. Thank you for your feedback. I'll need to research flickers before deciding on a topic for future videos.

  • @gualguitv
    @gualguitv 3 months ago

    I ran this workflow on an 8GB GPU and it took hours to run. Is that normal? Do you have a workflow to do this with lower VRAM? Thanks

    • @goshniiAI
      @goshniiAI  3 months ago

      Hello there, thank you for giving the workflow a try! On an 8GB GPU, it can take a while depending on the complexity of the scene and the settings you're using. However, you can try reducing the resolution to speed things up, then later use an upscaler to refine the details.

  • @Nankatsu09
    @Nankatsu09 3 months ago

    Thank you! Is there a way to keep Video Consistency?

    • @goshniiAI
      @goshniiAI  3 months ago +1

      To keep things consistent, guidance with ControlNet Canny, along with Optical Flow, can help align details between frames.
      You can also achieve this by increasing the strength of the ControlNet model.

    • @Nankatsu09
      @Nankatsu09 3 months ago

      @@goshniiAI Thank you very much mate!! Gonna try it out!

  • @SanderBos_art
    @SanderBos_art 9 months ago

    Great tutorial :) I was wondering, though, which parameter influences how closely the output still resembles the original video? Is it the CFG?

    • @goshniiAI
      @goshniiAI  9 months ago

      Yes, that is accurate; however, the CFG for LCM is recommended to be between 1 and 2. As a suggestion, you can continue experimenting to see the results.
      Also, the prompt has an impact on the original video's style.

  • @sudabadri7051
    @sudabadri7051 10 months ago

    Awesome thanks mate

    • @goshniiAI
      @goshniiAI  10 months ago

      I'm glad I could assist, and I'm grateful for your feedback.

  • @omarzaghloul6169
    @omarzaghloul6169 8 months ago

    Nice... it's much lighter & faster... it works perfectly. How can I make details more consistent & less prone to changing randomly? For example, the character's hair color & clothes keep changing.

    • @goshniiAI
      @goshniiAI  8 months ago

      I'm pleased to hear it's working well for you and that it seems lighter and faster!
      You could try a few suggestions: playing with the Lora weights or using a fixed seed for each frame.

    • @omarzaghloul6169
      @omarzaghloul6169 8 months ago

      @@goshniiAI Thank you for the prompt reply.
      ... In your workflow, you are using multiple Loras... which one should I adjust the weights on?

    • @goshniiAI
      @goshniiAI  8 months ago

      @@omarzaghloul6169 The Loras I used were chosen to fit the animation theme, which may not work in your instance; looking for good character Loras may be helpful in your case.

  • @90boiler
    @90boiler 5 months ago

    I don't understand; my result is only a depth map video, whatever I try. Can you post a screenshot of your final work so I can see the models you use?

    • @goshniiAI
      @goshniiAI  5 months ago

      Hi there, sorry to read that; however, the workflow can be downloaded for free using the link provided in the description.

    • @90boiler
      @90boiler 3 months ago

      @@goshniiAI I get: `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS. I run it on a MacBook M1 Pro.

    • @goshniiAI
      @goshniiAI  3 months ago

      @@90boiler For now, this setup can still produce good outcomes, although it is not as optimised as it is on NVIDIA GPUs. Some users have achieved success by reducing batch sizes or simplifying node setups to reduce system load. I hope we see better workflows soon for CPUs.
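      For readers hitting the same MPS warning, the fallback the message names can be enabled before launching ComfyUI. A minimal sketch, assuming the standard `python main.py` launch from the ComfyUI folder:

```shell
# Let PyTorch fall back to the CPU for ops that lack an MPS kernel,
# instead of aborting with an error. Slower for those ops, but it runs.
export PYTORCH_ENABLE_MPS_FALLBACK=1
echo "MPS fallback enabled: $PYTORCH_ENABLE_MPS_FALLBACK"
# Then start ComfyUI as usual from its folder, e.g.:
# python main.py
```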

  • @SejalDatta-l9u
    @SejalDatta-l9u 7 months ago

    Great video!
    A few quick questions:
    1. Can you show an instance of image to video using the LCM method? An image of a person copying the movement of a video. Think DWPose, etc.
    2. How would you treat a situation where you have a person in a video clip, but when translated to DWPose, some of the movement is cut off screen?
    3. Do you have an LCM video that you've upscaled to keep the quality and fix deformed faces?
    You've earned a loyal subscriber, my friend!

    • @goshniiAI
      @goshniiAI  7 months ago

      Hello there, and thank you for your support!
      I believe the use of ControlNet can help with question 1.
      When dealing with movements cut off by the screen in DWPose, ensure your subject is fully in frame throughout the video. Cropping or resizing the clip might help.
      For upscaling LCM videos and fixing deformed faces, you can include a HiRes fix in the workflow, or tools like Topaz Video AI can help upscale and refine the details of your animation.

  • @phi1s0n
    @phi1s0n 10 months ago

    Is LCM AnimateDiff possible with SDXL models?

    • @goshniiAI
      @goshniiAI  10 months ago +1

      Unfortunately, this workflow is not compatible with SDXL models. I am researching and hope to share the process of using SDXL models soon. I'd love that as well.

  • @linashu6381
    @linashu6381 10 months ago

    Thank you very much for your sharing.
    I ran into a problem: "DepthAnythingPreprocessor" shows red. I used the Manager's "Install Missing Custom Nodes",
    but it displays the error: "File "D:\AI\ComfyUI_Full\ComfyUI\custom_nodes\ComfyUI-Inference-Core-Nodes-main\__init__.py", line 1, in <module>
    from inference_core_nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
    ModuleNotFoundError: No module named 'inference_core_nodes'
    Cannot import D:\AI\ComfyUI_Full\ComfyUI\custom_nodes\ComfyUI-Inference-Core-Nodes-main module for custom nodes: No module named 'inference_core_nodes'". Excuse me, how can I solve this problem? 👧

    • @goshniiAI
      @goshniiAI  10 months ago

      Hello there, I tried to follow your path, but I don't have (ComfyUI-Inference-Core-Nodes-main) in my custom nodes installation folder. Make sure that all the necessary files and dependencies are properly installed and located in the specified directories, since our setups may be different.
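      A common fix for "No module named ..." errors from a ComfyUI custom node is to install that node's Python dependencies into the same environment ComfyUI runs with. A hedged sketch (the folder name comes from the error above; this only applies if the node ships a requirements.txt, and pip must belong to the Python that launches ComfyUI):

```shell
# From the ComfyUI root, install the failing custom node's dependencies.
cd custom_nodes/ComfyUI-Inference-Core-Nodes-main
pip install -r requirements.txt
# Restart ComfyUI afterwards so the node is re-imported.
```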

  • @Meh_21
    @Meh_21 6 months ago

    Hi! Your workflow uses only the ControlNet2 group; LineArt is bypassed. :)

    • @goshniiAI
      @goshniiAI  6 months ago

      Hello there. You are correct. :)
      The workflow has two possible preprocessors for ControlNet; if the sampler custom node gives no problems, you can use both.
      However, you do not have to use only one; you are free to use either of the preprocessors or switch between them entirely, depending on what you require.
      Thank you for the observation.

    • @Meh_21
      @Meh_21 6 months ago

      @@goshniiAI Thanks to you, great workflow.

    • @goshniiAI
      @goshniiAI  6 months ago

      @@Meh_21 You are welcome! I'm grateful

  • @викторВиктор-ы5ж
    @викторВиктор-ы5ж 10 months ago

    Greetings from Russia. I loaded a 3-second video, but the Video Combine shows only 1 second of video. How can I increase the time from 1 second to 3 seconds? I'm writing via Google Translate!

    • @goshniiAI
      @goshniiAI  10 months ago

      Hello there, glad to hear from you from Russia.
      1. Increase Frame Load Cap (Load Video node): drive.google.com/file/d/1hIm53FFZW6xW2qmY7jESERqvAEy04Dta/view?usp=sharing
      2. Increase Batch Size (Empty Latent Image): drive.google.com/file/d/1tuqv9CsdtmjvN1IojzwJKSzwZZ_ckY3E/view?usp=sharing
      These numbers need to match for the desired duration.
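      The relationship between the two settings is simple arithmetic; both should equal the clip's frame count. A sketch, assuming a constant frame rate (`frame_load_cap` is the Load Video parameter referenced above):

```python
def frames_for_duration(seconds, fps):
    """Frame count to enter in BOTH Load Video's frame_load_cap and the
    Empty Latent Image batch size for the desired clip length."""
    return int(seconds * fps)

# A 3-second clip at 30 fps needs both values set to:
print(frames_for_duration(3, 30))  # 90
```

      If the two values disagree, the output is trimmed to the smaller one, which is why a 3-second input can come out as a 1-second clip.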

  • @MisterCozyMelodies
    @MisterCozyMelodies 10 months ago

    That's awesome! Could you tell me your CPU, GPU, and RAM?

    • @goshniiAI
      @goshniiAI  10 months ago +1

      Thanks for the compliment! My setup includes an Intel Core i7 processor, an NVIDIA GeForce RTX 3060 GPU, and I have 32GB of RAM. tinyurl.com/mtwjn4bp

    • @MisterCozyMelodies
      @MisterCozyMelodies 8 months ago

      @@goshniiAI thanks

  • @cgstone30
    @cgstone30 10 months ago

    Fire drop

    • @goshniiAI
      @goshniiAI  10 months ago

      Blazing feedback! Thanks a lot.