Is X-Portrait2 really that good? How does it compare with LivePortrait & others?
This is a comparison between the recently released X-Portrait2 and other face animation tools like LivePortrait.
The code and models for X-Portrait2 have not yet been released. I plan to implement a ComfyUI node when they are released, if no one else does so first.
Views: 254

Videos

Easily create 512x512 Driving Video for LivePortrait (Face Animation) in ComfyUI
1.1K views · 21 days ago
This is a brief overview of a new ComfyUI node called 'Video for LivePortrait', which can be used to create a driving video from a source video for best results in LivePortrait. Github Repository: github.com/Isi-dev/ComfyUI-Animation_Nodes_and_Workflows
Easily Create Video Compilations in ComfyUI (videos to video)
1.4K views · 21 days ago
This video introduces a new ComfyUI custom node called 'Join Videos' and shows how it can facilitate compiling several videos into a new video. Github repository: github.com/Isi-dev/ComfyUI-Animation_Nodes_and_Workflows
ComfyUI Inspyrenet Rembg Assistant node
578 views · 1 month ago
Quickly Create Paintings & Cartoons from Images & Videos in ComfyUI
300 views · 2 months ago
New ComfyUI UniAnimate Nodes for Image to Video and Image to Image Pose Transfer
5K views · 2 months ago
Comparing current image to sketch or line art nodes in ComfyUI
329 views · 2 months ago
Introducing ComfyUI Image to Drawing Assistants
665 views · 2 months ago
Installation Guide for ComfyUI UniAnimate Nodes
1.4K views · 3 months ago
Basic ComfyUI workflow for UniAnimate (An image animation ai project)
3.4K views · 3 months ago
From dusk to dawn (stable diffusion animation) #ai #blender #stablediffusion #film #animation
42 views · 8 months ago

COMMENTS

  • @sudabadri7051
    @sudabadri7051 26 minutes ago

    Very cool

  • @ChikadorangFrog
    @ChikadorangFrog 1 hour ago

    liveportrait is dead once this is released

  • @CyberMonk_36999
    @CyberMonk_36999 17 days ago

    cool!

    • @Isi-uT
      @Isi-uT 17 days ago

      Thank you.

  • @tanishqchahal9166
    @tanishqchahal9166 18 days ago

    Heyy, got this error while reposing an image: "Failed to init class <class 'ComfyUI-UniAnimate-W.tools.modules.autoencoder.AutoencoderKL'>, with /usr/local/lib/python3.10/dist-packages/torchaudio/lib/libtorchaudio.so: undefined symbol: _ZNK3c105Error4whatEv". Please help!!

    • @Isi-uT
      @Isi-uT 18 days ago

      It seems the torch & torchaudio in your ComfyUI environment are not compatible. You can check the versions of both libraries and confirm that they are compatible by visiting this pytorch site: pytorch.org/get-started/previous-versions/. Also note that this project was tested successfully with pytorch versions 2.0.1 & 2.3.1 with compatible torchvision and torchaudio libraries. I don't know if other versions work well.
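
      A minimal version check, assuming you run it with the same Python that ComfyUI uses (e.g. the portable python_embeded interpreter) and that the three packages are importable there:

        import torch
        import torchaudio
        import torchvision

        # Print the installed versions so they can be compared against the
        # compatibility table at pytorch.org/get-started/previous-versions/
        print("torch:", torch.__version__)
        print("torchaudio:", torchaudio.__version__)
        print("torchvision:", torchvision.__version__)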

  • @stoutimonstimulations
    @stoutimonstimulations 23 days ago

    how do I get these nodes?

    • @Isi-uT
      @Isi-uT 22 days ago

      You can get them from this github repository: github.com/Isi-dev/ComfyUI-Animation_Nodes_and_Workflows

  • @alfredfarr275
    @alfredfarr275 1 month ago

    You did it correctly

    • @Isi-uT
      @Isi-uT 1 month ago

      Thanks!

  • @IcebergLokaz
    @IcebergLokaz 1 month ago

    Try it without the bg

    • @Isi-uT
      @Isi-uT 1 month ago

      I did, but it looked better with the bg.

  • @animatedstoriesandpoems
    @animatedstoriesandpoems 1 month ago

    👍

    • @Isi-uT
      @Isi-uT 1 month ago

      Thank you.

    • @animatedstoriesandpoems
      @animatedstoriesandpoems 1 month ago

      @@Isi-uT Buddy... nothing appears at the video combine node... any suggestions?

    • @Isi-uT
      @Isi-uT 1 month ago

      You can check your CLI for any error.

  • @LoopnMix
    @LoopnMix 1 month ago

    Awesome tutorial, got both up and running. Thanks!

    • @Isi-uT
      @Isi-uT 1 month ago

      Thank you.

  • @toptipss99
    @toptipss99 1 month ago

    When running I get this error: use_libuv was requested but PyTorch was built without libuv support. I hope the author can help me figure out how to fix it. Thank you

    • @Isi-uT
      @Isi-uT 1 month ago

      I thought I had updated the code to handle this error. Did you install the nodes within the last 2 days?

    • @Isi-uT
      @Isi-uT 1 month ago

      If you did not install the nodes within the last 2 days, you can try updating the nodes.

  • @OverWheelsRJ
    @OverWheelsRJ 1 month ago

    Hi! What is the shortcut to "search" to add nodes?

    • @Isi-uT
      @Isi-uT 1 month ago

      UniAnimate Nodes for ComfyUI

  • @petEdit-h9l
    @petEdit-h9l 1 month ago

    Can it do more moves, like bending or turning to the back?

    • @Isi-uT
      @Isi-uT 1 month ago

      Yes it can. I have done a 360 before, which was great for guys, but long-haired ladies tend to lose part of their back hair. As for bending, it's great for male characters, but it doesn't handle boobs well. I will upload my tests soon.

  • @maikelkat1726
    @maikelkat1726 1 month ago

    Hey there, nice and simple setup, but the img2lineart assistant does not work? It hangs all the time without any output. The other nodes run fine and fast... what could be the issue? No logging info is available... hope to hear from you.

    • @maikelkat1726
      @maikelkat1726 1 month ago

      Somehow it works when creating a new workflow and not using the one provided... both video and image... and fast! Great.

    • @Isi-uT
      @Isi-uT 1 month ago

      Good to know it now works. The img2lineart node is more computationally demanding than the other nodes when the deep_clean_up option is greater than zero. But setting it to zero results in lots of noise in the output. The alternative is to reduce the value of the details, which, depending on the input image, could make most of the lines disappear.

  • @petEdit-h9l
    @petEdit-h9l 1 month ago

    Hi, is this better than mimic motion?

    • @Isi-uT
      @Isi-uT 1 month ago

      I really can't say if it is better in terms of output quality because I haven't tested mimic motion. I actually came across the paper on mimic motion before hearing of unianimate, but the VRAM requirements of mimic motion put me off. I don't know if any improvement has been made in that area since then. Based on what I have read, unianimate can animate images faster and can handle far longer video frames than mimic motion.

  • @nothing228
    @nothing228 1 month ago

    Guys, if someone needed to make manga with ComfyUI, what workflow should they use? To capture the characters separately?

    • @Isi-uT
      @Isi-uT 1 month ago

      I suggest you watch the following video by Mickmumpitz since I haven't done much related to your questions. He shared some workflows which might help you get started: ua-cam.com/video/mEn3CYU7s_A/v-deo.html

  • @벤치마킹-f1z
    @벤치마킹-f1z 1 month ago

    hi. Where is the workflow?

    • @Isi-uT
      @Isi-uT 1 month ago

      You can get the workflows from this github repository: github.com/Isi-dev/ComfyUI-UniAnimate-W
      You will find two json files at the root of the repository: UniAnimateImg2Vid.json & uniAnimateReposeImg.json.
      You can also find the workflows for the new nodes in the newWorkflows folder.

  • @Chrisreaction-bz8po
    @Chrisreaction-bz8po 2 months ago

    Hi, I am getting a "'VAE' object has no attribute 'vae_dtype'" error. Can you help me with what to download or what I need to do for this?

    • @Isi-uT
      @Isi-uT 2 months ago

      Try updating ComfyUI.

    • @Chrisreaction-bz8po
      @Chrisreaction-bz8po 1 month ago

      @@Isi-uT Hi sir, do you have any mail ID to contact?

    • @Isi-uT
      @Isi-uT 1 month ago

      @@Chrisreaction-bz8po isinsemail@gmail.com

  • @Chrisreaction-bz8po
    @Chrisreaction-bz8po 2 months ago

    Hi, I am getting a "'VAE' object has no attribute 'vae_dtype'" error. Can you help me with what to download or what I need to do for this?

    • @Isi-uT
      @Isi-uT 2 months ago

      You can identify the node that throws the error and try updating it. If that doesn't work, you can update Comfyui.

  • @DuCaffeine
    @DuCaffeine 2 months ago

    My friend, is ComfyUI running on the cloud the same as running on my local PC, meaning does it have the same features, such as uploading any model, using any model, adding, removing, modifying, and creating models, and many more? Is this true??

    • @Isi-uT
      @Isi-uT 2 months ago

      I have not yet used comfyui on the cloud, but based on what I have read & heard, it is generally true that you can do the things you mentioned. You can do as you like with models in colab since it uses your Google drive for storage, although I don't think you can run comfyui on colab with a free account. Other cloud services like comfy.icu & runComfy.com provide lots of models, workflows, and technical assistance that appear to make it easier to run comfyui than doing so locally. I don't know if there's provision to upload and modify models with these other platforms.

  • @darshannilankar586
    @darshannilankar586 2 months ago

    Can you provide me settings for rendering a 20 sec video in HD quality?

    • @Isi-uT
      @Isi-uT 2 months ago

      You should be able to render a video up to 20 sec if you have high VRAM. The longest I have done is 4 sec. Someone mentioned rendering up to 370 frames, which is a little above 12 sec for a 30fps video. The video quality depends on the inputs and the seed. The team behind the original project suggested using a seed of 7 or 11 on their project page. You have to keep experimenting with different seeds, and upscaling videos and images, to find out what works best.

    • @darshannilankar586
      @darshannilankar586 2 months ago

      Thanks, it was helpful. I'll try and respond when I get the desired result.

  • @ParthKakarwar-b7j
    @ParthKakarwar-b7j 2 months ago

    Can I use a different checkpoint like DreamShaper SD1.5 in place of v2-1_512-ema-pruned.ckpt? Can this help with low VRAM? I have 4 GB VRAM.

    • @Isi-uT
      @Isi-uT 2 months ago

      I really don't know because I haven't tried it. All I did was create a comfyui wrapper for the original project, so I don't know much about the unified model.

  • @ParthKakarwar-b7j
    @ParthKakarwar-b7j 2 months ago

    I installed it, poses are detected and aligned, BUT I have only 4 GB VRAM, therefore it shows an out-of-memory error. Is there a setting I can change in the config file? At least reposing 1 frame should work.

  • @uljanafil
    @uljanafil 2 months ago

    @Isi-uT, I have this error(((( UniAnimateImageLong Unknown error

    • @Isi-uT
      @Isi-uT 2 months ago

      Please can you post the full error message? Let's see if we can resolve it.

  • @buike9306
    @buike9306 2 months ago

    Can I add a webcam node instead of a load video node?

    • @Isi-uT
      @Isi-uT 2 months ago

      Interesting! I never considered that. I think you should be able to use a Webcam node, although I'm not familiar with any.

    • @buike9306
      @buike9306 2 months ago

      @@Isi-uT I will give it a try to see if it would work.

  • @ParthKakarwar-b7j
    @ParthKakarwar-b7j 2 months ago

    When I installed the custom node, IT MESSED UP MY PYTORCH VERSION.... Can you please help me get it working on my existing ComfyUI? This is my system now: Total VRAM 4096 MB, total RAM 23903 MB, pytorch version: 2.3.1+cu121, xformers version: 0.0.27

    • @Isi-uT
      @Isi-uT 2 months ago

      The xformers requirement makes the installation quite difficult, and it took me some time to get it working. Can you check for any errors in your CLI?

    • @ParthKakarwar-b7j
      @ParthKakarwar-b7j 2 months ago

      @@Isi-uT Actually, when I installed the UniAnimate custom node it reinstalled PyTorch to another version, and my ComfyUI was not working. Then I deleted the UniAnimate custom node and reinstalled pytorch version 2.3.1+cu121, and Comfy started working. BUT now I am not sure about installing the custom node again, as I fear it will mess up my pytorch again. Can you help please? Thanks for replying.

    • @uljanafil
      @uljanafil 2 months ago

      @@ParthKakarwar-b7j It's my problem too.

    • @Isi-uT
      @Isi-uT 2 months ago

      The pytorch version in the requirements.txt file in the Unianimate custom node is 2.3.1 which is the same as what you currently have, so I am quite surprised that it would install another pytorch version. The only other thing is to ensure that the xformers version is 0.0.27.

    • @Isi-uT
      @Isi-uT 2 months ago

      An alternative is to have another comfyUI installed for unianimate to avoid dependency conflicts with other custom nodes. That's what I usually do for new custom nodes with requirements that conflict with the ones I already have.

  • @ParthKakarwar-b7j
    @ParthKakarwar-b7j 2 months ago

    When I installed the custom node, IT MESSED UP MY PYTORCH VERSION.... Can you please help me get it working on my existing ComfyUI? This is my system now: Total VRAM 4096 MB, total RAM 23903 MB, pytorch version: 2.3.1+cu121, xformers version: 0.0.27

    • @Isi-uT
      @Isi-uT 2 months ago

      Your setup looks okay except the VRAM. I have only tested it with 12GB VRAM and sometimes my system struggles depending on the input.

    • @ParthKakarwar-b7j
      @ParthKakarwar-b7j 2 months ago

      @@Isi-uT Can't I use the reposer at least, with 4 GB VRAM...? And when I install the UniAnimate custom node it reinstalls pytorch to a new version and my ComfyUI doesn't work. Can you help me with this? Thanks for your reply.

    • @Isi-uT
      @Isi-uT 2 months ago

      The repose workflow might work. Considering your high RAM, I guess your shared VRAM is quite high. If setting up the node is interfering with the normal operation of your current ComfyUI, then I suggest you install a separate ComfyUI for UniAnimate so that you can focus on troubleshooting a specific installation. Sometimes I spend a whole day trying to make a workflow that worked in ComfyUI a month ago work again. The dependency conflicts among custom nodes, and those introduced by updates, are a serious issue.

  • @hawa11sfinest
    @hawa11sfinest 2 months ago

    God is good keep seeking the truth with an open heart and you will find it in Jesus

    • @Isi-uT
      @Isi-uT 2 months ago

      Thank you. I will keep seeking.

  • @Isi-uT
    @Isi-uT 2 months ago

    Please join the challenge. Let's learn from each other and create a better world for all.

  • @DuCaffeine
    @DuCaffeine 2 months ago

    Give me the name of the form or workflow, please.

    • @Isi-uT
      @Isi-uT 2 months ago

      The new workflows are image2VidLong.json & reposeImgNew.json. You can find both workflows in the newWorkflows folder in this github repository: github.com/Isi-dev/ComfyUI-UniAnimate-W

  • @DuCaffeine
    @DuCaffeine 2 months ago

    Brother, can it be run on Google Colab?

    • @Isi-uT
      @Isi-uT 2 months ago

      There's an implementation by someone else on Google Colab, although I haven't tried it. You can see this github repository: github.com/camenduru/UniAnimate-jupyter

  • @williamlocke6811
    @williamlocke6811 2 months ago

    "The size of tensor a (64) must match the size of tensor b (96) at non-singleton dimension 4"

    So at first I was getting the above error, but after some tinkering I found that if I set resolution_x to 768 the video will then render. But at 512 I get the above error. Is that something you can easily fix with an update to your node? Or maybe there is something I can do?

    The problem now is that when I was using your older node, I could get about 120 frames at 512. This was too short for my project. I became very excited to see you make a node that could render longer videos. But now at 768, that takes up a lot more VRAM, so I can only get 100 frames (23.4GB of VRAM). Can't go much higher without OOM errors. So, at least for me, your new LONG node is producing shorter videos than your older node ;) I really hope there is an easy fix :) Anything you can think of to reduce the amount of VRAM needed would be VERY helpful.

    • @Isi-uT
      @Isi-uT 2 months ago

      I haven't come across this error. Please can you show the full error so I can see the part of the code that threw it, and also the sizes of the picture and video, so that I can try to reproduce the error.

    • @williamlocke6811
      @williamlocke6811 2 months ago

      @@Isi-uT Sure. I've used a variety of resolution videos as the input, all portrait orientation. I started with 720x1080 and resized it down to 660x512 to see if that would help, but it didn't. I also tried different sized images, so that's not it. Here's what the CLI says...

      2024-09-14 19:33:07,823 - dw pose extraction - INFO - All frames have been processed.
      32 Ready for inference.
      Running UniAnimate inference on gpu (0)
      Loaded ViT-H-14 model config.
      Loading pretrained ViT-H-14 weights (X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\checkpoints/open_clip_pytorch_model.bin).
      X:\AI\ComfyUI3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\functional.py:5504: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
        attn_output = scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)
      Restored from X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\checkpoints/v2-1_512-ema-pruned.ckpt
      Load model from (X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\checkpoints/unianimate_16f_32f_non_ema_223000.pth) with status (<All keys matched successfully>)
      Avoiding DistributedDataParallel to reduce memory usage
      Seed: 30
      end_frame is (32)
      Number of frames to denoise: 32
      0%| | 0/25 [00:00<?, ?it/s]
      !!! Exception during processing !!! The size of tensor a (64) must match the size of tensor b (96) at non-singleton dimension 4
      Traceback (most recent call last):
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
          output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
          return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
          process_inputs(input_dict, i)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
          results.append(getattr(obj, func)(**inputs))
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\uniAnimate_Inference.py", line 128, in process
          frames = inference_unianimate_long_entrance(seed, steps, useFirstFrame, image, refPose, pose_sequence, frame_interval, context_size, context_stride, context_overlap, max_frames, resolution, cfg_update=cfg_update.cfg_dict)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\tools\inferences\inference_unianimate_long_entrance.py", line 76, in inference_unianimate_long_entrance
          return worker(0, seed, steps, useFirstFrame, reference_image, refPose, pose_sequence, frame_interval, context_size, context_stride, context_overlap, max_frames, resolution, cfg, cfg_update)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\tools\inferences\inference_unianimate_long_entrance.py", line 467, in worker
          video_data = diffusion.ddim_sample_loop(
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
          return func(*args, **kwargs)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\tools\modules\diffusions\diffusion_ddim.py", line 825, in ddim_sample_loop
          xt, _ = self.ddim_sample(xt, t, model, model_kwargs, clamp, percentile, condition_fn, guide_scale, ddim_timesteps, eta, context_size=context_size, context_stride=context_stride, context_overlap=context_overlap, context_batch_size=context_batch_size)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
          return func(*args, **kwargs)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\tools\modules\diffusions\diffusion_ddim.py", line 787, in ddim_sample
          _, _, _, x0 = self.p_mean_variance(xt, t, model, model_kwargs, clamp, percentile, guide_scale, context_size, context_stride, context_overlap, context_batch_size)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\tools\modules\diffusions\diffusion_ddim.py", line 719, in p_mean_variance
          y_out = model(latent_model_input, self._scale_timesteps(t).repeat(bs_context), **model_kwargs_new[0])
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
          return self._call_impl(*args, **kwargs)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
          return forward_call(*args, **kwargs)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\tools\modules\unet\unet_unianimate.py", line 522, in forward
          concat = concat + misc_dropout(dwpose)
      RuntimeError: The size of tensor a (64) must match the size of tensor b (96) at non-singleton dimension 4
      Prompt executed in 41.08 seconds

    • @Isi-uT
      @Isi-uT 2 months ago

      I see, thanks. I will look into it.

    • @Isi-uT
      @Isi-uT 2 months ago

      The UNET model was getting the default resolution from the config file, which could sometimes differ from the resolution used by the noise. I have updated the code to prevent the error. All you need to do is add:
      cfg.resolution = resolution
      at line 240 in tools/inferences/inference_unianimate_long_entrance.py and at line 234 in tools/inferences/inference_unianimate_entrance.py. Please let me know how it goes. As for the VRAM requirement, I can't think of any other way to reduce it. The inference initially required at least 22GB of VRAM to run, but that was reduced to around 10GB by transferring the clip_embedder and autoencoder computations to the CPU. The only advantage I know of for using the long version is that it maintains consistency of appearance in the output.
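
      For illustration only, the one-line change could be wrapped like the sketch below; the helper function is hypothetical, and only the cfg.resolution = resolution assignment and the two file locations come from the reply above.

        # Hypothetical sketch of the fix described above. In the repository, the assignment itself
        # goes at line 240 of tools/inferences/inference_unianimate_long_entrance.py
        # and at line 234 of tools/inferences/inference_unianimate_entrance.py.
        def apply_resolution_fix(cfg, resolution):
            # Make the UNET use the resolution selected in the node rather than
            # the default resolution read from the config file.
            cfg.resolution = resolution
            return cfg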

    • @williamlocke6811
      @williamlocke6811 2 months ago

      @@Isi-uT YES! That worked perfectly and the results are awesome lol! Am now able to create MUCH longer videos. Just made one with 370 frames, and maybe I can do longer! Thanks so much for your nodes, your help and your hard work :)

  • @DuCaffeine
    @DuCaffeine 2 months ago

    Brother, I watched your channel and I have two questions. The first question is, can I upload any video of mine that has a certain movement and have it put on a picture in a very professional manner? The second question is, give me a way to install it, please, brother. By the way, I am a new subscriber. Please, brother, take care of me 💯

    • @Isi-uT
      @Isi-uT 2 months ago

      Yes, you can upload any video and the movement will be transferred to the picture, but I cannot guarantee that it will be very professional. Sometimes, extra editing might be needed. Please note that this implementation is for the Windows OS.
      You can watch a video on the installation here: ua-cam.com/video/NFnhELV4bG0/v-deo.html
      Or you can install the custom nodes with the ComfyUI Manager by searching for: ComfyUI-UniAnimate-W
      You can download the required models (about 14GB) from huggingface.co/camenduru/unianimate/tree/main and place them in the '\custom_nodes\ComfyUI-UniAnimate-W-main\checkpoints' folder.
      In case you haven't done so, you can download ComfyUI from this link: www.comfy.org/
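
      If you would rather script the model download than grab the files manually, a rough sketch using the huggingface_hub package (an assumption; the reply above only describes a manual download) could look like this:

        # Rough sketch: pull the UniAnimate checkpoints (about 14GB) straight into the
        # custom node's checkpoints folder. Assumes huggingface_hub is installed and
        # that the local_dir path matches your own ComfyUI installation.
        from huggingface_hub import snapshot_download

        snapshot_download(
            repo_id="camenduru/unianimate",
            local_dir=r"ComfyUI\custom_nodes\ComfyUI-UniAnimate-W-main\checkpoints",
        )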

    • @DuCaffeine
      @DuCaffeine 2 months ago

      @@Isi-uT Brother, can it be run on Google Colab?

  • @Imarcher-s6c
    @Imarcher-s6c 2 months ago

    I didn’t know Maki was a real person. W cosplay and bo staff skills.❤

    • @Isi-uT
      @Isi-uT 2 months ago

      Yeah, she's good!

  • @kleber1983
    @kleber1983 2 months ago

    Too difficult to follow, I'm sorry...

    • @Isi-uT
      @Isi-uT 2 months ago

      You can now easily install the nodes with the ComfyUI Manager by searching for: ComfyUI-UniAnimate-W
      Then you can download the models from this huggingface repository: huggingface.co/camenduru/unianimate/tree/main
      Place the models in this folder: ComfyUI/custom_nodes/ComfyUI-UniAnimate-W/checkpoints
      Restart ComfyUI and you're ready to go.

    • @kleber1983
      @kleber1983 2 months ago

      @@Isi-uT I've got that far, but the models and requirements... I wish I could just download them normally. My pip install showed some errors and I'm too much of a noob to fix them...

    • @ovijatri1
      @ovijatri1 1 month ago

      @@Isi-uT No, there is no ComfyUI-UniAnimate-W. Of course there is ComfyUI-UniAnimate, but not the W version.

    • @Isi-uT
      @Isi-uT 1 month ago

      I initially named the github repository that houses the custom nodes ComfyUI-UniAnimate, but later changed it to ComfyUI-UniAnimate-W after a member of the comfyUI team notified me of a name conflict with a similar repository. You can confirm the new name by visiting github.com/Isi-dev/ComfyUI-UniAnimate-W

    • @ovijatri1
      @ovijatri1 1 month ago

      @@Isi-uT I already visited this, but in the manager the name is without the W.

  • @VintageForYou
    @VintageForYou 2 months ago

    Have you got the workflow link?

    • @Isi-uT
      @Isi-uT 2 months ago

      You can get the workflows from this github repository: github.com/Isi-dev/ComfyUI-UniAnimate-W

    • @VintageForYou
      @VintageForYou 2 months ago

      @@Isi-uT Downloaded the workflow from your link, but when I put it in ComfyUI I get this error: Unable to find workflow in publish.yml

    • @Isi-uT
      @Isi-uT 2 months ago

      The workflows are the .json files at the root of the repository e.g. UniAnimateImg2Vid.json, BasicUniAnimateWorkflow.json. The publish.yml file in the workflow folder located at the root of the repository is for sending updates to the comfy registry. Sorry for the confusion.

  • @sunithasnair4905
    @sunithasnair4905 2 months ago

    I can't find that app

    • @Isi-uT
      @Isi-uT 2 months ago

      You can find the nodes by searching with the following name in ComfyUI manager : ComfyUI-Img2DrawingAssistants

  • @숙회지
    @숙회지 3 months ago

    The pose is loaded perfectly, but a corresponding runtime error occurred: "use_libuv was requested but PyTorch was built without libuv support". I use pytorch 2.3.1 for CUDA 12.1. I'm on Windows 10. Is the version of CUDA the problem? I want to use this node.

    • @Isi-uT
      @Isi-uT 3 months ago

      The only time I remember getting this error was when I mistakenly updated the python_embeded env used by ComfyUI portable with a pytorch CPU version. First check the version of your pytorch to be sure it's a GPU version, e.g. 2.3.1+cu121.
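
      A quick way to confirm that, run with ComfyUI's own python_embeded interpreter (illustrative sketch):

        import torch

        # A CUDA build reports a version like "2.3.1+cu121"; a CPU-only build ends in "+cpu".
        print("torch version:", torch.__version__)
        print("CUDA available:", torch.cuda.is_available())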

    • @숙회지
      @숙회지 3 months ago

      @@Isi-uT i will try! thanks!

  • @NB_nobody
    @NB_nobody 3 months ago

    I somehow installed everything correctly. But it looks like my GTX 1060 is not able to handle it.

    • @Isi-uT
      @Isi-uT 3 months ago

      It might not be able to handle image to video. Did you try the image repose workflow?

    • @NB_nobody
      @NB_nobody 3 months ago

      @@Isi-uT I tried both, but It gets stuck during processing. I feel it should at least help with the image-to-pose workflow on a low-end GPU. Could you please create a detailed tutorial on the image repose workflow? Thank you. It would be great if there were a simple, dedicated node for image repose instead of dealing with video and frame rates, etc.

    • @Isi-uT
      @Isi-uT 3 months ago

      I will make a node specifically for changing the pose of an image soon, with more options. But note that the UniAnimate model was actually trained for image to video, so both the video and image workflows make use of the same models, resulting in the high VRAM requirement irrespective of the workflow. And of course more frames mean more VRAM. For now you can try reducing the sizes of both images and upscale later.
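
      As a rough illustration of shrinking the inputs first (Pillow is assumed to be available; the file names are placeholders):

        from PIL import Image

        # Halve the reference image before feeding it to the workflow;
        # the result can be upscaled again after animation.
        img = Image.open("reference.png")
        img = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
        img.save("reference_small.png")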

    • @NB_nobody
      @NB_nobody 3 months ago

      @@Isi-uT Thank you so much for your reply. Looking forward to the pose to image node.

  • @williamlocke6811
    @williamlocke6811 3 months ago

    GOT IT WORKING! :) I re-downloaded your repository and started with a fresh install of Comfy. Couldn't get it working on my main Comfy install for some reason. My big mistake was not understanding that Comfy has its own embedded python environment. I wasted hours installing things in CMD and in conda virtual environments, but nothing was working because.... Comfy has its own embedded environment. I am learning ^_^ Thanks so much for your nodes and for your help! :) I was able to go beyond the max 64 frame limit by editing uniAnimate_Inference.py, but then the limiting factor quickly becomes VRAM. I bet with a bit more tinkering I could ...

    • @Isi-uT
      @Isi-uT 3 months ago

      I'm happy you got it working. We live we learn. All the best.

  • @Isi-uT
    @Isi-uT 3 months ago

    In case you can't click on the links in the description, here they are:
    ComfyUI installation: comfy.org
    Github repository: github.com/Isi-dev/ComfyUI-UniAnimate-W
    Huggingface repository for the models: huggingface.co/camenduru/unianimate/tree/main

    • @ovijatri1
      @ovijatri1 1 month ago

      Oh... now i can find it in comfyui manager. Thanks for your time.

    • @Isi-uT
      @Isi-uT 1 month ago

      Thank you.

  • @Rachelcenter1
    @Rachelcenter1 3 months ago

    And the linked URL is cut off in your description underneath the video.

    • @Isi-uT
      @Isi-uT 3 months ago

      You can find the linked urls by clicking on ...more in the description.

    • @Rachelcenter1
      @Rachelcenter1 3 months ago

      @@Isi-uT I did. That's where it's cut off. See for yourself.

    • @Isi-uT
      @Isi-uT 3 months ago

      @@Rachelcenter1 Sorry I didn't understand you before. Thanks for pointing this out. It seems youtube shortens the links for some reason. I made a little modification so that clicking them should take you to the right address.

  • @williamlocke6811
    @williamlocke6811 3 months ago

    Can you tell me what versions of the following you are running? I'll try to match them on a fresh installation of Comfy.
    CUDA Toolkit:
    cuDNN Library:
    PyTorch:
    Python:
    xformers:
    You said you spent some time juggling dependencies yourself. Those are just the ones I THINK matter. Please include anything else that your experience tells you is important. I'm on Windows 10.

    • @Isi-uT
      @Isi-uT 3 months ago

      CUDA Toolkit: 11.8
      cuDNN Library: 8.9.7.29
      PyTorch: 2.3.1
      Python: 3.10
      xformers: 0.0.27
      Specify the xFormers version while installing, e.g. xFormers==0.0.27 (compatible with torch 2.3.1) or xFormers==0.0.20 (compatible with torch 2.0.1).

  • @williamlocke6811
    @williamlocke6811 3 months ago

    Thanks for this! I've been at it for a couple of hours and it's been tricky. I just can't seem to get the following to all work together...
    CUDA Toolkit: Version 11.8
    cuDNN Library: Version 8.8
    PyTorch: Version compatible with CUDA 11.8 (e.g., PyTorch 2.4.0)
    NVIDIA Drivers
    Python: Version 3.10 or later
    xformers: If used, compatible with your PyTorch version
    ComfyUI
    Does that list of dependencies look right to you? I install xformers, Comfy tells me that the version of xformers I installed isn't compatible with my version of pytorch. I install a new version of pytorch, but then Comfy isn't happy. I uninstall xformers, then Comfy doesn't see torch anymore. Etc., etc. Round and round I go. Gonna take a break and then try again with a fresh install of ComfyUI. Do you recommend installing a new ComfyUI in its own conda env?

    • @Isi-uT
      @Isi-uT 3 months ago

      Your environment looks okay, although I am not certain of Pytorch 2.4. I successfully ran unianimate with Pytorch 2.3.1 which is compatible with xFormers==0.0.27, and pytorch 2.0.1 which is compatible with xFormers==0.0.20. I recommend you specify the versions while installing the libraries otherwise pip will take you in circles.

    • @Isi-uT
      @Isi-uT 3 months ago

      Please let me know if you were successful in running the code with the fresh install of ComfyUI. And in case you didn't re-download the github repo before following this installation video, I suggest you replace all the contents of your requirements.txt with the following (assuming you have installed torch 2.3.1):
      numpy==1.26.0
      opencv-python==4.9.0.80
      pytorch_lightning==2.3.0
      lightning_utilities==0.10.0
      lightning_fabric==2.3.0
      torchmetrics==1.3.0.post0
      xformers==0.0.27
      einops==0.7.0
      onnxruntime==1.18.0
      open-clip-torch==2.24.0
      fairscale==0.4.13
      easydict==1.11
      imageio==2.33.1
      matplotlib>=3.8.2
      args
      You can then run requirements.txt in your Comfy environment.

    • @williamlocke6811
      @williamlocke6811 3 months ago

      @@Isi-uT That's great! I will try again tomorrow after work. I will let you know if I get it working, or not. Thanks again :)

    • @williamlocke6811
      @williamlocke6811 3 months ago

      @@Isi-uT OK, I GOT IT WORKING! I started with a fresh install of ComfyUI. I then downloaded your repository and ran ComfyUI. I installed the missing nodes via the Manager. I didn't re-download the 14GB of models, I just copied them over from earlier. And I made sure to run the requirements.txt file in your custom node folder. Here's a short list of the dependencies that need to line up and what is currently working for me...
      Python - 3.11.9
      Pytorch - 2.3.1+cu121
      Torchvision - 0.18.1+cu121
      CUDA - 12.1
      Xformers - 0.0.27
      I learned a lot, and hopefully anyone who is struggling will read this and won't make the same mistakes I made. My BIG MISTAKE was not understanding that ComfyUI has its own embedded python environment. I wasted hours. I was installing things via CMD in the global environment. I was installing things in the (base) Conda environment. And none of it was working because ...ComfyUI has its own embedded python environment. ....but FINALLY I got it working... Now I can play with UniAnimate and your custom nodes - THANK YOU! for all your help :)

  • @williamlocke6811
    @williamlocke6811 3 months ago

    Well, I was up all night trying to figure this out, but no luck. I was able to get the nodes to load, but strangely the run_align_pose.py node didn't generate poses when in the UniAnimate main folder. I had to move it into the main custom_nodes folder and then it worked. But when I get to the "Animate Image with UniAnimate" node I get all kinds of different errors. Hopefully you will have time to update your repository/make an installation video. I look forward to it, and thanks for all your effort on making those nodes for us :)

    • @Isi-uT
      @Isi-uT 3 months ago

      The dependencies of UniAnimate are quite challenging to reconcile. It took me about 2 days to get it working on ComfyUI. I will release an installation video as soon as the node is made available on the ComfyUI Manager. I have initiated the process.

    • @williamlocke6811
      @williamlocke6811 3 months ago

      @@Isi-uT That's great! If UniAnimate works as well as the examples shown, your video is going to become very popular. You might be the only one working on this, for now. "Animate Anyone" required very advanced workflows and a lot of luck just to get so-so results. Good luck to you and your new YT channel :)

    • @Isi-uT
      @Isi-uT 3 months ago

      @@williamlocke6811 Thank you. There's another ComfyUI-UniAnimate repo (github.com/AIFSH/ComfyUI-UniAnimate). The delay in the availability of the nodes through the ComfyUI Manager was due to a repository name conflict with the aforementioned repo. The nodes are now available through the Manager, but if you are still interested in seeing an installation video, then check the link I just added to the description of this video.

  • @Rachelcenter1
    @Rachelcenter1 3 months ago

    Seems like the error is with xformers. It won't let me install it because it says:
    ERROR: Failed to build installable wheels for some pyproject.toml based projects (xFormers)
    note: This error originates from a subprocess, and is likely not a problem with pip.
    ERROR: Failed building wheel for xFormers
    So I tried someone's suggestion about installing gcc@7 in order to fix the wheel issue, but when I tried to install it in the terminal, it told me: Error: gcc@7 has been disabled because it is deprecated upstream! It will be disabled on 2024-02-22. One github page said "Most of xFormers components will only work on Nvidia GPUs. We have no plans to support other platforms at this point (either M2 or AMD GPUs)" (I'm on a Mac M2 computer).

    • @Isi-uT
      @Isi-uT 3 months ago

      You're right. UniAnimate depends on xFormers for optimization. I tested the code on a Windows OS and I don't have access to a Mac, so I am not familiar with issues on that OS.

    • @Rachelcenter1
      @Rachelcenter1 3 months ago

      @@Isi-uT I wonder if you could create a version that doesn't require xFormers.

    • @Isi-uT
      @Isi-uT 3 months ago

      @@Rachelcenter1 I can't think of a way to make it work without xFormers for now.

  • @ismailmarzuki-qt1oj
    @ismailmarzuki-qt1oj 3 months ago

    Can I run it smoothly using an RTX 3060 12GB VRAM, and 16GB RAM system?

    • @Isi-uT
      @Isi-uT 3 months ago

      Yes you can. That's exactly my system's specs.

  • @williamlocke6811
    @williamlocke6811 3 months ago

    Small problem. I downloaded your repository, installed the requirements, downloaded the models and manually moved them to the correct folder. I uploaded the basicUniAnimateworkflow into Comfy and it told me I was missing 2 nodes - Gen_align_pose and UniAnimateImage. The ReadMe said I should install the missing nodes using the Comfy Manager but it couldn't find them. I also thought maybe they were hiding somewhere in your repository, but I couldn't find them either. Where can I find those 2 nodes? Thanks for your help, your nodes and your video!

    • @Isi-uT
      @Isi-uT 3 months ago

      Please confirm if the downloaded repository is in the '\ComfyUI\custom_nodes\' folder. The nodes are in the 'uniAnimate_Inference.py' file. Your folder structure should look something like: \ComfyUI\custom_nodes\ComfyUI_UniAnimate\uniAnimate_Inference.py & other files.

    • @williamlocke6811
      @williamlocke6811 3 months ago

      @@Isi-uT Yes, the ComfyUI-UniAnimate-main folder is in the Comfy custom_nodes folder. In your repository there are 2 .py files other than modeldownloader - uniAnimate_Inference, and run_align_pose. What should I do with them? Chatgpt has failed me :(

    • @Isi-uT
      @Isi-uT 3 months ago

      ​@@williamlocke6811 It means your folder structure is correct and the nodes should be visible, except if there are dependency issues. Please check your CLI for errors.

    • @williamlocke6811
      @williamlocke6811 3 months ago

      @@Isi-uT Good advice! I checked the CLI and discovered that your nodes require xformers, which I didn’t have installed. After adding xformers, it caused some issues with PyTorch, but I managed to sort it out by reinstalling everything. Now, all your nodes are working perfectly. Thank you for your help-you're a genius! :)

  • @williamlocke6811
    @williamlocke6811 3 months ago

    VERY appreciate you creating these nodes! :) Subscribed! Please make a video explaining the installation. It often is the trickiest part. Or can you just share a workflow and make sure "update missing nodes" takes care of everything? Is that possible?

    • @Isi-uT
      @Isi-uT 3 months ago

      Thank you for subscribing. I will make a video on the installation soon. The nodes cannot currently be installed through the ComfyUI Manager. I will look into the procedure for making it accessible through the manager. For now, you can download the code from the github repository, extract and place the 'ComfyUI-UniAnimate-main' folder in the 'custom_nodes' folder of your ComfyUI installation. The 'ComfyUI-UniAnimate-main' folder contains the two nodes.

    • @williamlocke6811
      @williamlocke6811 3 months ago

      @@Isi-uT I will attempt it. Thank you! And if I fail, I'll wait for your next video :p

    • @DeathMasterofhell15
      @DeathMasterofhell15 1 month ago

      Subscribed

    • @Isi-uT
      @Isi-uT 1 month ago

      @@DeathMasterofhell15 Thank you.