Thank you! Excellent workflow! I'm using the Q8 model and the quality is impressive.
How do I get the "Set_Vae" and "Set_video" nodes? They're missing and I can't find where to download them.
Search for KJNodes.
Install KJNodes from the Manager.
@@MarioSedlak I have already installed KJNodes and updated it, but it's still showing as missing with red borders.
@@MarioSedlak worked for me, thanks!
I'm unable to set the clip type to hunyuan_video on that GGUF CLIP node... I have updated ComfyUI too...
Exact same problem, no solution. Did you find anything, mate?
@@peacefk YES! Though... I tried everything, so I'm not sure which step finally resolved it. Let me describe what I did.
So, I tried to run the update_comfyui_and_python script in the update folder (ComfyUI portable). It ran, but at the end it had some errors saying certain packages aren't compatible, something like: "colpali-engine 0.3.4 requires pillow=9.2.0, but you have pillow 11.1.0 which is incompatible". So I tried to update colpali-engine... then it showed some more, and I kept updating until it was OK. At the end there were still a few errors like that, but I stopped there.
Then I went into ComfyUI and, in Manager, clicked Update All... then I noticed in the console that git was unable to fetch and it suggested adding those custom_nodes directories as safe.directory... so I did. Then I ran Update All again from the ComfyUI Manager and restarted ComfyUI...
Then voila... I could see that option for the hunyuan_video type.
You can try that. Good luck!
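In case it helps, here's roughly what those two manual fixes look like as a small Python sketch (just a sketch of my own setup: the portable path and the pillow package are whatever my error messages showed, yours will probably differ):

    # Rough sketch of the two manual fixes above, assuming the usual ComfyUI portable
    # layout with python_embeded\python.exe next to the ComfyUI folder. The portable
    # path and package name are examples only -- use whatever your own errors mention.
    import subprocess
    from pathlib import Path

    portable_root = Path(r"C:\ComfyUI_windows_portable")  # assumption: adjust to your install
    embedded_python = portable_root / "python_embeded" / "python.exe"

    # 1) Upgrade the package the updater complained about (pillow, in my case).
    subprocess.run([str(embedded_python), "-m", "pip", "install", "--upgrade", "pillow"], check=True)

    # 2) Mark each custom_nodes repo as a git safe.directory so Manager's Update All can fetch.
    for repo in (portable_root / "ComfyUI" / "custom_nodes").iterdir():
        if (repo / ".git").exists():
            subprocess.run(["git", "config", "--global", "--add", "safe.directory", str(repo)], check=True)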
@@peacefk I replied earlier explaining what I did to resolve the issue, but it didn't show up?
Cannot execute because a node is missing the class_type property.: Node ID '#122' - any solution for this?
Cannot execute because a node is missing the class_type property.: Node ID '#122'
Everything is on the newest version.
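For what it's worth, that error usually means one node in the graph couldn't be matched to an installed node (the ones drawn in red), often from a custom node pack that isn't installed or is out of date. The error already gives the node ID, so here's a quick way to see what that node is supposed to be, assuming you have the workflow .json saved locally (the filename here is just an example):

    # Rough sketch: look up the node ID from the error in the downloaded workflow .json
    # to see which node type it is -- usually one from a custom node pack that still
    # needs installing or updating. "workflow.json" is an example filename.
    import json

    with open("workflow.json", "r", encoding="utf-8") as f:
        wf = json.load(f)

    for node in wf.get("nodes", []):
        if node.get("id") == 122:  # the ID from the error message, without the '#'
            print("Node 122 type:", node.get("type"), "| title:", node.get("title"))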
Thank you very much for the information, excellent content!
Good job, works fine.
I get this error from UnetLoaderGGUF:
Unexpected architecture type in GGUF file, expected one of flux, sd1, sdxl, t5encoder but got 'hyvid'
But I have all the files from your video.
Can you try updating ComfyUI and all the nodes? The latest ComfyUI has built-in support for Hunyuan Video and LTX (video gen models), so you'll be able to select those rather than Flux or SDXL.
Doesn't work. I updated everything; the GGUF loader works when generating images, but it does not work with the Hunyuan GGUF. Very strange.
@@xclbrxtra same here
@@dsphotos Did you see any errors when you tried to update everything? Like some incompatible Python libraries, for example? Did you try to update those too? I know when I ran that update_comfyui_and_python script thingy, it couldn't continue because of those errors... so I updated them manually, then went into the Manager in ComfyUI and did Update All... that fixed this issue for me.
@@VuTCNguyenArtist I will try, but I kind of gave up on Hunyuan. You should definitely try WaveSpeed, it's an amazing boost for ComfyUI even on older PCs, even for video generation with LTX and Flux :-)
Cannot execute because a node is missing the class_type property.: Node ID '#122'
help
same problem : /
Is that the DualCLIPLoader (GGUF) node? If so, I got the same issue... I resolved it by updating ComfyUI... but while it said ComfyUI was already up to date, not all the custom nodes were... so I went into the Manager in ComfyUI and did "Update All"... that fixed it.
What folder does the GGUF go in?
It's interesting that none of my browsers (including the standard Microsoft Edge) allow me to download the workflow. It says: "The file contains malicious code, download canceled". I had to open it with Notepad and copy it.
Pff 🎉
I have a 3060 with 12GB; how can I modify the workflow to use more VRAM?
Download better GGUF models to get better results in the workflow, like the Hunyuan Video Q6 GGUF.
It looks like Tencent recently released an official FP8 version of their model. Would this be better than the city96 quants?
I've got 8GB VRAM and still get a torch out-of-memory error, anyone know a solution for this?
I have 16GB and still run out of memory...
Bro, why not use sageattention to speed up the task?
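Before changing anything, it might be worth checking how much VRAM is actually free right before you queue, then picking a smaller GGUF quant (Q4/Q5 instead of Q8) or lowering the resolution/frame count if it's tight; as far as I know ComfyUI also has a --lowvram launch flag that offloads more aggressively. A tiny check, just as a sketch:

    # Rough sketch: print free vs total VRAM so you can judge which GGUF quant /
    # resolution / frame count will actually fit before queueing the job.
    import torch

    if torch.cuda.is_available():
        free, total = torch.cuda.mem_get_info()
        print(f"Free VRAM: {free / 1024**3:.1f} GiB of {total / 1024**3:.1f} GiB")
    else:
        print("No CUDA device visible to this Python environment")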
I have a question: I have an NVIDIA RTX 3060 on my laptop with 6 GB of dedicated GPU memory, but 15.8 GB of shared GPU memory in total. How can I somehow use the full 16 GB as VRAM? If I run ComfyUI with the CUDA bat, my PC only uses the 6 GB on my NVIDIA card. My other graphics card on the laptop is Intel UHD Graphics with dual-channel memory; my CPU is an i7 and I have 32 GB RAM. Thanks.
The shared memory you see is not exactly GPU memory. Intel iGPUs don't have their own memory, so they use system RAM and the CPU for GPU-intensive tasks, but they are not at the level of a dedicated GPU. Also, as far as I know, AI stuff like ComfyUI needs CUDA, which is only present on NVIDIA GPUs, for faster generation, so you can't really use the shared memory to get the same results as dedicated VRAM.
Try the new ComfyUI node... called something like ComfyUI MultiGPU; you can connect multiple GPUs for any box in ComfyUI...
What folder does the CLIP GGUF go in?
ComfyUI/models/clip
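And for the other files, this is roughly how my folders ended up (a sketch, assuming a default install; on newer builds models/text_encoders and models/diffusion_models may also work, as far as I know):

    # Rough sketch of where each file goes in a default ComfyUI install. Folder names
    # are the standard ones; adjust the base path to your own setup.
    from pathlib import Path

    comfy = Path("ComfyUI")  # assumption: path to your ComfyUI folder
    expected = {
        "Hunyuan video GGUF (UnetLoaderGGUF)": comfy / "models" / "unet",
        "CLIP / text encoder GGUFs (DualCLIPLoader GGUF)": comfy / "models" / "clip",
        "VAE": comfy / "models" / "vae",
    }
    for what, folder in expected.items():
        print(f"{what}: {folder} (exists: {folder.exists()})")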
What is best for a custom LoRA character, this workflow or the LTXV workflow?
If you're looking for quality and coherency then Hunyuan is way ahead, but it's resource-heavy.
Thank you so much. I followed the steps based on your video. Although I encountered errors due to PyTorch not being up-to-date, after updating it, I was able to create the video successfully.
I have a question: how can I create a video based on a specific image?
Did you encounter the missing empty latent node? That's the only error I'm stuck on. Others suggested a Comfy update, but it didn't work for me. Haven't found many other ideas.
@@amarissimus29 No, I didn’t encounter any hidden issues.
The workflow was working smoothly, but in the final stage of video creation, it showed an error, which was resolved by updating PyTorch.
@@LeonAzevedo-ym3dq Which version of PyTorch?
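Not sure of the exact version either, but a quick way to see which PyTorch build ComfyUI's Python is actually using before and after the update (run it with the same python.exe that launches ComfyUI):

    # Rough sketch: print the installed torch version and whether CUDA is usable,
    # so you can compare before/after updating PyTorch.
    import torch

    print("torch:", torch.__version__)
    print("built for CUDA:", torch.version.cuda)
    print("CUDA available:", torch.cuda.is_available())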
Do you know if it has Image to Video yet?
The I2V diffusion model hasn't been released for Hunyuan yet. They said it should be out sometime this month.
Please share high-quality video generators for 16GB VRAM GPU users. Thank you. I have an NVIDIA A4000 and would like to know the best workflow for coherent video generation.
Just download a better Hunyuan model like the Q6 and the workflow remains the same 💯
which models are optimal for 12 GB?
Haven't tried it yet but based on filesize, I'll be trying Q5 and Q6 first.
Stopped using this and the quantized models because of the drop in quality. I'm loading the full 24GB official model into an RTX 3080 with 10GB VRAM. Simply allowing Comfy to swap and borrow RAM from my PC is good enough and works nearly as fast. If you have 32 or 64 GB of RAM then it works quite well even with the full model.
can you share your workflow pls?
@@altugozhan It's the official workflow from Comfy. It's written on their blog. Look for the article called "Running Hunyuan with 8GB VRAM and PixArt Model Support"
@@altugozhan It's posted on the official Comfy blog.
TLDR: it's technically not possible for that to be nearly as fast, not even close.
If you have a lot of system RAM then sure, the model can be loaded, but performance will degrade significantly.
VRAM is optimized for high throughput and sits on the GPU, very close to where the actual computation happens.
System memory, on the other hand, is optimized for low latency.
Whenever the relevant model weights are needed, VRAM has to be freed and new weight data loaded from system RAM. If RAM isn't sufficient, it can even crash the app or the whole system.
@@abhiabzs Only slightly, not significantly. I already tested this with multiple GPUs vs my GPU, including a 4090, and the performance hit isn't even noticeable apart from a delay of a couple of seconds at the start of the process when the swap/offload happens.
Which model do you suggest for 24GB VRAM?
Also, using sage attention really helps with Hunyuan.
Rtx 3090 24gb
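On the sage attention point above: if you try it, it's worth first checking the package is actually installed in the same Python environment ComfyUI runs from, otherwise there's nothing for ComfyUI to use. A tiny check (sketch):

    # Rough sketch: confirm the sageattention package is importable from the Python
    # environment ComfyUI uses (run with the same python.exe that launches ComfyUI).
    try:
        import sageattention
        print("sageattention found:", getattr(sageattention, "__version__", "version unknown"))
    except ImportError:
        print("sageattention is not installed in this environment")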
Damn aren't City96 amazing.
I have a GT1030 with 2GB of VRAM that they say will work or explode 😂
It crashes with 8GB VRAM.
If it crashes once, just queue it again; retrying once or twice actually makes it run. And try to exit any other applications that are using the GPU.
Whoa, is there img2video?
I am looking for custom video LoRAs such as this... Amazing video. I want to do some more custom LoRAs... Can you share your email ID so that I can share my requirements in detail?