I watched for the info, but stayed for what an enigma you are. Good video sir.
I appreciate that ;D
I love this channel, it feels authentic. I hope this guy gets all the good stuff; he's working hard on it
Thanks for bringing this new capability to our awareness. It works... it's a non-exact science, but is fun to play with.
that's a great way to put it.. I say it's still "computer science" and not quite "magic" yet
and btw.. yeah it's way fun to play with isn't it?! Come share your stuff on Discord and let us see
What setup do you have? I’ve been looking to upgrade mine for AI usage and yours seems to run AI video well.
It causes an error when the LLM model is being loaded and then an out of memory message appears. What LLM model should I download and use?
@@emergc load the workflow and look at the default models. Then look at the link in the description and you can find all the models there. Grab the correct one and put it in the diffusion models folder
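if it helps, the fetch-and-place step looks roughly like this.. the repo id and filename below are placeholders, grab the real ones from the link in the description:

```python
# Rough sketch: pull the checkpoint straight into ComfyUI's diffusion models folder.
# repo_id and filename are placeholders -- use the actual ones from the description link.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="some-org/HunyuanVideo",               # placeholder repo id
    filename="hunyuan_video_720_fp8.safetensors",  # placeholder filename
    local_dir="ComfyUI/models/diffusion_models",   # default diffusion models folder
)
```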
@@cognibuild the model seems to have been downloaded but causes an OOM. I have 12GB VRAM; what model will work with this amount of VRAM? Thank you for replying.
@@emergc yeah your 12 just might not be enough unfortunately. Two options may work, neither of which includes ip2v:
1) use the t2v low vram workflow
2) check out the gguf installation and get a q3 model
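if you go the gguf route, something roughly like this pulls a Q3 quant into the folder the ComfyUI-GGUF loader reads from.. the repo id and filename here are stand-ins, check the actual model listing for the real names:

```python
# Sketch only: download a Q3-quantized HunyuanVideo GGUF for low-VRAM cards.
# repo_id and filename are stand-ins -- use the real names from the GGUF repo.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="someone/HunyuanVideo-gguf",            # stand-in repo id
    filename="hunyuan-video-t2v-720p-Q3_K_S.gguf",  # stand-in filename
    local_dir="ComfyUI/models/unet",  # where the ComfyUI-GGUF unet loader looks
)
```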
Hey, I installed Hunyuan. While I'm generating something, it loads up to 50% or so and then the cmd window suddenly closes and a pop-up appears saying "reconnecting". Tried many times but it is still not working. Any solution?
hm.. uncertain.. maybe OOM?
Have you noticed that after one generation you can't do another one unless you restart via the manager? It doesn't say OOM, just "Prompt executed".
Hopefully it gets the speed of LTX soon. Hope Kijai accepts the pull request for TeaCache soon.
I haven't had that problem, although I did with LTX before the update.
I'm playing with Fast Hunyuan right now, which is definitely faster.. but I feel it loses quality. It's definitely interesting
@cognibuild unfortunately it's only available for the 4090 I believe... I'm on a 3090...
I do like getting learned in my python ai by a cowboy.
;D
Finally… by the way, I never got the video-to-video to work
kk.. what happened when you tried?
@ I just get the "it needs to be a multiple of" error and I tried forcing frames
@@BoyFromNyYT ah.. yeah, pain in the butt. Here's what I do.. make a change and look at the number in the error.. then make a one-tick change and see how the error changes.. keep doing that till it gets smaller and then it works
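if it's the usual frame-count constraint (Hunyuan generally wants counts of the form 4k+1, e.g. 25/49/73.. take that as an assumption and trust whatever number the error actually prints), a tiny helper skips the tick-by-tick hunting:

```python
# Sketch: snap a frame count to the nearest valid value, assuming the
# "multiple of" error means num_frames must be 4*k + 1 (25, 49, 73, ...).
# Adjust the step if your error message implies a different rule.
def nearest_valid_frames(requested: int, step: int = 4) -> int:
    k = round((requested - 1) / step)
    return max(1, k * step + 1)

for n in (48, 50, 73, 100):
    print(n, "->", nearest_valid_frames(n))  # 48 -> 49, 50 -> 49, 73 -> 73, 100 -> 101
```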
@ ok
In fact, this is an old config that still does not work on 10 GB of video memory ((
LTX at least works, but the results are very mediocre. I have never been able to generate a video from an image in Hunyuan. Only text-to-video works.
This config needs to be adapted to GGUF.
yeah gguf could possibly work but unfortunately I do not believe there is a gguf version of the ip2v workflow
Btw we would appreciate the time stamps
@@generated.moment I already made them. They're in the description
Hi mr trump