Awesome video! Heads up: you might want to run a de-click filter on your voice audio, there are lots of tiny mouth clicks. You have a nice voice, the audio just needs an EQ or something.
Can't wait for TensorRT to be compatible with more stuff. I think you can do LoRAs for the same model on A1111, but I'm not sure whether the converted models are interchangeable between computers or tied to the specs of the machine where you create them. Nice 🎶.
In Automatic1111 you can bake in SD 1.5 LoRAs, but SDXL LoRAs are broken. Or you can use the LoRA branch, but LoRAs are weaker when used with TensorRT than they normally are.
@@jibcot8541 Yes, I guess we have to keep waiting for better integration.
Thank you very much for your information and effort. I did everything you said and it worked, but there is a problem: it gave almost the same performance as the normal setting. My video card is a mobile RTX 3060 with 6 GB. Only if I increase the size and set the batch size to 4 does it give a few seconds' advantage. That's all. I also could not convert turbo models; I guess TensorRT doesn't support them.
The first time you generate with TensorRT the speedup won't be noticeable, but if you tweak one setting, like bumping up the steps by 1 while keeping the prompt the same, you should then notice the speed boost. And yes, running larger batch sizes will show this speed gain. As for converting turbo models into TensorRT versions, it works, but I needed to add a flag, as I mentioned in this issue I submitted on the repo: github.com/phineas-pta/comfy-trt-test/issues/4
This helped me set up StreamDiffusion for ComfyUI. For those interested: it does run a lot faster, but you then need to consider your card's cooling. Seeing realtime SD with your own settings is something else. It's one thing watching some dude's waifus dance or spaghetti being eaten, but if you have your own goals and are doing texture work, backgrounds, color, abstract stuff, maybe even mundane photos where no one will be able to tell it's AI... this is insane. Even Sora etc. is a bit too obvious. Thanks again @boricuapabaiartist
This was such a pain to figure out before. Thanks for the easy instructions.
I can't see the ComfyUI TRT node :c
Hey, why did you create a new ComfyUI installation?
I do a separate ComfyUI install via a miniconda env when I want to test custom nodes that require Python packages the vanilla portable version doesn't ship with. That way I can try these newer custom nodes and avoid any issues that could occur with the portable ComfyUI version.
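For anyone wanting to try that, here's a minimal sketch of that kind of isolated setup. It assumes a standard miniconda install and the official ComfyUI repo; the env name `comfy-test` and the Python version are just my picks, and you'd still install torch and any node-specific packages to match your GPU and the nodes you're testing.

```shell
# Create an isolated conda env so experimental packages
# can't break your main ComfyUI install
conda create -n comfy-test python=3.10 -y
conda activate comfy-test

# Fresh ComfyUI clone used only for experiments
git clone https://github.com/comfyanonymous/ComfyUI comfy-test
cd comfy-test
pip install -r requirements.txt   # base dependencies

# install the extra packages the custom nodes need here, then launch:
python main.py
```

If a custom node breaks this env, you can just delete it (`conda env remove -n comfy-test`) and your portable install is untouched.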