15:57 okay, you're the real deal if you can fine-tune like that on the fly from pure experience lol, that's awesome
Hi, how did you do it for SD3? I can't find the .json to build the static version.
Where is the model after you've built it?
It saves it to the same folder.
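If you're not sure which folder that is, a quick scan for .engine files will turn it up. This is just a rough sketch that assumes a default ComfyUI layout; point comfy_root at wherever your install actually lives:

    from pathlib import Path

    # Assumed location of the ComfyUI install; adjust for your setup.
    comfy_root = Path.home() / "ComfyUI"

    # TensorRT engines are plain files on disk, so just look for the
    # .engine extension anywhere under the install and report the path.
    for engine in comfy_root.rglob("*.engine"):
        size_mb = engine.stat().st_size / (1024 * 1024)
        print(f"{engine}  ({size_mb:.0f} MB)")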
nice! is it compatible with LoRas, controlnets, etc.?
It should, as it just replaces the checkpoint, so everything after should work just fine.
How do I make an SD3 .engine? Pls
What do you mean? I show the process; so far those are all the options you have.
@@EdwinBraun I don't have the TensorRT SD3. I can build SDXL Turbo fine, but I can't build the SD3 one.
@@EdwinBraun How did you build the SD3 TensorRT static engine? I can't figure it out. Did you use the TRT Engine SDXL Base build, or did you find a TRT SD3 build workflow? Help me, I beg lol
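For anyone curious what an engine build looks like under the hood, here is a minimal sketch of a static build with the TensorRT Python API. It assumes you already have an ONNX export of the model with fixed input shapes; the file names are placeholders and this is not the exact code the ComfyUI node runs:

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.INFO)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, logger)

    # "sd3_unet.onnx" is a placeholder for your own ONNX export.
    with open("sd3_unet.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # assumes the GPU handles FP16

    # A static build bakes fixed shapes into the engine, so no min/opt/max
    # optimization profile is set up here (the ONNX must have fixed shapes).
    engine_bytes = builder.build_serialized_network(network, config)
    with open("sd3_unet_static.engine", "wb") as f:
        f.write(engine_bytes)

A dynamic build would instead add an optimization profile with min/opt/max shape ranges rather than relying on fixed ones.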
This is all nice, but it just adds complexity, which is not optimal. NVIDIA could host all models for all their GPUs in an optimal format.
Yes, that would be nice. However, it seems the mix of drivers, hardware models, and CUDA versions is way too complex. It is pure chaos out there right now. Just look at PyTorch and CUDA versions and how this sometimes just does not work. Maybe AI can save us :)
@@cebasVT How did you build the SD3 TensorRT static engine? I can't figure it out. Did you use the TRT Engine SDXL Base build, or did you find a TRT SD3 build workflow? Help me, I beg lol
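On the PyTorch/CUDA mismatch point a few comments up, a quick sanity check like this shows what your PyTorch build was compiled against and whether the installed driver can actually run it (nothing here is specific to this workflow):

    import torch

    # Report what this PyTorch build expects and what the machine provides.
    print("PyTorch version:", torch.__version__)
    print("Built against CUDA:", torch.version.cuda)
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))
        print("Compute capability:", torch.cuda.get_device_capability(0))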