ComfyUI: nVidia TensorRT (Workflow Tutorial)

  • Published 30 Dec 2024

COMMENTS • 37

  • @controlaltai
    @controlaltai  6 months ago +4

    TensorRT is more complicated than what I have covered here. I have restricted the explanation to what is relevant to ComfyUI and Stable Diffusion. LoRA works; ControlNet support has not been added yet. Hope the video is helpful.

    • @adisatrio3871
      @adisatrio3871 6 months ago

      Could you make a video comparing Turbo, LCM, TensorRT, and FreeU, covering which combinations are compatible and which is the best in terms of speed and quality?

    • @controlaltai
      @controlaltai  6 months ago

      That won't help. Training the TensorRT engine only takes about 3 to 5 minutes, and whatever figures I show are highly dependent on my system. I have shown some benchmarks in the video. As far as I know, the gain is smallest for Turbo. I suggest you just train the engine and drop it into your workflow to test (a quick timing comparison is sketched at the end of this thread). TensorRT has zero effect on quality: if you optimize a Turbo model, the quality is based on that model. TensorRT only impacts performance.

    • @adisatrio3871
      @adisatrio3871 6 months ago

      @@controlaltai So is it possible to get much better performance by combining a Turbo model + LCM + TensorRT?

    • @controlaltai
      @controlaltai  6 months ago

      @@adisatrio3871 I have not tried it with LCM, just with Turbo, and I doubt it would make much difference, since Turbo alone only gets me 14 to 15 percent faster on average on my 4090. Maybe lower-end cards see a bigger boost.
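
      Since the exact percentage depends heavily on the GPU, the simplest check is to time the same prompt, seed, and step count once with the regular checkpoint loader and once with the TensorRT engine, then compare. A minimal sketch of that arithmetic; the two timings below are placeholders, not measurements:

          baseline_s = 10.0   # seconds per image with the normal checkpoint (placeholder)
          tensorrt_s = 8.7    # seconds per image with the TensorRT engine (placeholder)

          speedup = baseline_s / tensorrt_s - 1
          print(f"TensorRT is ~{speedup:.0%} faster on this run")  # ~15% for these numbers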

  • @onebitterbit
    @onebitterbit 6 months ago

    Happy to be a member... Thank you for all your work!!

  • @fraz66
    @fraz66 6 months ago

    I don't often leave comments, but I had to for this video! I came in just wanting to learn about TensorRT (which I did), but I also came out with a killer all-in-one workflow! Seriously, this is great, thank you!

  • @Hycil2023
    @Hycil2023 5 months ago

    Context is the multiplier for the 75-token prompt size. Setting the max to 128 will slow down TensorRT a lot and increase VRAM usage; for the average use case it should be set to around 2-4 (roughly several dozen tags' worth).
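
    A minimal sketch of the arithmetic behind that multiplier, assuming the roughly-75-token CLIP chunk mentioned above (the exact chunk size and parameter names may differ per node version):

        TOKENS_PER_CONTEXT = 75  # usable tokens per CLIP chunk (assumption taken from the comment above)

        for context in (1, 2, 4, 128):
            print(f"context={context:<3} -> room for ~{context * TOKENS_PER_CONTEXT} prompt tokens")

        # context=1   -> room for ~75 prompt tokens
        # context=2   -> room for ~150 prompt tokens
        # context=4   -> room for ~300 prompt tokens
        # context=128 -> room for ~9600 prompt tokens (far beyond a normal prompt;
        #                a huge max range only costs build time and VRAM)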

  • @moviecartoonworld4459
    @moviecartoonworld4459 6 months ago +1

    I am always grateful for the best lectures. I may have missed something, but the explanation of static batch size seems to be missing.

    • @controlaltai
      @controlaltai  6 months ago +1

      You can set the static batch size to whatever you want. If you set a number, you can use the engine only with that exact batch size. Typically 1, 2, or 4, as per your requirements (a quick way to inspect what an engine was built for is sketched at the end of this thread).

    • @moviecartoonworld4459
      @moviecartoonworld4459 5 months ago

      @@controlaltai thank you!!!!!
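
      If you want to check what a built engine actually accepts, you can inspect its input shapes with the TensorRT Python API. A minimal sketch, assuming TensorRT 8.6+ and a hypothetical engine filename; a fixed number in the batch dimension means the engine is static, while -1 means a dynamic min/opt/max range:

          import tensorrt as trt

          logger = trt.Logger(trt.Logger.WARNING)
          runtime = trt.Runtime(logger)

          # hypothetical filename, substitute your own engine file
          with open("sdxl_static_b4.engine", "rb") as f:
              engine = runtime.deserialize_cuda_engine(f.read())

          # print every input/output tensor with its shape
          for i in range(engine.num_io_tensors):
              name = engine.get_tensor_name(i)
              print(name, engine.get_tensor_mode(name), engine.get_tensor_shape(name))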

  • @onebitterbit
    @onebitterbit 6 months ago

    Error occurred when executing TensorRTLoader:
    'NoneType' object has no attribute 'create_execution_context'. This happens only on the SD3 workflow, when trying to load the TensorRT engine... any suggestions? Thanks!! (See the sketch at the end of this thread for what this error means.)

    • @controlaltai
      @controlaltai  6 months ago

      Are you using the shared SD3 workflow, or have you created your own?

    • @onebitterbit
      @onebitterbit 6 months ago

      @@controlaltai shared

    • @controlaltai
      @controlaltai  5 months ago

      Are you using the SD3 Medium VAE? Also, have you trained the engine on your own system, or are you using the shared engine? The shared engine file will only work on a 4090 system.
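
      For context on the error message itself: in the TensorRT Python API, deserialize_cuda_engine() returns None instead of raising when an engine file cannot be loaded (for example, one built with a different TensorRT version or for a different GPU), so the very next call fails with "'NoneType' object has no attribute 'create_execution_context'". A minimal sketch with a hypothetical file path:

          import tensorrt as trt

          logger = trt.Logger(trt.Logger.WARNING)
          runtime = trt.Runtime(logger)

          with open("sd3_medium.engine", "rb") as f:  # hypothetical path
              engine = runtime.deserialize_cuda_engine(f.read())

          if engine is None:
              # typical causes: engine built on another GPU or with another TensorRT version
              raise RuntimeError("Engine failed to load; rebuild it on this machine.")

          context = engine.create_execution_context()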

  • @Kikoking-y9b
    @Kikoking-y9b 5 months ago

    Hi. I need a trick to use more GPU VRAM. For example, I have 2 PCs and want to render with both. Do you know a solution for that? There is a node called NetDist, but it is very poorly documented and I am lost. Maybe you are smarter than me and could help somehow?

  • @takaikioshi9711
    @takaikioshi9711 5 months ago

    Can you generate a picture and upload it to a folder that we can download from? That would allow us to use the workflow easily without having to build it all manually.

    • @controlaltai
      @controlaltai  5 months ago

      Sorry, ready-made workflows are only available to paid YouTube channel members. However, I don't hide anything in the YouTube video tutorial; it's just something extra I give to those who support the channel monetarily (over and above the standard support of views and likes).

  • @ultimategolfarchives4746
    @ultimategolfarchives4746 5 months ago

    Hello sir. Were you able to make it work with CosXL models?

    • @controlaltai
      @controlaltai  5 months ago

      Hi. Doesn't work. Not compatible for the moment. Tried and tested.

    • @ultimategolfarchives4746
      @ultimategolfarchives4746 5 months ago

      @@controlaltai Sounds good. Thank you for your answer. Been following you for a little while now, and your videos are gold sir.
      Thanks

  • @andrejlopuchov7972
    @andrejlopuchov7972 6 months ago +1

    Does TensorRT work with AnimateDiff already, by any chance?

    • @controlaltai
      @controlaltai  6 months ago +1

      Not tested. Will test and let you know.

    • @andrejlopuchov7972
      @andrejlopuchov7972 5 months ago

      @@controlaltai Tested?

    • @controlaltai
      @controlaltai  5 months ago

      Sorry, I could not yet; give me a day or two and I will get back to you.

    • @controlaltai
      @controlaltai  5 months ago +1

      No, it's not practical for AnimateDiff at the moment. ControlNet doesn't work, so it would be highly restrictive. You can train the main checkpoint at your resolution, but the workflow would be extremely limited, seeing how heavily ControlNet is used with AnimateDiff.

  • @hleet
    @hleet 6 months ago +1

    "ComfyUI TensorRT engines are not yet compatible with ControlNets or LoRAs. Compatibility will be enabled in a future update."
    Not interesting for the moment.

  • @DoctorMandible
    @DoctorMandible 6 months ago

    Turbo and SD3 are closed-licensed. Who cares about them except hobbyists? And SDXL Lightning is open-licensed, but you ignored it. Very disappointing.

    • @controlaltai
      @controlaltai  6 months ago

      Lightning is not supported. I already mentioned which models are supported.