A1111: nVidia TensorRT Extension for Stable Diffusion (Tutorial)

  • Published 21 Dec 2024

COMMENTS • 67

  • @saadestudiofxssfx6039 • 10 months ago +2

    Excellent tutorial, thanks a lot.

  • @Senpaix3 • 6 months ago +1

    It's installing Optimum on mine, and it's been stuck for a while now. What should I do?

  • @saraartemi • 7 months ago

    Hi! How do you get past the (DEPRECATION) warning you show at 10:11 in the video?
    What keys do you press?

    • @controlaltai • 7 months ago +1

      You literally have to do nothing and let it proceed on its own.

  • @vruser8430 • 1 year ago +1

    The error comes at minute 13:16, fixing xformers... this is the message: Installing collected packages: torch, xformers
    Attempting uninstall: torch
    Found existing installation: torch 2.0.1+cu118
    Uninstalling torch-2.0.1+cu118:
    Successfully uninstalled torch-2.0.1+cu118
    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    torchvision 0.15.2+cu118 requires torch==2.0.1, but you have torch 2.1.1+cu121 which is incompatible.
    Successfully installed torch-2.1.1+cu121 xformers-0.0.23
    So we're stuck...
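The conflict in the log above is mechanical: torchvision 0.15.2 pins `torch==2.0.1`, while the xformers install pulled in torch 2.1.1+cu121. A minimal sketch of the check pip's resolver is performing (the function name is hypothetical; it ignores the PEP 440 local tag such as `+cu121`):

```python
def pin_satisfied(pin: str, installed: str) -> bool:
    """Return True if an exact `pkg==X.Y.Z` pin matches the installed
    version, ignoring the PEP 440 local tag (the `+cu118` suffix)."""
    required = pin.split("==", 1)[1]
    return installed.split("+", 1)[0] == required

# torchvision 0.15.2+cu118 pins torch==2.0.1, but torch 2.1.1+cu121
# ended up installed, so the pin is no longer satisfied:
print(pin_satisfied("torch==2.0.1", "2.0.1+cu118"))  # True
print(pin_satisfied("torch==2.0.1", "2.1.1+cu121"))  # False
```

The usual way out is to move torchvision to the build matching the new torch, which is what a later commenter in this thread did (torchvision==0.16.1+cu121 alongside torch 2.1.1+cu121).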

    • @saadestudiofxssfx6039 • 10 months ago

      I had this same problem and solved it with the explanation here: ua-cam.com/video/ubkOCjL0UB8/v-deo.html

  • @johnny.monteiro • 11 months ago

    My TensorRT tab is now missing, don't know what's up with that...

    • @controlaltai • 11 months ago +1

      They released a new update 2 or 3 weeks ago. Maybe it's something to do with that update, or maybe there was an update to A1111. Try ensuring both are updated. I personally haven't checked on it.

  • @DjDiversant • 1 year ago +4

    Thanks a lot 🎉

  • @xBennyx • 11 months ago

    How do I go about solving this after installing TensorRT the second time around?
    To create a public link, set `share=True` in `launch()`.
    Creating model from config: C:\AI\sd.webui\webui\configs\v1-inference.yaml
    Startup time: 7.5s (prepare environment: 1.6s, import torch: 2.7s, import gradio: 0.7s, setup paths: 0.7s, initialize shared: 0.2s, other imports: 0.4s, load scripts: 0.6s, create ui: 0.4s, gradio launch: 0.3s).
    Applying attention optimization: Doggettx... done.
    Model loaded in 2.5s (load weights from disk: 0.5s, create model: 0.2s, apply weights to model: 1.4s, load textual inversion embeddings: 0.2s, calculate empty prompt: 0.1s).
    *** Error running install.py for extension C:\AI\sd.webui\webui\extensions\Stable-Diffusion-WebUI-TensorRT.
    *** Command: "C:\AI\sd.webui\webui\venv\Scripts\python.exe" "C:\AI\sd.webui\webui\extensions\Stable-Diffusion-WebUI-TensorRT\install.py"
    *** Error code: 1
    *** stderr: Traceback (most recent call last):
    *** File "C:\AI\sd.webui\webui\extensions\Stable-Diffusion-WebUI-TensorRT\install.py", line 3, in
    *** from importlib_metadata import version
    *** ModuleNotFoundError: No module named 'importlib_metadata'
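The traceback above fails on `from importlib_metadata import version`: the third-party backport package is missing from the webui venv. Besides installing the backport into the venv, a common workaround pattern (a sketch, not the extension's actual fix) is a fallback import, since Python 3.8+ ships an equivalent in the standard library:

```python
# Fallback import: prefer the backport if present, otherwise use the
# stdlib module that ships with Python 3.8+.
try:
    from importlib_metadata import version  # third-party backport
except ImportError:
    from importlib.metadata import version  # standard library

print(callable(version))  # True
```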

    • @controlaltai • 11 months ago +1

      When installing the second time, are you removing the physical drive folder? Delete the physical drive folders from both places along with the extension from webui as shown in the video and try again. Let me know how it goes.

    • @Gothdir • 10 months ago

      @@controlaltai I'm running into the same error. You mean we should delete the venv and the extension folder before recreating them? Yes, I definitely did. Sadly, the error still occurs. Man, it's always such a pain to get this running, TensorRT that is.

    • @controlaltai • 10 months ago

      Yes, the venv and the extension folder. Those have to be deleted for a reinstall.

    • @Gothdir • 10 months ago

      @@controlaltai Yeah, did that and found a few other possible solutions - none worked, so I kinda gave up.

  • @FearfulEntertainment • 9 months ago +1

    After doing the last step it went back to the original error, boo...

    • @controlaltai • 9 months ago

      The xformers error? TensorRT had a recent update.

    • @Brunobarros-ph3fw • 7 months ago

      Me too. Have you resolved it?

    • @controlaltai • 7 months ago

      Skip the xformers step and try.

  • @enriqueicm7341 • 1 year ago +2

    Thank you!

  • @daemoniax3788 • 10 months ago

    Things went smoothly, but when I tried to install xformers (I had it already, but I wanted to make sure I had the latest one, so I didn't uninstall xformers, I just ran the command line for installing it), launching webui.bat afterwards showed errors again x)

    • @controlaltai • 10 months ago

      What errors are you getting? You don't have to get the latest version, only the one compatible with the CUDA and PyTorch versions required for TensorRT.

    • @daemoniax3788 • 10 months ago

      And if I generate using DreamShaperXL with a refiner, it went from 2 s/it to 6 s/it on my 4060 laptop GPU, using the latest NVIDIA Studio driver.

    • @daemoniax3788 • 10 months ago

      Oh wow, let me correct myself, it's 30 s/it now ahahaha, with an error running process_batch.

    • @daemoniax3788 • 10 months ago

      @@controlaltai The same as at 7:25: "the procedure entry point could not be located in the dynamic link library".

    • @controlaltai • 10 months ago

      You can proceed without xformers. Xformers does not cause that error and is not mandatory for TensorRT to work, just recommended.

  • @xyy2759 • 1 year ago +1

    Your explanations are clear. In my case, installing tensorrt==9.0.1.post11.dev4 and xformers results in an automatic reinstallation of torch 2.1.2, and I lose compatibility with torchvision. It's not blocking: I decided to install tensorrt==9.0.1.post12.dev4 and xformers 0.0.23, and I also reinstalled torchvision==0.16.1+cu121. I have a question: where did you find the file that we see in the control area, SD_unet=[TRT] sd_xl_base_1.0? Thanks!
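For reference, the version combination reported above can be captured in a small lookup. This is a sketch built only from versions mentioned in this thread, not an official compatibility matrix:

```python
# Working combination reported in this thread (illustrative only):
COMPAT = {
    "2.1.1+cu121": {"torchvision": "0.16.1+cu121", "xformers": "0.0.23"},
}

def matching_stack(torch_version: str):
    """Return the torchvision/xformers pair reported to match, or None."""
    return COMPAT.get(torch_version)

print(matching_stack("2.1.1+cu121"))
```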

    • @controlaltai • 1 year ago +1

      Hi, thank you! That file was generated using the settings shown in the video via TensorRT export. It is an RT engine file for the UNet. I have made all profile and direct files available for channel members via a community post.

    • @xyy2759 • 1 year ago

      It's gone bad.
      Building engine: 100%|##########| 6/6 [00:03

    • @controlaltai • 1 year ago

      Tell me, what settings did you use? Also try this: close everything, re-open, and without doing any image generation go directly to TensorRT to generate the engine. The reasons for this error, from what I have tested, are basically:
      1. Height and width have separate values.
      2. Optimal height and width are something other than 512/1024.
      3. A memory error during compilation (if red-colored remarks are appearing).
      4. The selected checkpoint and VAE must be correct. In TensorRT, the currently selected checkpoint and VAE are linked; the error occurs when they don't match, or have mismatched resolutions, like an SDXL checkpoint trained at the wrong resolution.
      At this point, just try again. First try a simple 1024x1024 or 512x512, then go variable. Variable, like I did in the video, requires more system RAM along with VRAM (24 GB VRAM is not enough).
      Hope this helps, let me know how it goes.
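Points 1 and 2 above amount to a sanity check on the export profile before building. A hypothetical pre-check, assuming each dynamic dimension must be ordered (min ≤ optimal ≤ max) and aligned to 64 (SD-style sizes such as 512, 896, 1024, and 1536 all satisfy this; the function name and the multiple-of-64 rule are assumptions):

```python
def profile_ok(minimum: int, optimal: int, maximum: int, multiple: int = 64) -> bool:
    """Check one dynamic-shape dimension of a TensorRT export profile:
    values must be ordered (min <= opt <= max) and aligned to `multiple`."""
    ordered = minimum <= optimal <= maximum
    aligned = all(v % multiple == 0 for v in (minimum, optimal, maximum))
    return ordered and aligned

print(profile_ok(896, 1024, 1536))  # True  (a valid height range)
print(profile_ok(1024, 896, 1536))  # False (optimal below minimum)
```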

    • @xyy2759 • 1 year ago +1

      @@controlaltai Hi, it was going to work but my drive ran out of available space 😅 Since then, I reinstalled SD and the venv from the beginning. Here are my settings:
      SD AUTOMATIC1111 version: v1.7.0-RC-5-gf92d6149
       •  python: 3.10.11
       •  torch: 2.1.1+cu121
       •  xformers: 0.0.23
       •  gradio: 3.41.2
       •  checkpoint: 31e35c80fc
       •  Stable-Diffusion-WebUI-TensorRT main 4c2bcafd
      TensorRT Exporter
      768x768 - 1024x1024 | Batch Size 1-4 (Dynamic)
      Advanced Settings
      Min batch-size: 1
      Optimal batch-size: 4
      Max batch-size: 4
      Min height: 896
      Optimal height: 1024
      Max height: 1536
      Right, I will follow your 4 points. Thanks!

    • @xyy2759 • 1 year ago

      @@controlaltai Something's wrong when exporting the engine. The first line said FP16 has been disabled, and when building the engine, FP16 is missing; only the flags [REFIT, TF32] remain.

  • @gamalfarag • 1 year ago

    Will this work with new cards only, or can it work with a Tesla P40?

    • @controlaltai • 1 year ago

      Technically, it should leverage Tensor Cores. However, it's NVIDIA, so here is their official article: nvidia.custhelp.com/app/answers/detail/a_id/5487/~/tensorrt-extension-for-stable-diffusion-web-ui
      They clearly write "leveraging the Tensor Cores in NVIDIA RTX GPUs". I suggest you try. If it doesn't work, it means they are blocking it on purpose.

    • @Morning730-xm6vh • 11 months ago

      @@controlaltai On an AWS g5.xlarge A10 GPU, almost no improvement. Have you encountered the same problem?

    • @controlaltai • 11 months ago

      What were the training specs used for the engine?

  • @andrejlopuchov7972 • 11 months ago +2

    Too bad it does not work for video generation just yet.

  • @blender_wiki • 1 year ago

    The TensorRT extension has been available for 2 months.

  • @Heldn100 • 9 months ago

    I can't create one ~ I get this:
    (polygraphy.exception.exception.PolygraphyException: Failed to parse ONNX model. Does the model file exist and contain a valid ONNX model?)

    • @controlaltai • 9 months ago +1

      Well, you have to successfully generate an ONNX model first; then it builds the engine file.

    • @Heldn100 • 9 months ago

      @@controlaltai How? Where?

    • @controlaltai • 9 months ago +1

      Check the video: when you run it the first time, it gives an error message, but in the command prompt it is generating the ONNX file. It takes time; let it run.
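A trivial pre-flight check for the PolygraphyException quoted above: confirm the exported ONNX file actually exists and is non-empty before starting the engine build. This is a sketch only (the real validation is the ONNX parser itself), and the path in the usage comment is a hypothetical example:

```python
from pathlib import Path

def onnx_ready(path: str) -> bool:
    """True if the exported ONNX file exists and is non-empty; a missing
    or zero-byte file is what makes the ONNX parser fail."""
    p = Path(path)
    return p.is_file() and p.stat().st_size > 0

# Example (hypothetical location for an exported UNet ONNX file):
# onnx_ready(r"C:\AI\sd.webui\webui\models\Unet-onnx\sd_xl_base_1.0.onnx")
```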

  • @cyril1111 • 1 year ago +1

    WOW, you gain 15s! lol, not a big change tbh - that's why I haven't dabbled with it yet.

    • @xyy2759 • 11 months ago +4

      Yes, 15 crazy seconds 😁
      In my case:
      [512x512 upscaled 768x768] NO TRT - Time taken: 7 min. 0.7 sec.
      [512x512 upscaled 768x768] TRT ON - Time taken: 2 min. 22.9 sec.
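The timings above work out to roughly a 2.9x speedup, not a flat 15 seconds:

```python
# Times reported above for 512x512 upscaled to 768x768.
no_trt = 7 * 60 + 0.7     # 420.7 s without TensorRT
with_trt = 2 * 60 + 22.9  # 142.9 s with TensorRT

speedup = no_trt / with_trt
print(f"{speedup:.2f}x")  # 2.94x
```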

    • @cyril1111 • 11 months ago

      @@xyy2759 Well, that's a huge difference. But I'm on a 4090; it doesn't make a huge one for me :)

  • @phenix5609 • 9 months ago

    I was following the tutorial and everything was great until I installed xformers as shown in the video. At the end of the install I noticed that it installed xformers but also upgraded PyTorch to 2.2.1, uninstalled the previous 2.1.2, and said 2.2.1 was not compatible. I then tried the command you put at the bottom, since it mentioned torch 2.1.2, and it seems to have reinstalled the correct torch. But then, after modifying webui-user.bat to include the --xformers flag, I got this error in the command prompt window: "Traceback (most recent call last):
    File "C:\Stable2\webui\launch.py", line 48, in
    main()
    File "C:\Stable2\webui\launch.py", line 39, in main
    prepare_environment()
    File "C:\Stable2\webui\modules\launch_utils.py", line 386, in prepare_environment
    raise RuntimeError(
    RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check". I'm not sure what to do now; I'll maybe retry following the video...
    I'm using Forge A1111 - might that be the cause of those errors? Does it not need or want this extension? I'm pretty new at this, started not a week ago.
    I managed to correct it by giving it older libs for torch, torchvision, torchaudio, and xformers, which I found in another video, and once it launched it detected the old libs in the terminal and gave me commands for more up-to-date libs than the ones I had put.

    • @controlaltai • 9 months ago

      Hi, I have not tested this against Forge A1111. This was done on the standard A1111, v1.6 I believe. The xformers and PyTorch versions have changed by now. Try without xformers: it should work with a marginal difference. If you get it to work without xformers, then we can know that it's an xformers compatibility issue.

  • @relexelumna5360 • 8 months ago

    But you already have a 4090, so it makes no difference. I think TensorRT is more valuable for the RTX 4070 and below, which struggle with speed.

    • @controlaltai • 8 months ago

      I know and agree. This has widespread business application, where every second can save GPU rental time. It also makes a difference when doing a lot of images. But yeah, for a 4070 the time saved is way higher than on a 4090.

    • @relexelumna5360 • 8 months ago

      @@controlaltai Thanks for replying. Yesterday I installed it for my Stable Diffusion and tested it on my 4070, and it's faster. But then I deleted the whole Stable Diffusion install and started from scratch without TensorRT, because every model checkpoint and LoRA needs a TensorRT engine built, and some errors at startup gave me anxiety. But I think they will fix this; I'll be back when they do. I'm happy to be in the RTX family. TensorRT is game-changing for Stable Diffusion. I hope it becomes stable soon!

  • @vruser8430 • 1 year ago

    Here we have some different packages... mine are newer: Requirement already satisfied: filelock in c:\users\admin\sd.webui\webui\venv\lib\site-packages (from torch==2.1.1->xformers) (3.13.1)
    Requirement already satisfied: typing-extensions in c:\users\admin\sd.webui\webui\venv\lib\site-packages (from torch==2.1.1->xformers) (4.9.0)
    Requirement already satisfied: sympy in c:\users\admin\sd.webui\webui\venv\lib\site-packages (from torch==2.1.1->xformers) (1.12)
    Requirement already satisfied: networkx in c:\users\admin\sd.webui\webui\venv\lib\site-packages (from torch==2.1.1->xformers) (3.2.1)
    Requirement already satisfied: jinja2 in c:\users\admin\sd.webui\webui\venv\lib\site-packages (from torch==2.1.1->xformers) (3.1.2)
    Requirement already satisfied: fsspec in c:\users\admin\sd.webui\webui\venv\lib\site-packages (from torch==2.1.1->xformers) (2023.12.2)
    Requirement already satisfied: MarkupSafe>=2.0 in c:\users\admin\sd.webui\webui\venv\lib\site-packages (from jinja2->torch==2.1.1->xformers) (2.1.3)
    Requirement already satisfied: mpmath>=0.19 in c:\users\admin\sd.webui\webui\venv\lib\site-packages (from sympy->torch==2.1.1->xformers) (1.3.0)
    Installing collected packages: torch, xformers

  • @mohamadelsawi • 5 months ago

    Thanks for your simple explanation :)
    I saw in ua-cam.com/video/ssPhVOgd1Qc/v-deo.html, in the flags section, FP16. I searched a lot trying to activate it but I couldn't; I don't know how to do it. Can you help me with that if possible?
    Thanks again ^^

  • @Brunobarros-ph3fw • 7 months ago

    site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: '[WinError 127]