Excellent tutorial, thanks a lot!
It's installing Optimum on mine, and it's been stuck for a while now. What should I do?
Hi! How do you ignore the warning (DEPRECATION) you show at minute 10:11?
What keys do you press?
You literally have to do nothing and let it proceed on its own.
The error comes at minute 13:16, fixing xformers... this is the message: Installing collected packages: torch, xformers
Attempting uninstall: torch
Found existing installation: torch 2.0.1+cu118
Uninstalling torch-2.0.1+cu118:
Successfully uninstalled torch-2.0.1+cu118
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchvision 0.15.2+cu118 requires torch==2.0.1, but you have torch 2.1.1+cu121 which is incompatible.
Successfully installed torch-2.1.1+cu121 xformers-0.0.23
So we're stuck...
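One possible fix for this exact conflict (untested here, and assuming the cu121 builds; a later comment in this thread mentions the same torchvision==0.16.1+cu121 reinstall): activate the webui venv and install the torchvision build that pairs with torch 2.1.1+cu121:
venv\Scripts\activate
pip install torchvision==0.16.1+cu121 --extra-index-url https://download.pytorch.org/whl/cu121
After that, torchvision and torch should agree on both version and CUDA build, which is what the resolver is complaining about.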
I had this same problem and solved it with this explanation: ua-cam.com/video/ubkOCjL0UB8/v-deo.html
My TensorRT tab is now missing, don't know what's up with that...
They released a new update 2 or 3 weeks ago. Maybe it’s something to do with the update, or maybe there was an update to 1111. Try ensuring both are updated. I personally haven’t checked on it.
Thanks a lot 🎉
Always welcome! And thank you.
How do I go about solving this after installing TensorRT the second time around?
To create a public link, set `share=True` in `launch()`.
Creating model from config: C:\AI\sd.webui\webui\configs\v1-inference.yaml
Startup time: 7.5s (prepare environment: 1.6s, import torch: 2.7s, import gradio: 0.7s, setup paths: 0.7s, initialize shared: 0.2s, other imports: 0.4s, load scripts: 0.6s, create ui: 0.4s, gradio launch: 0.3s).
Applying attention optimization: Doggettx... done.
Model loaded in 2.5s (load weights from disk: 0.5s, create model: 0.2s, apply weights to model: 1.4s, load textual inversion embeddings: 0.2s, calculate empty prompt: 0.1s).
*** Error running install.py for extension C:\AI\sd.webui\webui\extensions\Stable-Diffusion-WebUI-TensorRT.
*** Command: "C:\AI\sd.webui\webui\venv\Scripts\python.exe" "C:\AI\sd.webui\webui\extensions\Stable-Diffusion-WebUI-TensorRT\install.py"
*** Error code: 1
*** stderr: Traceback (most recent call last):
*** File "C:\AI\sd.webui\webui\extensions\Stable-Diffusion-WebUI-TensorRT\install.py", line 3, in
*** from importlib_metadata import version
*** ModuleNotFoundError: No module named 'importlib_metadata'
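A hedged fix to try (assuming the venv path from the log above): install.py fails on importing importlib_metadata, which also exists as a standalone PyPI package, so installing it into the webui venv may clear the error:
C:\AI\sd.webui\webui\venv\Scripts\activate
pip install importlib_metadata
This is a sketch, not a verified fix; if the module still isn't found, the venv itself may be broken and need recreating, as discussed below.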
When installing the second time, are you removing the physical drive folder? Delete the physical drive folders from both places, along with the extension folder from the webui as shown in the video, and try again. Let me know how it goes.
@@controlaltai I'm running into the same error. You mean we should delete the venv and the extension folder before recreating them all? Yes, I definitely did. Sadly the error still occurs. Man, it's always such a pain to get this running, TensorRT that is.
Yes, the venv and extension folder. Those have to be deleted for a reinstall.
@@controlaltai Yeah, did that and found a few other possible solutions - none worked, so I kinda gave up.
After doing the last step it went back to the original error, boo...
The xformers one? Tensor RT had a recent update.
Me too. Have you resolved it?
Skip the xformers step and try.
Thank you!
Things went smoothly, but when I tried to install xformers (I already had it before, but I wanted to make sure I had the latest one, so I didn't uninstall xformers, I just ran the install command) and then launched webui.bat, it showed errors again x)
What errors are you getting? You don't have to get the latest version, only the one compatible with the CUDA and PyTorch versions required for Tensor RT.
And if I generate using DreamShaperXL with a refiner, it went from 2s/it to 6s/it on my 4060 laptop GPU, using the latest NVIDIA Studio driver.
Oh wow, let me correct myself, it's 30s/it now ahahaha, with "error running process_batch".
@@controlaltai The same as at 7:25: "the procedure entry point could not be located in the dynamic link library".
You can proceed without xformers. Xformers does not cause that error and is not mandatory for Tensor RT to work, just recommended.
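For anyone who wants to try exactly that, a minimal sketch of the standard A1111 way to run without xformers is to clear the flag in webui-user.bat (path and variable per a stock install):
rem webui-user.bat: remove --xformers from the launch arguments
set COMMANDLINE_ARGS=
Relaunch webui.bat afterwards; Tensor RT should still load without the xformers optimization.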
Your explanations are clear. In my case, installing tensorrt==9.0.1.post11.dev4 and xformers results in an automatic reinstallation of torch 2.1.2, and I lose compatibility with torchvision. It's not blocking: I decided to install tensorrt==9.0.1.post12.dev4 and xformers 0.0.23. I also reinstalled torchvision==0.16.1+cu121. I have a question: where did you find the file that we see in the control area, SD_unet=[TRT] sd_xl_base_1.0? Thanks!
Hi, thank you! That file was generated using the settings shown in the video via Tensor RT export. It is an RT engine file for the UNet. I have made all profile and direct files available for channel members via a community post.
It's gone bad.
Building engine: 100%|##########| 6/6 [00:03
Tell me, what settings did you use? Also try this: close everything, re-open, and without doing any image generation go directly to Tensor RT to generate the engine. The reason for this error, from what I have tested, is basically one of these:
1. Height and width have separate values.
2. Optimal height and width are something other than 512/1024.
3. A memory error during compilation (if red-colored remarks are appearing).
4. The selected checkpoint and VAE must be correct. In Tensor RT, the currently selected checkpoint and VAE are linked. The checkpoint and VAE may not match, or may have mismatched resolutions, like an SDXL checkpoint trained at the wrong resolution.
At this point, just try again. First try with a simple 1024x1024 or 512x512, then go variable. Variable, like I did in the video, requires more system RAM along with VRAM (24 GB VRAM is not enough).
Hope this helps, let me know how it goes.
@@controlaltai Hi, it was about to work, but my drive ran out of available space 😅 Since then, I reinstalled SD and the venv from the beginning. Here are my settings:
SD AUTOMATIC1111 version: v1.7.0-RC-5-gf92d6149
• python: 3.10.11
• torch: 2.1.1+cu121
• xformers: 0.0.23
• gradio: 3.41.2
• checkpoint: 31e35c80fc
• Stable-Diffusion-WebUI-TensorRT main 4c2bcafd
TensorRT Exporter
768x768 - 1024x1024 | Batch Size 1-4 (Dynamic)
Advanced Settings
Min batch-size: 1
Optimal batch-size: 4
Max batch-size: 4
Min height: 896
Optimal height: 1024
Max height: 1536
Right, I will follow your 4 points. Thanks!
@@controlaltai Something's wrong when exporting the engine. The first line said FP16 has been disabled, and when building the engine FP16 is missing; only the flags [REFIT, TF32] remain.
Will this work with new cards only, or can it work with a Tesla P40?
Technically, it should leverage Tensor Cores. However, it's NVIDIA. So here is their official article: nvidia.custhelp.com/app/answers/detail/a_id/5487/~/tensorrt-extension-for-stable-diffusion-web-ui
They clearly write "leveraging the Tensor Cores in NVIDIA RTX GPUs". I suggest you try. If it doesn't work, that means they are blocking it on purpose.
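A quick way to check this on any card (hedged: Tensor Cores require CUDA compute capability 7.0 or higher, i.e. Volta and newer, and the Tesla P40 is Pascal at 6.1) is to ask PyTorch for the device capability:
python -c "import torch; print(torch.cuda.get_device_capability())"
If this prints (6, 1) on the P40, the card has no Tensor Cores, so any Tensor RT speedup would come from graph optimizations rather than Tensor Core math.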
On an AWS g5.xlarge A10 GPU, almost no improvement. Have you encountered the same problem? @@controlaltai
What were the training specs used for the engine?
Too bad it does not work for video generation just yet.
The Tensor RT extension has been available for 2 months.
I can't create one ~ I get this:
(polygraphy.exception.exception.PolygraphyException: Failed to parse ONNX model. Does the model file exist and contain a valid ONNX model?)
Well, you have to successfully generate an ONNX model first; then it builds the engine file.
@@controlaltai how? Where?
Check the video: when you run it the first time, it gives an error message, but in the command prompt it is generating the ONNX file. It takes time, let it run.
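If the ONNX export did finish but parsing still fails, a hedged sanity check is to validate the file with the onnx package (the path here is a placeholder, not the extension's actual output location):
python -c "import onnx; onnx.checker.check_model(onnx.load(r'path\to\model.onnx')); print('ONNX model is valid')"
If the checker raises an error, the file was likely truncated by an interrupted export and the export should be re-run.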
WOW, you gain 15s! lol, not a big change tbh - that's why I haven't dabbled with it yet.
Yes, 15 crazy seconds 😁
In my case:
[512x512 upscaled 768x768] NO TRT - Time taken: 7 min. 0.7 sec.
[512x512 upscaled 768x768] TRT ON - Time taken: 2 min. 22.9 sec.
@@xyy2759 Well, that's a huge difference. But I'm on a 4090, it doesn't make a huge one for me :)
I was following the tutorial and everything was great until I installed xformers as shown in the video. At the end of the install I noticed that it installed xformers but also upgraded PyTorch to 2.2.1, uninstalled the previous 2.1.2, and said 2.2.1 was not compatible. I then tried the command you put at the bottom, seeing that it mentioned torch 2.1.2, and it seems it reinstalled the correct torch. But then, when I resumed the video after modifying webui-user.bat to include the --xformers flag, I got this error in the command prompt window: "Traceback (most recent call last):
File "C:\Stable2\webui\launch.py", line 48, in
main()
File "C:\Stable2\webui\launch.py", line 39, in main
prepare_environment()
File "C:\Stable2\webui\modules\launch_utils.py", line 386, in prepare_environment
raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check". I'm not sure what to do now; I'll maybe retry following the video...
I'm using Forge A1111, might that be the cause of those errors? Does it not need or want some extensions used with it? I'm pretty new at this, started not even a week ago.
I managed to correct it by giving it older libs for torch, torchvision, torchaudio, and xformers that I found in another video, and once it launched, it detected the old libs in the terminal and gave me commands for more up-to-date libs than the ones I had put in.
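For the "Torch is not able to use GPU" error, a hedged first step (run inside the webui venv) is to check whether the installed torch build can see CUDA at all, and if not, reinstall a CUDA build; the exact versions below are an assumption based on the torch 2.1.2 this comment mentions:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
pip install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cu121
If is_available() prints False alongside a +cpu version string, a CPU-only torch got installed over the CUDA one, which matches the symptom described above.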
Hi, I have not tested this against Forge A1111. This was done on the standard A1111, v1.6 I believe. The xformers and PyTorch versions have changed by now. Try without xformers first; it should work with a marginal difference. If you get it to work without xformers, then we will know it's an xformers compatibility issue.
But you already have a 4090, so it makes no difference. I think Tensor RT is more valuable for the RTX 4070 and below, which struggle with speed.
I know and agree. This has widespread business application, where every second can save GPU rental time. It also makes a difference when generating a lot of images. But yeah, for a 4070 the time saved is way higher than for a 4090.
@@controlaltai Thanks for replying. Yesterday I installed it for my Stable Diffusion and tested it on my 4070, and it's faster. But then I deleted the whole Stable Diffusion install and started from scratch without Tensor RT, because every model checkpoint and LoRA needs a Tensor RT engine made, and some errors at startup gave me anxiety. But I think they will fix this. I'll be back when they fix it. I'm happy to be in the RTX family. Tensor RT is a game changer for Stable Diffusion. I hope it becomes stable soon!
Here we have some different packages... mine are newer: Requirement already satisfied: filelock in c:\users\admin\sd.webui\webui\venv\lib\site-packages (from torch==2.1.1->xformers) (3.13.1)
Requirement already satisfied: typing-extensions in c:\users\admin\sd.webui\webui\venv\lib\site-packages (from torch==2.1.1->xformers) (4.9.0)
Requirement already satisfied: sympy in c:\users\admin\sd.webui\webui\venv\lib\site-packages (from torch==2.1.1->xformers) (1.12)
Requirement already satisfied: networkx in c:\users\admin\sd.webui\webui\venv\lib\site-packages (from torch==2.1.1->xformers) (3.2.1)
Requirement already satisfied: jinja2 in c:\users\admin\sd.webui\webui\venv\lib\site-packages (from torch==2.1.1->xformers) (3.1.2)
Requirement already satisfied: fsspec in c:\users\admin\sd.webui\webui\venv\lib\site-packages (from torch==2.1.1->xformers) (2023.12.2)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\admin\sd.webui\webui\venv\lib\site-packages (from jinja2->torch==2.1.1->xformers) (2.1.3)
Requirement already satisfied: mpmath>=0.19 in c:\users\admin\sd.webui\webui\venv\lib\site-packages (from sympy->torch==2.1.1->xformers) (1.3.0)
Installing collected packages: torch, xformers
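A quick hedged check after a run like this, to confirm the versions ended up aligned (torch, torchvision, and xformers must agree, as the conflicts above show):
pip show torch torchvision xformers | findstr /b "Name: Version:"
Each Name/Version pair should match the combination the tutorial targets, e.g. torch 2.1.1+cu121 with xformers 0.0.23.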
Thanks for your simple explanation :)
I saw in ua-cam.com/video/ssPhVOgd1Qc/v-deo.html, in the flags section, FP16. I searched a lot to try to activate it but I couldn't; I don't know how to do it. Can you help me with that, if possible?
Thanks again ^^
What is your error?
site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: '[WinError 127]