Thanks a million for this video, was going crazy trying to get my GPU on Windows to work with tf and spent many hours, this worked! Working with a combination of Tensorflow 2.17.0, cuDNN 8.9, CUDA 12.3 and TensorRT 10.6 - just needed to change some of the naming!
In more than three weeks of watching tutorials and looking for amazing solutions, you were the only one who could help me. Thank you very much.
You are welcome!! Really happy it works for you!!!
Hi, is this error normal "E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered.
E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
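For anyone hitting these "Unable to register ... factory" lines: they are usually harmless log noise, as long as TensorFlow still detects the GPU. A quick way to check is the same device-listing command used elsewhere in this thread (a sketch, assuming TensorFlow is installed in the active environment; TF_CPP_MIN_LOG_LEVEL is a standard TensorFlow environment variable):

```shell
# Confirm the GPU is still visible despite the E-lines;
# if a device is printed, training will use it.
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

# Optionally silence TensorFlow's C++ logging below FATAL
# (0 = all logs, 1 = no INFO, 2 = no WARNING, 3 = no ERROR)
export TF_CPP_MIN_LOG_LEVEL=3
```

If the device list is empty, the errors do matter and usually point at a CUDA/cuDNN version mismatch.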
Thanks a lot for the effort you've put in here. These worked for me: TensorFlow 2.18.0, CUDA 12.5, cuDNN v8.9.7, TensorRT 10.7.
This was absolute legend!
Thank you
Thank you!!! You can't image how much this video has helped me, literally wasted days on trying to install everything before I found you
I have been trying to do this for so long. Finally a video that showed me exactly how to do that and solved all my queries. Thank you so much!
You are welcome.
Thank you so much bro , I wasted hours and days trying to figure whats wrong. you deserve more support and subscribers.
You are welcome. Really happy to help!!!
You Are a LEGEND man!!! After all that searching you are the one who's done it for me. Keep Going. CHEERS!!!
Glad I could help
Works perfect as of 19/09/2024, amazing video!
Glad it helped!
Finally, the video I needed. I tried for the last 10 months but did not get any appropriate solutions. Your video finally helped me set up WSL with GPU access properly.
Maaan! You are
Works for you?
@@victoroliveroing Yes, it did. The only slight issue was a small mistake where he downloaded the Windows package rather than the Linux one. However, he corrected the error very soon after. Did you encounter any errors?
Thanks a lot, I followed each step and finally got everything done. The code works very well!
A very comprehensive and thorough guide. It definitely shows how much research you have done. Thank you for this amazing video! Also tensorflow is stopping support for TensorRT in the next update so if you are installing a newer version don't bother installing TensorRT
To the people who are facing issues when installing a different version of CUDA or cuDNN: in whichever command you run, make sure the version number matches the version you actually installed.
For example I installed version 12.3 of CUDA so in commands like:
sudo cp include/cudnn*.h /usr/local/cuda-12.1/include
I had to change it to:
sudo cp include/cudnn*.h /usr/local/cuda-12.3/include
Thoroughly check each command for version number or else you will face issues.
Just follow the steps mentioned in the video and you should be fine.
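The commenter's point above can be sketched as a version-parameterized install, so the number only has to be changed in one place. This assumes the standard cuDNN tar-archive layout extracted into the current directory, and CUDA_VERSION is whatever you installed (12.3 here is just an example):

```shell
# Set this to the CUDA toolkit version you actually installed (assumption: 12.3)
CUDA_VERSION=12.3

# Copy cuDNN headers and libraries from the extracted tar
# into the matching toolkit directory, then make them readable
sudo cp include/cudnn*.h "/usr/local/cuda-${CUDA_VERSION}/include"
sudo cp -P lib/libcudnn* "/usr/local/cuda-${CUDA_VERSION}/lib64"
sudo chmod a+r "/usr/local/cuda-${CUDA_VERSION}/include/"cudnn*.h \
               "/usr/local/cuda-${CUDA_VERSION}/lib64/"libcudnn*
```

Using one variable at the top avoids the mixed-version typo (12.1 vs 12.3) the comment warns about.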
This is legendary. Thanks a lot for putting all this together. I am so grateful. This has been troubling me for months. This thing has woken me up till 2 am. I am glad everything came up just perfectly. Thank you once again.
Brilliant, Thank You! Been trying to do this for at least 6 month now and finally got it to work!!!😁
Bro, you just saved me. I can't describe how helpful this video was. Thanks for this. Hope God blesses you.
Bro you're GOATED. The only tutorial that worked for me after searching everywhere. Subscribed and Liked
Appreciate it
Man, I don't know how it's possible but it's working. You did such a great tutorial for a noob as me that started 3 days ago learning Deep Learning. And it worked at the first try! Great job
Thank you so much! The only tutorial that has worked so far end-to-end. Thanks again!
You are welcome.
@@TechJotters24 I tried to run the same instructions but after installing tensorflow @ [python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"] , I got:
2024-07-25 22:47:21.340568: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-07-25 22:47:21.616111: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-07-25 22:47:21.706543: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-07-25 22:47:21.727644: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-07-25 22:47:21.870530: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
Would you happen to know what happened?
I was able to fix it with this code:
python3 -m pip uninstall "tensorflow[and-cuda]"
python3 -m pip install "tensorflow[and-cuda]==2.16.1"
Thank you so much. Please keep uploading. Learned a lot from you. Respect.
Thank you, I will
Very detailed video, Thumbs up for that. One suggestion though , plz zoom on to the specific point for better visibility
Noted
This helped me a looooot! thank you so much! now I can use all my GPU 🙌🏼
Great tutorial! Works smoothly with tf 12.7 (without tfrt)
Dont install TensorRT?
@@victoroliveroing I'm not entirely sure. But I believe back then there was a version conflict with TF2.17, Python3.12.7 and TFRT. Maybe there was another reason but I can't remember. If you can figure it out with tfrt, go for it. But since it's optional I decided against it.
@@markus9871 Can I install TF without TensorRT? I need to install TensorFlow 2.18.
You're the software wizard I needed, thank you very much
:D you are welcome
Thanks for the great tutorial! I have tried a few but this is the only one I got to work!! My MSc project thanks you!!!
Thank you so much for helping us in a detailed manner, thank you so much ! ❤
Thank you SO MUCH! All other resources are sooo behind that I wasted way too much time trying to get TensorRT to show up.
You are most welcome
wow, you have really helped me to set up my machine learning environment on my windows. Thank you!!
very helpful, thank you. Would be cool to see what kinds of projects you work to put machine learning to work.
Lol bro, I've been trying to find a solution to this problem for almost a week, I've read many gigabytes of information and visited every possible page on Nvidia's website, and somehow I can't solve this problem, you're my superman, thank you very much)❤❤❤
You are welcome!!!
Thanks a lot for your effort to explain how to finally succeed in installing TF with GPU using WSL
Shall we have to redo all the process again in 6 months with new versions of all those components ?
Gee !😁
Kind regards
Thank you very much man! Very helpful tutorial; made me save very much time!!! :-)
Glad to hear that!
Thank you my brother!
Very Useful video
You are welcome!!!
Thank you for preparing this guide, it was very helpful
You are welcome
Thank you!! Guide worked perfectly.
You are welcome!!!
thnx so much bro it's working on my first try
love u so much! thanks for the tutorial, my sir
Awesome video, thanks a lot. Until now I was using Ubuntu and Windows separately via dual boot and had TF-GPU installed in both places. Both have their benefits. Windows allowed me to open large models because somehow on Windows my GPU was able to access my system RAM too. But Ubuntu supported many features of torch that don't work on Windows. Now I am getting a new laptop and wanted to try this WSL. I want to know from your experience: is your GPU able to access system memory?
Thank you!! It works perfectly
You are the new Buddha ...because you are the only one who shows the "right way" !!
Thank you!!!!
true legend to solve the problem
Thank you!!
18:07 any reason why you chose Linux here over Ubuntu? (Your system is Ubuntu, right?)
Thanks, Man. This really helps.
It wasn't working for me because tensorflow[and-cuda] is now 2.18. I tried pip install "tensorflow[and-cuda]==2.16.1" and it works perfectly.
Thank you so much. It works, really save the time. GOD BLESS YOU.
You are welcome!!!
Thanks a lot man, this is the best video. I was trying to install for more than a day and couldn't. U r a saviour 🙏.
One thing I want to ask: the GitHub link you provided has some extra lines of code to execute at the end, like sudo rm and a few others you don't talk about in the video. Should I run them?
Glad I could help
Great tutorial! Thank you so much.
You're very welcome!
**THIS VIDEO CAN HELP YOU TO INSTALL ANY VERSION OF CUDA, CUDNN, TENSORRT, AND TENSORFLOW IN YOUR WSL2**
Thank you again for creating an updated video on this topic.
feedback: kindly fix the chapter timings, they are not in sync.
Hi, I’ll check it
Amazing, thanks!
Thank you!. This worked!
Thank you so much. Great tutorial! 👍👍👍👍👍👍👍
You are welcome!!!
You are the guy. Thank you very much.
Happy to help
you're the best!
Thank you !!!
Thank you!!!
Thank you a lot for this 🙏🙏
Thanks for your good instruction. May I ask a question? I get messages like "E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered" when I run python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))".
I followed your way of setting the PATH and library paths and checked them with echo, and ./test_cudnn returns "cuDNN successfully initialized." How can I fix it? Maybe it is my TensorFlow version: I installed 2.17.0 without a version option, and I also tried to downgrade using "pip install --ignore-installed --upgrade tensorflow==2.15.0".
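When chasing path issues like this, it can help to confirm exactly which cuDNN library the dynamic loader resolves, rather than only echoing the variables. A diagnostic sketch (the test_cudnn binary is the one built in the video; paths are whatever your setup uses):

```shell
# Show the library search path this shell session is using
echo "$LD_LIBRARY_PATH"

# Ask the dynamic linker cache which libcudnn copies it knows about
ldconfig -p | grep libcudnn

# Show which shared libraries the test binary would actually load
ldd ./test_cudnn | grep -i cudnn
```

If ldd resolves a different libcudnn than the one you installed, the registration errors at import time often come from that mismatch.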
Thanks a lot. But I don't understand something important! We install many things in base with Miniconda, which will be isolated from TF. The TF environment doesn't have all the elements of base. Why don't you make a clone, or can you explain what I'm not understanding in your list of commands? Thanks a lot.
At 17:11 I get a "nvcc: command not found" error, and I could not solve it even with your tutorial :(
Hello, very useful video. Thank you. I want to run Spyder, can you help me with this? Also I get a segmentation fault error when running the model, do you have any idea about this?
Hi, sorry I don’t know about segmentation error. I’ll test spyder and let you know.
What is cuDNN, CUDA Toolkit, and TensorRT version if I want to install Tensorflow 2.15.1?
Can I use this method on the actual Ubuntu 24.04 instead of the WSL? Also, may I know how can I link the installed tensorflow to the jupyter notebook in VS code instead of the actual jupyter notebook?
Hi, I made one.
ua-cam.com/video/1Tr1ifuSh6o/v-deo.html
@@TechJotters24 You're a legend, man. Appreciate it a lot!
Thank you for this comment. Because of this, he made a new video. You are such a LEGEND
Thank you very much
If I install the latest version of tensorflow without tensorrt, would I later be able to use the downgraded versions of cuda and tensorflow for tensorrt? Also, the latest version of tensorflow uses cuda 12.3 while pytorch is 12.1 or 12.4, so can I have multiple cuda versions? I'm unfamiliar with these technologies because I'm just starting out.
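On the multiple-CUDA-versions question: yes, several toolkits can coexist under /usr/local, and you can point each shell at the one you want. A sketch, assuming toolkits were installed at the default paths (the version numbers here are examples):

```shell
# Toolkits live side by side, e.g. /usr/local/cuda-12.1 and /usr/local/cuda-12.3.
# Select one for the current shell before building or running:
export CUDA_HOME=/usr/local/cuda-12.1
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"

# nvcc --version   # should now report the selected toolkit (12.1 here)
```

Also worth knowing: recent pip wheels of TensorFlow and PyTorch bundle their own CUDA runtime libraries, so the framework's CUDA version does not necessarily have to match the system toolkit used for compiling.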
may i ask a question, how to connect the environtment to vscode? like using ipynb on vscode
Hi, register your kernel with ipykernel. it should appear to vscode.
check this video, i showed the process with wsl. it should work with other options also.
ua-cam.com/video/Opi8zwJZ_8I/v-deo.html
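The kernel-registration step mentioned above looks roughly like this (a sketch; the environment name "tf-gpu" is just a placeholder, use whatever you called yours):

```shell
# Inside the activated conda/venv environment that has TensorFlow:
pip install ipykernel

# Register it as a named Jupyter kernel so editors can find it
python -m ipykernel install --user --name tf-gpu --display-name "Python (tf-gpu)"
```

After this, the VS Code notebook kernel picker should list "Python (tf-gpu)" alongside the default interpreters.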
Thanks for this valuable video, it works for me!!! I only had an issue using Matplotlib and got this error: ValueError: object __array__ method not producing an array. I don't know if the error is related to the CUDA versions.
Thank you so so so much for this video. It also works for Windows 10. By the way, do you know if it is possible to use this with VS Code?
Yes. You can use vscode.
Can you please help with the links ? That github link attached is asolutely confusion one
Thank you so much for the video. I have a question--I have followed your guide exactly, but I get one extra warning message when I import tensorflow in jupyter:
oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Is this a cause for concern?
Hi, it's generally not a cause for concern. You might see slightly different numerical results due to floating-point round-off errors from different computation orders. This is typical in many high-performance computing scenarios and usually not significant.
Temporarily within Jupyter Notebook:
import os
os.environ['TF_ENABLE_ONEDNN_OPTS'] = '0'
import tensorflow as tf
Permanently in Your Environment: Add the below to .bashrc and after adding it, run source ~/.bashrc
export TF_ENABLE_ONEDNN_OPTS=0
Thank u soooo much
Thank you soooo much, It's clear and sharp to the point
Can you help me with how to use the TF environment for a local Windows folder in VS Code?
You are welcome. Sure I’ll help you.
Thank you
Having a little trouble installing cudnn. I used the same cudnn tar and followed the commands exactly, but cudnn initialization keeps on failing :/
Do you have any idea what could be going wrong?
What type of error you are getting?
@@TechJotters24 Running `import torch` and `torch.cuda.is_available()` creates: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 2: out of memory (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
Fixed it by removing one of three GPUs. Not sure why cudnn & the python commands above failed with 3 GPUs but runs with 2 GPUs. Could be a problem with Windows or WSL2 or a hardware limitation, since I've seen some Linux servers with 4+ GPUs
Would love to hear if you have any thoughts as to why :)
@@TechJotters24 Running `import torch` and `torch.cuda.is_available()` creates the following error: "UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 2: out of memory"
Fixed the error by removing 1 of 3 GPUs. Curious why cudnn fails with 3 GPUs but works with 2 GPUs. Could be a hardware limitation or an issue with WSL2/Windows. I've noticed that my model auto-loader (exl2 running on Win11) can never utilize all 3 GPUs simultaneously for large models, while I've seen Linux servers handle 4+ GPUs before.
Wondering if you have any idea what the root cause is! I'd love to figure it out 😃
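Not a root-cause answer to the three-GPU question above, but one way to test whether a specific card is the problem without physically removing it is to mask devices per process. A hedged diagnostic sketch (device indices follow nvidia-smi ordering; assumes PyTorch is installed):

```shell
# Expose only the first two GPUs to CUDA for this process
export CUDA_VISIBLE_DEVICES=0,1
python3 -c "import torch; print(torch.cuda.device_count(), torch.cuda.is_available())"

# Then try the other pairings (0,2 and 1,2) to see whether one
# particular card consistently triggers the out-of-memory error
```

If every pairing works but all three together fail, it points more toward a WSL2/driver resource limit than a faulty card.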
Hello mister, I want to run some AI programs like Stable Diffusion and LLMs locally, but my GPU is very low-end: an Nvidia GeForce GTX 1650 with only 4 GB of RAM.
Can this Nvidia driver setup with WSL help me run this kind of AI software on Windows 11? Or what is the real benefit of implementing this?
Thanks a lot for your help.
Hi, I don’t think you need to do wsl. Try lm studio on windows. Smaller models will run perfectly and it’ll also suggest you which models are compatible with your system.
Great job! Does Tensorflow 2.16.1 support Cuda 12.1, or does it require Cuda 12.3?
The latest CUDA toolkit you can use for TensorFlow 2.16.1 is 12.3. But in that case you can't use TensorRT. If you want TensorRT support, the latest CUDA toolkit you can use is 12.1, because TensorFlow supports only TensorRT 8.6, and TensorRT 8.6 only works with CUDA Toolkit 12.1. It's a chain reaction :D
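One way to confirm which CUDA and cuDNN versions your installed TensorFlow wheel was actually built against, rather than guessing from compatibility tables (a sketch, assuming TensorFlow imports cleanly; on CPU-only builds these keys may be absent, hence the .get calls):

```shell
python3 -c "import tensorflow as tf; info = tf.sysconfig.get_build_info(); \
print(tf.__version__, info.get('cuda_version'), info.get('cudnn_version'))"
```

Matching the reported cuda_version against the toolkit you install system-wide removes most of the guesswork in the chain described above.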
@@TechJotters24 Sir, I followed the procedure with TF 2.16.1 and CUDA 12.1; unfortunately it isn't identifying the GPU device. I switched to TF 2.15 in a new env. It identifies the device, but with many missing dependencies like cublas etc.
@@TechJotters24 sir it works I don’t know the reason i just restart my system and make new conda env and just install tf and pytorch thank you soo much sir 😊
@@waqarkai great!!!
Hello, I followed all of your procedures, but I'm installing CUDA 11.8 and cuDNN 8.6 because, based on TensorFlow's website, they should be compatible with my RTX 3060.
However, I hit an issue initializing cuDNN. After running "./test_cudnn", the error below pops out. When I use "ls" I can see the file, but it couldn't be opened. Do you know how to solve this?
./test_cudnn: error while loading shared libraries: libcudnn.so.8: cannot open shared object file: No such file or directory
Thank you sir
I tried to figure it out, but I don't have this GPU available to test. I am really sorry.
@@TechJotters24 No problem; although that didn't work out, I eventually got it done. However, once the GPU was usable, I couldn't import any other Python libraries like cv2 or mediapipe (pip installed them, and pip list also shows them). So I guess I just decided to give up lol. But your videos are great, keep it up.
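For anyone else hitting the libcudnn.so.8 "cannot open shared object file" error above: it usually means the file exists but the dynamic linker doesn't search its directory. A hedged sketch of the usual fixes (paths assume the default /usr/local/cuda-11.8 layout from that comment):

```shell
# Check whether the linker cache knows about libcudnn at all
ldconfig -p | grep libcudnn

# Option 1: add the CUDA lib directory to the search path for this shell
export LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64:${LD_LIBRARY_PATH:-}

# Option 2: register the path system-wide and refresh the linker cache
echo "/usr/local/cuda-11.8/lib64" | sudo tee /etc/ld.so.conf.d/cuda.conf
sudo ldconfig

# ./test_cudnn   # retry the initialization test
```

Option 2 survives new shells; Option 1 only affects the current session unless added to ~/.bashrc.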
Hi, I'd like to pay for your services regarding the use of TensorFlow for AI modeling. May I ask how I can reach out to you?
thank you
You are welcome!!!
It's showing "bad substitution" after I added the TensorRT path. This is the error: -bash: :${LD_LIBRARY_PATH}: bad substitution
Hi, i believe the best solution is to check the directory first and remove all the bashrc/Path entries related to this and configure again.
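The "bad substitution" error above typically comes from a malformed export line in ~/.bashrc, e.g. a stray character inside or before the ${...} expansion. A corrected line would look like this (the TensorRT path itself is an assumption; use wherever you extracted your archive):

```shell
# In ~/.bashrc: note the exact ${VAR} syntax and the colon separator.
# The :- default guards against the variable being unset.
export LD_LIBRARY_PATH=/home/user/TensorRT-8.6.1/lib:${LD_LIBRARY_PATH:-}
```

After editing, run source ~/.bashrc and re-check with echo "$LD_LIBRARY_PATH".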
After those installation ,How to use tensorflow in VScode ?
Hi @charliesj0129, please check this video - ua-cam.com/video/BLjuR_eAiFw/v-deo.html
Somehow I still get the cuDNN error :)
but the TensorRT one has been solved, so thank you
i pray that u r reserved a seat in heaven bro ,u saved me a ton :)
I LOVE YOU, MAN
this is classic
I wish I had a million accounts so I could give your video a million likes.