Thank you 🙂
YOU ARE SUCH A LEGEND MAN!
Thanks
Hello. Thank you for your response. I successfully got TensorFlow 2.16.1, CUDA Toolkit 12.3.4.1, cuDNN 12.8.9.7.29, and Python 3.11.9 working together smoothly. I am pursuing this as a hobby and am relatively new to the Linux environment. Yes, I am finding it a bit challenging, but it is enjoyable. Currently I am encountering a common error: "could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node. Your kernel may have been built without NUMA support." This error or warning has not posed any obstacle to model training or testing. I installed the system on Ubuntu 24.04. Best regards.
Great. You can safely ignore this NUMA warning!
@TechJotters24 I couldn't fix the NUMA warning itself, but there was an earlier error that prevented TensorFlow from working at all. I set a parameter to zero, and TensorFlow started working. When I asked ChatGPT, it told me I needed to enable a NUMA option in the BIOS, but I checked my BIOS and couldn't find anything like that, so this needs more research. NUMA seems to be related to the motherboard, but I'm not sure how accurate that is.
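The "set a parameter to zero" step described here usually means writing 0 into each PCI device's numa_node file, which reports -1 when the kernel has no NUMA information. A minimal Python sketch of that workaround (the function name and structure are mine, not from the thread; writing to the real sysfs tree requires root, and the change does not survive a reboot):

```python
from pathlib import Path

def fix_numa_nodes(sysfs_root="/sys/bus/pci/devices"):
    """Write 0 into every numa_node file that reports -1 (no NUMA info).

    This silences TensorFlow's "could not open file to read NUMA node"
    warning. Run as root to modify the real sysfs tree; the values
    reset to -1 on reboot.
    """
    fixed = []
    root = Path(sysfs_root)
    if not root.exists():
        return fixed
    for node_file in root.glob("*/numa_node"):
        try:
            if node_file.read_text().strip() == "-1":
                node_file.write_text("0\n")
                fixed.append(str(node_file))
        except OSError:
            pass  # without root, sysfs writes fail; rerun with sudo
    return fixed
```

The same effect is often achieved with a one-line shell loop over /sys/bus/pci/devices/*/numa_node using sudo tee; either way it only hides the warning, it does not add real NUMA support.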
I think it's better to leave NUMA alone. You did a great job. Could you share the process you used to configure TensorFlow? That would be great.
Thanks for the good video! But I have a question: how can I install CUDA 12.4 on my Ubuntu 24.04, when the official site only offers a version for Ubuntu 22.04? I need CUDA 12.4 specifically because it is the latest version that PyTorch works with.
Will TensorFlow and PyTorch work with these versions?
Hi, PyTorch will work, but TensorFlow will not.
But I found a way to run TensorFlow with conda, which will install all the GPU dependencies and run TensorFlow on the GPU. Just remember, it won't use the latest versions:
conda create -n tf-gpu tensorflow-gpu
conda activate tf-gpu
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"