Perfectly written, edited and presented script. You're a star.
That's a virtual robot, my guy, you've been fooled.
The TPU part of the explanation was not clear at all!
A TPU is hardware built specifically to accelerate TensorFlow compute.
That's what I thought!
@@Desu_Desu lmaoooooo
2:50 You're not building a data generator with tf logging; that just produces lines for the log output and doesn't come anywhere near your model.
3:27 -> 2019: AI can't be the Bard yet, 2023: Introducing Google Bard
Hi! I tried to test the GPU as shown in the video and IT DOESN'T WORK.
The first error is related to the "tf-nightly-gpu-2.0-preview" package, which is not found by pip in Colab, so there is no TensorFlow installation. The associated error is the following: ERROR: No matching distribution found for tf-nightly-gpu-2.0-preview
To fix the download I checked the TensorFlow documentation ( www.tensorflow.org/install/gpu ) and used their installation recommendation: "!pip install tf-nightly". That fixes only the download, because it still doesn't find any GPU.
Does anyone know how to completely fix it?
1. Make sure your pip is upgraded to the latest version.
2. Given this video is a year old now, follow the instructions here: www.tensorflow.org/install/pip
3. Use the release version if you just want to test this code and don't require it in production.
4. Make sure to select GPU in the menu: Runtime --> Change Runtime Type. (A quick sketch of these steps is below.)
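To make those steps concrete, here is a minimal sketch of how a single Colab cell might look today; the plain tensorflow package name is an assumption based on the current install page, so double-check the link above for the latest recommendation:

```python
# Colab cell: upgrade pip, then install the current TensorFlow release
# (recent releases bundle GPU support into the main package).
!pip install --upgrade pip
!pip install --upgrade tensorflow

import tensorflow as tf

# With Runtime -> Change Runtime Type set to GPU, this should print a device
# name like '/device:GPU:0'; an empty string means no GPU was attached.
print("TF version:", tf.__version__)
print("GPU device:", tf.test.gpu_device_name())
```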
Instead of installing "tf-nightly-gpu-2.0-preview", install "tensorflow-gpu" using "!pip install tensorflow-gpu". For the rest of the MNIST code, proceed exactly as written in the video.
I wonder what Shakespeare would have thought about this.
May I know why you used an input_dim value of 256 in the Embedding layer?
Hi, my session crashes every time in Google Colab because the entire RAM is used (this is the error I get). I increased the RAM to 25 GB but still have the same problem. Could you please help me? Thank you.
I know it's too late, but is it still possible to share the notebook for the GPU one? I only see the notebook for the TPU one.
Can anyone clearly explain when to use a GPU and when to use a TPU?
The biggest problem is that if I choose TPU or GPU, the RAM depletes faster and the system crashes in Colab.
For a couple of days I have had only 12.6 GB of RAM available, although I have Google Colab Pro+.
Important to mention: I always run the following lines of code and get the same answer.
import tensorflow as tf
tf.test.gpu_device_name()
/device:GPU:0
Is there some code I need to run in order to get all the RAM that is mentioned in the Pro+ description?
Is there a way to know how much RAM I have used and how much I have left?
I'm getting a little frustrated that all the processes I run keep going down.
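In case it helps, here is a minimal sketch (assuming psutil, which normally ships with the Colab runtime) for checking how much RAM the runtime actually reports; as far as I know the Pro+ allocation itself is assigned by Colab and cannot be forced from code:

```python
import psutil

# Report total, available, and used RAM for the current Colab runtime, in GB.
mem = psutil.virtual_memory()
print(f"Total RAM:     {mem.total / 1e9:.1f} GB")
print(f"Available RAM: {mem.available / 1e9:.1f} GB")
print(f"Used RAM:      {mem.used / 1e9:.1f} GB")
```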
What do you need to change to adapt your code for TPU training?
If your model is built with TensorFlow or another framework that supports TPU acceleration, you shouldn't need to change anything. Build your model as usual and the TensorFlow backend will handle the optimization. You should still add the code block mentioned in this video somewhere at the beginning of your code, though, just to confirm whether or not your notebook or script is properly registering the TPU (a sketch of that check for current TF versions follows below).
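Since the tf.contrib API shown in the video is gone, here is a minimal sketch of that TPU check for TF 2.x; treat it as an assumption to adapt, not the video's exact code:

```python
import tensorflow as tf

# Try to locate and initialize the Colab TPU; fall back gracefully if none is attached.
try:
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver()  # auto-detects the Colab TPU
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
    print("TPU registered, replicas:", strategy.num_replicas_in_sync)
except ValueError:
    print("No TPU found; the notebook will run on CPU/GPU instead.")
```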
Here is a great blog post, "Keras on TPUs in Colab": medium.com/tensorflow/tf-keras-on-tpus-on-colab-674367932aa0
So when should one pick one over the other?
Even if I set the runtime to GPU, it doesn't execute faster. It's way too slow, and after some time it crashes. I guess it waits until a GPU is made available to your code. Any solution to make it run faster and utilize the GPU effectively?
Do you need to be in the States for this to work?
I have an AMD GPU. Is there a way I can use the latest version of TensorFlow?
ever tried plaidml?
@@deneb6139 even I have the same issue.
Can you please elaborate on what PlaidML is and how to use it?
I hope you'll reply. Thanks in advance.
Currently TF GPU support is only available for NVIDIA GPUs.
@@hmm7458 Because their GPUs have tensor cores.
Great video. Thank you.
Thanks for the video, but in the TPU part, tf.contrib.tpu has been deprecated.
We can only use local resources, so testing in the cloud is extra work for us.
Is it possible to run TPU calculations without Colab, for example from PyCharm?
You'd probably need to use Google Cloud to access a TPU then.
When to use GPU and when to use TPU?
If you are working on a less complex model, use a GPU; if you're working on a large, complex model like a GAN or a Transformer, use a TPU.
It's giving me the error 'SystemError: GPU device not found'.
Any help?
It means your runtime doesn't have a GPU attached; make sure GPU is selected under Runtime --> Change Runtime Type (see the sketch below).
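Here is a minimal sketch of a check that avoids the hard SystemError, assuming a TF 2.x runtime; the device string is whatever TensorFlow reports, not something you set yourself:

```python
import tensorflow as tf

# List any GPUs the runtime can see; an empty list means the notebook is CPU-only.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print("Found GPU(s):", gpus)
    device = "/GPU:0"
else:
    print("No GPU found; falling back to CPU.")
    device = "/CPU:0"

# Place work explicitly on the chosen device.
with tf.device(device):
    x = tf.random.normal((1000, 1000))
    print(tf.reduce_sum(tf.matmul(x, x)))
```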
So for everyone who is wondering about the TPU: it is just a special chip designed for computing with tensors.
Please add a link to the source code.
Here it is! bit.ly/2IEIaSV
@@TensorFlow How about the other notebook, "Tensorflow with CPU vs. GPU"? There's this notebook but it's not quite the same: colab.research.google.com/notebooks/gpu.ipynb
How many hours do we have for each account?
12 hours per session.
Kaggle is another good resource
Great. Please hurry up and create the migration video.
It's Paige! So cool! I am a fan!
Can a TPU-generated model file run on a GPU?
How do you create a deep learning model UI without a GPU on a local machine?
So if I'm using the super expensive hardware for free, Google owns my soul or something, right?
Sir, my concern is not whether Google is on our side; my greatest concern is to be on Google's side, for Google is always right.
I'm impressed by the deepfake here. Really astonishing.
I have no idea what's going on, but very cool.
Excellent
🔥 🔥
A NN is just matrix math; no wonder a GPU works well (rough timing sketch below).
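As a rough illustration of that point, here is a minimal timing sketch, assuming a Colab runtime with a GPU attached; the actual speedup depends entirely on the hardware:

```python
import time
import tensorflow as tf

def time_matmul(device, n=4000):
    # Multiply two n x n matrices on the given device and report wall-clock time.
    with tf.device(device):
        a = tf.random.normal((n, n))
        b = tf.random.normal((n, n))
        start = time.time()
        c = tf.matmul(a, b)
        _ = c.numpy()  # force execution to finish before stopping the clock
    print(f"{device}: {time.time() - start:.3f} s")

time_matmul("/CPU:0")
if tf.config.list_physical_devices("GPU"):
    time_matmul("/GPU:0")
```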
I train my models on GpuClub com and don't worry about maintaining these huge machines. No investment is the best investment...
check gpuclub com, I train my models there
@@DanOneOne ????
LEONATO.
Since thou fallst upon a summon, return 0.
Please provide an email ID for queries. I do not have an NVIDIA graphics card; please tell me what I can do to install the TensorFlow library.
I love you, ma'am. Your way of teaching is really nice. I also follow you on Udacity for machine learning.
We needed Edge TPUs 2 years ago. Invent a time machine, then deliver your promise. Stop giving us crap on YouTube and GitHub.
We also can only use Edge and must compute without the cloud or internet, from within a closed fog network, so all the IoT tooling is built wrong for us.
I just want to be able to allocate more than 2GB per tensor (for high resolution image classification, on a pretrained feed-forward network, without having to use 'super-resolution' image slicing) ... and a GPU with 256GB of VRAM...
DynamicWebPaige
You did not share the notebooks. Also, there's no comparison shown between TPU and GPU. Overall, not a good video.
Everyone's like: wtf, I learned absolutely nothing new.
Here it is! bit.ly/2IEIaSV
@@TensorFlow thanks!
doesn't work for me
Terrible explanation of the TPU. I understand this is just an introductory video, but it could have been better if you had zoomed in on the code so that it is viewable.