Tensor Processing Units: History and hardware
- Published Feb 7, 2025
- In this episode of AI Adventures, Yufeng Guo goes through the logistics and history of TPUs (Tensor Processing Units) and how they differ from CPUs and GPUs.
In-Datacenter Performance Analysis of a Tensor Processing Unit → goo.gle/319B2DJ
Check out the rest of the Cloud AI Adventures playlist → goo.gl/UC5usG
Subscribe to get all the episodes as they come out → goo.gl/S0AS51
#AIAdventures
Excellent explanation of the TPU; the letter-printing example really fixed the difference between CPU, GPU & TPU in my mind.
very awesome explanation
Thanks for the video!
nice explanation, thanks
Here because of Pixel 6 announcement. 😂
Me 2!
Great analogy of CPU, GPU and TPU =)
yeah when I heard the analogy I was like "aaah"
very nice explanation for beginners like me
Glad to hear that
TPU? More like "Totally great information for you." Thanks for sharing!
There is a deep bass that can be heard with nice headphones. AC?
Thanks!
thank you :) but
where's the link plz?
I have seen it last year and came here again for Pixel 6
Please explain this quantization technique, where Google mapped 32-bit values down to 8 bits in TPU v1.
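Roughly: quantization picks a scale and zero point that map a range of 32-bit floats onto 8-bit integers, trading a little precision for much cheaper arithmetic and memory. A minimal sketch in plain Python of the common affine scheme (the function names and rounding details here are illustrative, not TPU v1's actual implementation):

```python
def quantize(values, num_bits=8):
    """Affine quantization: map a float range onto [0, 2^num_bits - 1]."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    # Scale: how many float units each integer step represents.
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid 0 for constant inputs
    # Zero point: the integer that represents float 0.0.
    zero_point = round(qmin - lo / scale)
    # Round each value to the nearest integer step, clamped to the int range.
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the quantized integers."""
    return [(qi - zero_point) * scale for qi in q]

# Usage: quantize three floats in [-1, 1] to 8-bit integers.
q, scale, zp = quantize([-1.0, 0.0, 1.0])
print(q, zp)   # integers in [0, 255], with 0.0 mapped to the zero point
```

The round trip loses at most about half a step of precision (`scale / 2` per value, plus clamping at the range edges), which is why 8-bit inference works well for neural nets whose activations fall in a known, bounded range.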
My Pixel 6😍
So is the TPU a replacement only for the GPU, or a replacement for both the CPU and GPU?
I think a TPU is more like a piece of hardware geared towards certain applications such as deep learning. As there should still be a need for both general computation and computer graphics, I think CPUs and GPUs should be here to stay.
TPUs are geared toward neural network machine learning.
I use Google's TPU cloud computing to process multimodal AI image generation.
Who's watching here after Google announced they will be using a custom-designed Tensor SoC?
any higher precision?
Amazing! Which other tasks can be a target for a new specific processor?
Why are there weird sub-bass / low notes in this video?
pixel 6 will use yess
Which do you think is the best for TensorFlow training models: GPU or TPU?
Well, you can try this yourself: use a Colab notebook, where you can select GPUs like the T4, V100, or A100, or a TPU. I tried this experiment (but with torch, not tf, and for inference only) and got pretty disappointing results. The TPUs had only 16 GB of memory and were slower than the slowest GPU. Maybe TPUs use fewer watts, maybe Colab instances have a suboptimal config, maybe torch performance is bad; anyway, an interesting experience :)
Apple silicon joining the room.
and 6 pro
This video is getting too few likes and views given the AI hype now
i was hoping to actually see some hardware, but no, just boring graphs. how does this video get 44,000 views while our videos get 230 views over 3 years