Do you really need a GPU or NPU for AI?

  • Published 30 May 2024
  • There's no avoiding AI and LLMs this year. The technology is being stuffed into everything, from office software to phone apps. Nvidia, Qualcomm, and others are happy to push the notion that this machine learning must be performed on an accelerator, be it a GPU or an NPU. Arm this week made the case that its CPU cores, used in smartphones and more throughout the world, are ideal for running AI software.
    For this latest Kettle, our journalists discuss the merits of running AI workloads on CPUs, NPUs, and GPUs; the power and infrastructure needed to do so, from personal devices to massive datacenters; and how this artificial intelligence is being used - what with Palantir's AI targeting system being injected into the entire US military.
    Joining us this week is your usual host Iain Thomson, plus Chris Williams, Tobias Mann, and Brandon Vigliarolo.
  • Science & Technology

COMMENTS • 4

  • @ps3301 26 days ago +1

    Most people don't seem to know that if you play a game that requires a lot of smart AI components, the NPU will take on that workload from the GPU and make it run smoother.

  • @petersuvara 9 days ago

    The NPU feels like putting an ASIC to process Bitcoin hash functions on your chip. Agree on all points here.

  • @michaelonline11 18 days ago

    You don't need one; the CPU can do it all. It's just that it wastes a lot of energy, which would deter the development of AI in the long run.

  • @briancase6180 13 days ago

    An NPU wastes a lot of die area? Have you looked at the die plot of an Apple M-series chip? You are just wrong with that statement. The smaller the process, the more difficult the routing? What? Do you understand that process progression is typically accompanied by an increase in the number of metal layers? Memory bandwidth? Again, have you looked at Apple M-series chips? They have plenty of memory bandwidth, and they use a very cost-effective design and implementation strategy. The Apple M4, the lowest-end member of the coming M4 family, increased memory bandwidth by 20% to keep pace with the clock rate. It's about 120 GB/s. That's a lot for a low-end chip.
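
The ~120 GB/s figure in the last comment can be sanity-checked from the memory specs. A minimal sketch, assuming the publicly reported configuration of LPDDR5X-7500 parts on a 128-bit bus (these numbers are assumptions, not stated in the thread):

```python
# Peak DRAM bandwidth = transfer rate per pin * bus width in bytes.
# Assumed specs (not from the page): Apple M4 uses LPDDR5X-7500
# on a 128-bit memory bus.
transfer_rate_mts = 7500                    # mega-transfers per second
bus_width_bits = 128
bytes_per_transfer = bus_width_bits // 8    # 16 bytes per transfer

bandwidth_gbs = transfer_rate_mts * bytes_per_transfer / 1000  # MB/s -> GB/s
print(f"{bandwidth_gbs:.0f} GB/s")          # prints "120 GB/s"

# For comparison, LPDDR5-6400 on the same bus width (as in prior
# base M-series chips) would give:
prev_bandwidth_gbs = 6400 * 16 / 1000       # 102.4 GB/s
print(f"uplift: {bandwidth_gbs / prev_bandwidth_gbs - 1:.0%}")
```

The computed uplift comes out to roughly 17%, in the same ballpark as the ~20% the commenter mentions.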