Running inference with TensorFlow models using the i.MX 8M Plus NPU

  • Published 1 Aug 2024
  • Here we talk about:
    How to quantize a model to run it on an AI accelerator / NPU (a code sketch follows this description).
    We also explain what quantization is and why we need it.
    We compare the Gyrfalcon to the i.MX 8M Plus.
    And we ask how many TOPS we actually need.
    Is training on the edge possible? And what about FPGAs?
    For the hardware, check out this link:
    www.phytec.de/produkte/develo...
    My Medium landing page can be found here:
    / janwerth
    Our company's main page:
    phytec.de
    Workshops:
    www.phytec.de/unternehmen/sch...
    www.phytec.de/unternehmen/kal...
    #nxp #embeddedsystems #artificialintelligence #embeddedsystemtutorial #edgecomputing #machinelearning #machinelearningtutorial #phytec
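
    As a companion to the quantization topic above, here is a minimal sketch of post-training full-integer quantization with the TensorFlow Lite converter, which is the usual route for getting a TensorFlow model onto an integer-only NPU such as the one in the i.MX 8M Plus. The model file name, input shape, and calibration data below are assumptions for illustration, not the exact setup from the video.

```python
# Minimal sketch: post-training full-integer quantization with the TFLite converter.
# Assumption: a trained Keras model saved as "my_model.h5" with 224x224 RGB inputs.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("my_model.h5")

def representative_dataset():
    # Calibration samples shaped like the model input; in practice use real
    # training/validation images instead of random data.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization: weights and activations become int8, and the
# model inputs/outputs are uint8, so the NPU does not fall back to float ops.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("my_model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

    The representative dataset matters: the converter uses it to estimate activation ranges, so unrepresentative calibration data can noticeably hurt accuracy on the NPU.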
  • Science & Technology

COMMENTS • 1

  • @lsill2530 (a month ago)

    Does the Pollux Board support on target benchmarking using the NXP eIQ Portal?
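
    Regarding on-target benchmarking: independent of the eIQ Portal, a rough latency check can be run directly on the board with the TFLite runtime and the VX delegate that NXP BSPs ship for the i.MX 8M Plus NPU. This is only a sketch; the delegate path and model file name are assumptions and may differ on your image.

```python
# Minimal sketch of a manual on-target latency check on the i.MX 8M Plus NPU.
# Assumptions: tflite_runtime is installed on the board, the VX delegate lives
# at /usr/lib/libvx_delegate.so, and the quantized model from above is present.
import time
import numpy as np
import tflite_runtime.interpreter as tflite

delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")  # assumed path
interpreter = tflite.Interpreter(model_path="my_model_int8.tflite",
                                 experimental_delegates=[delegate])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])

# The first invoke includes NPU warm-up (graph compilation), so run it once
# before timing the steady-state inferences.
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

runs = 100
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
elapsed = time.perf_counter() - start
print(f"average inference time: {elapsed / runs * 1000:.2f} ms")
```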