I built a MONSTER AI Pi with 8 Neural Processors!

  • Published Sep 27, 2024

COMMENTS • 226

  • @electrofreak0
    @electrofreak0 3 months ago +266

    Can't wait for a decade from now when they're packing 1024 TOPS into "entry-level" devices claiming "you definitely need all this power for current models"

    • @heblushabus
      @heblushabus 3 months ago +23

      1 BOPS?

    • @Level2Jeff
      @Level2Jeff 3 months ago +86

      640 TOPS ought to be good enough for anyone

    • @tuqe
      @tuqe 3 months ago +6

      @@heblushabus T = trillion, so B would be billion. Next step is Pflops for Petaflops

    • @heblushabus
      @heblushabus 3 months ago +14

      @@tuqe oh, right. bops sounded funny tho. so, POPS?

    • @Draggeta
      @Draggeta 3 months ago +1

      @@tuqe isn't T tera in this case?

  • @jonathantribble7013
    @jonathantribble7013 3 months ago +54

    It's nice to know that you could run multiple neural networks on independent NPUs! Like one for facial expressions, another for voice recognition, and another for text-to-speech!
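The one-network-per-NPU idea can be sketched in plain Python. The models and device bindings here are simulated stand-ins (not a real NPU API); on real hardware each entry would be a model compiled for, and loaded onto, its own accelerator so the three tasks never contend:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical task -> model mapping; each lambda stands in for one network
# pinned to its own NPU.
MODELS = {
    "face": lambda frame: f"expression({frame})",   # facial expressions
    "voice": lambda clip: f"transcript({clip})",    # voice recognition
    "tts": lambda text: f"audio({text})",           # text-to-speech
}

def run_all(inputs):
    # One worker per NPU: all three networks process their inputs in parallel.
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = {name: pool.submit(model, inputs[name])
                   for name, model in MODELS.items()}
        return {name: f.result() for name, f in futures.items()}

print(run_all({"face": "frame0", "voice": "clip0", "tts": "hello"}))
```

With real accelerators, each worker would hold its own device handle, so no network steals compute from another.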

  • @tinkerscustom9568
    @tinkerscustom9568 3 months ago +42

    i would love a video on the home assistant power consumption!

  • @Rushil69420
    @Rushil69420 3 months ago +22

    Oh thank god, I’ve been itching ever since you showed that b-roll

  • @RaineyPeng
    @RaineyPeng 3 months ago +27

    My favorite part of this video is definitely when the box gets identified as a cell phone and he holds it up to his ear 😅

  • @realandrewhatfield
    @realandrewhatfield 3 months ago +7

    OMG!!! Who are you going to be from now until the next video?!?! End of an era...

  • @TheJonathanc82
    @TheJonathanc82 3 months ago +3

    Never stop doing what you do Jeff. Love the content, love the experimentation.

  • @markaphillips14
    @markaphillips14 3 months ago +4

    Jeff!!! The increase in content per week has been amazing. Don't overdo it but man I'm loving it

  • @DiamondMaster115
    @DiamondMaster115 3 months ago +5

    I had no idea this channel existed, this is awesome!

    • @Level2Jeff
      @Level2Jeff 3 months ago +3

      and now you do, ha! this is the channel where things get crazy

    • @Biedropegaz
      @Biedropegaz 17 days ago

      jeff sometimes runs low on iq, because from time to time he posts videos about making a fatality on a pc

  • @SHAINON117
    @SHAINON117 1 month ago

    This is perfect for having multiple models working on different tasks then feeding all their results to the AI that monitors them 😊

  • @zepesh
    @zepesh 3 months ago +54

    That was fast

    • @inferno14142
      @inferno14142 3 months ago +3

      I was about to say that

    • @Level2Jeff
      @Level2Jeff 3 months ago +18

      My AI predicted it

    • @neb_setabed
      @neb_setabed 3 months ago +5

      that's what she said!

    • @harriet-x.x
      @harriet-x.x 3 months ago

      @@Level2Jeff u okey? u look a little red :p

  • @higon99
    @higon99 3 months ago +1

    Oh, man. That's a visually menacing Pi XD
    Yesterday, you tried and SPAGHETTIBLY FAILED at chaining NPUs, and now this. lol
    Thank you for the attempts, otherwise I would have tried myself. I think 2 NPUs can easily work with 1 neural network running on each NPU. This kind of configuration can realize many real-world applications I have been dreaming of for so many years. Thank you again.

    • @Level2Jeff
      @Level2Jeff 3 months ago +1

      Definitely! I think on the Pi 5 at least, that would probably be the ideal number of NPUs. You could stretch it to 4 okay too, but at that point the cost/build could point you to something a bit beefier like Jetson Orin.

  • @ewasteredux
    @ewasteredux 3 months ago

    Bravo Jeff! That was a lot of work on your part. Again, congrats and thanks for all the hard work you do for us!

  • @BrianMaddox
    @BrianMaddox 3 months ago +2

    I appreciate that there are so many different TPUs/NPUs on the market, I’m just frustrated that we’re all beholden to nVidia when it comes to actually training models and running a lot of things.

    • @Level2Jeff
      @Level2Jeff 3 months ago +3

      Ditto. Wish at least AMD could offer something that would take the bottom out on either price or efficiency, but right now it is what it is :(

    • @BrianMaddox
      @BrianMaddox 3 months ago

      @@Level2Jeff I’ve got a used Tesla P40 with a water cooler in addition to another rtx 2060. Now that Intel works with Tensorflow and PyTorch, I’ve seriously considered just getting a 16 gig Arc 770 and paying for cloud computing if I need to train a model that needs more memory.

  • @Bill_the_Red_Lichtie
    @Bill_the_Red_Lichtie 3 months ago +3

    I love the crazy/nuts thing that is this set up! And, even if the "Coral Dual TPU" only shows up as one of them, I'll bet that it is still cheaper and faster than the USB version. Now, where was that AM power meter that your Dad has . . . ? ;-)

  • @packerbackermk1
    @packerbackermk1 25 days ago

    For anyone with these dual Google coral chips, if you would like to use both corals you can with adapters. For example the ones for sale on MBS-Shop.

  • @dfgdfg_
    @dfgdfg_ 3 months ago +1

    "I've created a monster!
    No one wants to see Marshall no more,
    They want Jeff.
    I'm like chopped liver"

  • @MaxHeadroomGPT
    @MaxHeadroomGPT 3 months ago

    Jeff, I too would love to see a video on how to setup a Home Assistant Dashboard for Power Monitoring. You sir, are a Wizard!

  • @waynefilkins8394
    @waynefilkins8394 2 months ago

    That's kind of like the pi supercomputers where they were hooking like 25-50 Pis together and calling it a supercomputer, but you have to make custom software to support the thing and take advantage of the whole cluster or w/e they call it.

  • @jobasti
    @jobasti 3 months ago +1

    @Level2Jeff - Power Monitor Dashboard all the things - YES PLEASE

  • @awetmore
    @awetmore 3 months ago

    The trick to using those NPUs in parallel will be building a pipeline across them, where each NPU is supporting a subset of the layers in a model. This is a common technique in both training and inference, though I'm not sure if TensorFlow Lite supports it.
    A pipeline would allow you to partition the model weights and compute across all of the NPUs, giving you a chance to run larger models than you could on a single NPU. Your PCIe setup is very low bandwidth, but that is less of a concern here because pipeline parallelism only sends the activations (relatively small input tensors in inference) between the NPUs, not the larger weights.
    Based on the limited information that I can find about Hailo and their sample hardware (a few of them have many Hailo chips on a single PCIe card), it looks like their software may support this.
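The pipeline-parallel scheme described above can be sketched in plain Python. Each thread stands in for one NPU holding a subset of the layers; the queues stand in for the PCIe links, and only small activation lists ever cross them (stage functions and splits here are toy assumptions, not a real Hailo/Coral API):

```python
import queue
import threading

# Toy stand-ins for per-NPU sub-models: each stage holds "its" layers and only
# exchanges activations with its neighbours -- weights never cross the link.
def make_stage(scale):
    def stage(activations):
        return [scale * a + 1.0 for a in activations]
    return stage

STAGES = [make_stage(s) for s in (0.5, 2.0, 1.5)]  # 3 simulated NPUs
SENTINEL = None

def npu_worker(stage, inbox, outbox):
    while True:
        item = inbox.get()
        if item is SENTINEL:          # propagate shutdown downstream
            outbox.put(SENTINEL)
            return
        outbox.put(stage(item))

# Wire the stages together with queues standing in for the PCIe links.
queues = [queue.Queue() for _ in range(len(STAGES) + 1)]
workers = [
    threading.Thread(target=npu_worker, args=(s, queues[i], queues[i + 1]))
    for i, s in enumerate(STAGES)
]
for w in workers:
    w.start()

# Stream inputs through; with real hardware every NPU is busy with a
# different frame at once, which is where the throughput gain comes from.
for x in [[1.0, 2.0], [3.0, 4.0]]:
    queues[0].put(x)
queues[0].put(SENTINEL)

results = []
while True:
    out = queues[-1].get()
    if out is SENTINEL:
        break
    results.append(out)
for w in workers:
    w.join()
print(results)
```

The point of the structure: each device only needs its own slice of the weights, so the combined model can be larger than any single NPU's memory.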

  • @omersalem73
    @omersalem73 3 months ago +1

    Hi Jeff, just wanted to let you know that you CAN use multiple hailo cores together, using the VDevice API - it automatically identifies the cores (granted it will only work with hailo chips).

    • @Level2Jeff
      @Level2Jeff 3 months ago

      HailoRT seems to have some multi-core configuration too... definitely some fun to be had here!

  • @TheMostOrdinaryPersonOnEarth
    @TheMostOrdinaryPersonOnEarth 3 months ago +1

    Please make a power outlet video, I'm just getting into HASS now and power monitoring is next on my list - Also great video cheers!

  • @jeremybarber2837
    @jeremybarber2837 3 months ago

    I would love to see a video going over your choice of power metering smart plugs & the integration into HomeAssistant.

  • @txga2005
    @txga2005 2 months ago

    Extremely helpful video for beginners

  • @tuttocrafting
    @tuttocrafting 3 months ago +4

    We need cheap PLX chips that take a newer-standard bus on the uplink and can provide many downlinks at lower speed.
    It's also starting to be annoying on consumer mainboards: 20-plus PCIe gen 5 lanes where almost all hardware is gen 3 or 4.
    Lots of throughput lost.

    • @Level2Jeff
      @Level2Jeff 3 months ago +2

      Couldn't agree more. Though those of us needing all that PCIe goodness are a slightly rare breed... and the answer till now is usually buy a big server CPU that gobbles up like 120W idle :D

    • @tuttocrafting
      @tuttocrafting 3 months ago

      @@Level2Jeff Even if we are the minority, I think the approach Apple took with the Mac Pro, using PLX and many lanes/slots, is something other HEDT/workstation OEMs should start to consider.
      New AM5 EPYC has been announced, but a lot of bandwidth is lost if you just plug a single HBA card into the x16 slot of today's mainboards.

  • @Flare1107
    @Flare1107 3 months ago +1

    How do GPU bit miners support the core splits? Some setups are looking at spreading a single calculation over thousands of cores. Maybe there's a way to port a miner task divider to NPU tasks. But I also wonder if we are still limited to running each TPUs individual Floating Point rating? Or could we run full 32-bit models?

  • @gu9838
    @gu9838 3 months ago

    hehe a pi mad scientist cobbling together contraptions no one thought of. love it! great scott!

  • @Brahvim
    @Brahvim 2 months ago

    "And I will see you, in the next one!"
    AAAAAAAAHHHHH WHO SAYS THAT?! WHO SAYS THAAAAT!??!?!
    It JUST escaped my head!
    WAIT, Chris Titus Tech does?!
    I just read the description!
    _Well then!_
    ...I think _this exact one_ is a bit common, is it not?

  • @MystikIncarnate
    @MystikIncarnate 3 months ago

    Hey Jeff! I'm still pretty curious about everything "AI". I'm just not sure how to take advantage of any of it right now. I'd love to see a video going over a bunch of different AI projects that these can be used for, either here or on the main channel. Obviously frigate is one, I've also seen some self hosted AI chat bots, though I'm not sure how well any of them would fare on a pi.
    I know you're the "pi" YouTuber, but I'm also curious about other applications of such accelerators. I wonder if an AI chat bot would work decently well on a 1L PC (or some similar micro x86 system) using something like the Hailo for processing rather than trying to cram a GPU in a small system like that.
    If you know of a YouTuber who is doing that sort of thing, I'm happy to check them out, just let me know.
    Keep up all the cool videos. Cheers!

  • @prince3121
    @prince3121 3 months ago

    Jeff the mad RPi guy! Love these builds when you push the envelope! 😎🤣

  • @AerialWaviator
    @AerialWaviator 3 months ago

    Being able to run two Hailo-8s would be a cool project, as in theory that would be 2x 26, or up to 52 TOPS. Combined with dual Pi cameras, it would offer high-frame-rate stereo depth of field, or other fun video processing.
    BTW: the Home Assistant monitoring is fascinating. Would be interested in hearing more details.

  • @catsupchutney
    @catsupchutney 3 months ago

    Jeff, you have to write a book with a chapter on each of these types of Pi mods.

  • @TheGamesZilla492
    @TheGamesZilla492 1 month ago

    Needs a compatible case to keep it all safe, or maybe a 3D-printed case. Sounds like there needs to be a Part 2 to this video.

  • @stillblazinkush
    @stillblazinkush 3 months ago +1

    Level 2 Jeff is truly on another level.

  • @JTB_Computers
    @JTB_Computers 3 months ago

    Here before 20k subscribers! Keep up the great work Jeff

  • @ur1friend437
    @ur1friend437 3 months ago

    Love these kind of videos so keep em coming

  • @tonysheerness2427
    @tonysheerness2427 3 months ago

    As the price of NPUs drops, more people will use them, and then people will write the software for them. That is what the Raspberry Pi was designed for: learning in a fun way.

  • @OriNachum
    @OriNachum 3 months ago

    Can you show us actual runs? Also would love to see the 10H version when it comes to your hands.

  • @ThereWillBeCake
    @ThereWillBeCake 2 months ago

    This isn't Level 2 Jeff. This is Level over 9000 Jeff.

  • @GameDesignerJDG
    @GameDesignerJDG 3 months ago +1

    I wonder how hard it would be for a Pi to orchestrate all these TPUs together. Could it offload some orchestration onto another TPU? I kind of want to write some code for this thing to see how it performs with multiple processors.

  • @azertyQ
    @azertyQ 3 months ago

    There's a Qualcomm AI accelerator that uses dual M.2 slots; that board could be perfect for it (I've never been able to find the spec for dual M.2, so the spacing might be off). Just one of those could get 200 TOPS at 25W.
    Good luck finding one though...

  • @w13rdguy
    @w13rdguy 3 months ago

    I know you won't tribute aVe, but, that would be hilarious!😂

  • @Xamz_pok
    @Xamz_pok 3 months ago +1

    Can these boards be plugged into a full-fledged PC?

  • @asteriondaedalus6859
    @asteriondaedalus6859 13 days ago

    8:20! Google doesn't support the software for the Coral?! Is that at all? Or are you meaning on PiOS? Or?

  • @novantha1
    @novantha1 3 months ago

    So like, objectively I know that the Hailo isn't really meant for inference of production grade LLMs or anything.
    ...But like, I still want to see if a person couldn't do something silly like a bespoke MoE architecture with 128M active parameters and still get okay quality and speed.

  • @chrisbiddle3674
    @chrisbiddle3674 3 months ago +2

    Great job Jeff, you achieved 55 flops. Also today, NVIDIA released GB200 NVL72 platform achieving 1.4 exaflops/tops 🎉

    • @Level2Jeff
      @Level2Jeff 3 months ago +1

      Heh, Nvidia costs a wee bit more too :)
      But the more you buy, the more you save!

  • @gendalfgray7889
    @gendalfgray7889 26 days ago

    Can those neural chips be used for something custom other than video recognition?
    Do you have an idea on how to utilize both TPUs of those dual Coral chips?
    I want to make a similar board for a desktop PC; how would I do that?

  • @turnkit
    @turnkit 3 months ago

    Before the WHAT, it's compelling to know the WHY.
    I waited for the payoff, but I guess I don't get why I'd want to use all these TOPS.
    I'm excited about using new tech, but using it is the key. The excitement about the build comes after the excitement about the functional power. So I guess I don't get this video too much.

  • @ws01212
    @ws01212 3 months ago

    I really like your videos; they always bring novel information.
    Do you know when it will support running an NVMe SSD and the Hailo AI Kit simultaneously? (with the NVMe SSD used as the system boot disk)

  • @sirkingjamz101
    @sirkingjamz101 3 months ago

    You chose to switch from "geerlings" and it seems it paid off for the better :)

  • @saiyantwan
    @saiyantwan 3 months ago

    Think you might be able to look over the NanoPi boards, since most have a built-in NPU? I know the NanoPi R5S has one. Not that much of one, but still there.

  • @jameswarnock5655
    @jameswarnock5655 3 months ago +2

    I had trouble getting my coral to work with codeproject ai long-term. It would work for a few hours but then stop responding. I don't know where the issue was on that. I mostly just gave up on it.

    • @Level2Jeff
      @Level2Jeff 3 months ago +2

      I had a similar issue in my testing, though didn't take too much time to debug it. One time it seemed to lock up the frontend of the Pi, had to force-poweroff!

  • @Sintrania
    @Sintrania 3 months ago

    We have Level 1 Tech, Level 2 Jeff... what's next, Level 3 Steve? 😂

  • @phischtv4497
    @phischtv4497 3 months ago

    What's the best lowcost solution today for running local image-detection CNNs on a Pi4 or Pi5? Those USB-TPUs?

  • @RickySupriyadi
    @RickySupriyadi 3 months ago

    So the Google TPU might be LPUs arranged in a certain way? Doesn't seem so, because LPUs have huge memory chips... well, the TOPS seem matched. I wonder where this experiment will go. This is really interesting. Really interesting indeed!

  • @nekomakhea9440
    @nekomakhea9440 3 months ago

    I wonder how feasible it is to put an abstraction layer in front of the 25 coral TPUs so that the software only sees one big TPU?
    That's more-or-less what RAID drivers for ZFS and BTRFS do so clusters of 25+ drives appear as one block device. And what CPUs do to make scalar code think it's still being executed in-order as the only process on a single-threaded CPU in a virtual memory space, despite out-of-order execution, superscalar execution, hyperthreading, multithreading, branch prediction, and more, all happening in the background.
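A minimal sketch of that abstraction layer, in plain Python with fake devices (a real version would bind each worker to one `/dev/apex_N` Coral via its own tflite interpreter; everything here is an assumption for illustration). The facade stripes a batch across N accelerators the way RAID 0 stripes blocks across disks, while callers see a single `infer_batch`:

```python
from concurrent.futures import ThreadPoolExecutor

class FakeTPU:
    """Stand-in for one accelerator; 'inference' is just a sum."""
    def __init__(self, name):
        self.name = name
    def infer(self, sample):
        return sum(sample)

class OneBigTPU:
    """Presents N accelerators as one device by striping the batch
    across them, RAID-0 style."""
    def __init__(self, devices):
        self.devices = devices
        self.pool = ThreadPoolExecutor(max_workers=len(devices))
    def infer_batch(self, batch):
        # Sample i goes to device i % N; result order is preserved
        # because we collect futures in submission order.
        futures = [
            self.pool.submit(self.devices[i % len(self.devices)].infer, s)
            for i, s in enumerate(batch)
        ]
        return [f.result() for f in futures]

tpu = OneBigTPU([FakeTPU(f"apex_{i}") for i in range(4)])
print(tpu.infer_batch([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]))
```

The caveat versus RAID: this only scales batch throughput; unlike the pipeline approach, each device still has to hold the whole model.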

  • @redactedofficial
    @redactedofficial 3 months ago

    Hell yeah, finally you don't just tease 😂 I was waiting soo badd. Am I rpi addicted 🤨

  • @tinkerscustom9568
    @tinkerscustom9568 3 months ago +1

    thanks for the video!

  • @avibank
    @avibank 3 months ago

    Might be interesting for scientific computing. Can you send instructions via MPI or something?

  • @farzadb82
    @farzadb82 3 months ago

    @level2Jeff Where did you buy the Hailo-8 card? On their website they only provide an option for product enquiry, not for purchase.

  • @guy_autordie
    @guy_autordie 3 months ago

    You need that 12xm.2 card with a 16x connector and put it in your ampere workstation.

  • @DJWESG1
    @DJWESG1 3 months ago

    first thing i did with chatgpt was to have it write the code to house a llm on a local raspberry system (with 4 units and 1 master)

  • @iammahabub
    @iammahabub 2 months ago

    Can we add these PCIe Hailo AI chips (i.e. 50+ TOPS) to our mini PC?
    I mean, if we want to add it to a NUC or Dell Opt. mini PC instead of a Raspberry Pi.

  • @HamguyBacon
    @HamguyBacon 3 months ago

    Can this be used for LLMs and Stable Diffusion, or is it only useful for video tracking?
    One day graphics cards will be replaced by AI accelerator cards, and all you'll need is a low-power GPU.

  • @isaacyonemoto
    @isaacyonemoto 3 months ago

    Aren't they all sharing memory bandwidth? Are there any AI M.2s with onboard memory?

  • @Information_Seeker
    @Information_Seeker 2 months ago

    software support for parallel processing huh, something similar to Bend programming language?

  • @SnapWireOnlyOne
    @SnapWireOnlyOne 3 months ago

    You don't need to make special software for them all to work together; just create something in Python and build off it. You know, what about using the Python multithreading library to trick it into multithreading?

  • @thk1523
    @thk1523 3 months ago

    Very soon, we will have an external NPU device connect via usb 4 just like eGPU 🚀💻

  • @SLLabsKamilion
    @SLLabsKamilion 3 months ago

    Marco Reps at the end?

  • @jacquesdupontd
    @jacquesdupontd 3 months ago

    Hey, thanks a lot for your videos. I'm wondering something that I'm sure has been addressed, but I can't find a definitive answer on this. Do TPUs accelerate local LLM answer generation (Ollama for example)? Thanks

  • @HannesZietsman
    @HannesZietsman 3 months ago

    but the real question is can it blend? what can one use this for?

  • @Thorhian
    @Thorhian 3 months ago

    This channel name sounds suspiciously like Level 1 Tech. Pi’s instead of massive and exotic server and workstation hardware?

  • @andre-le-bone-aparte
    @andre-le-bone-aparte 3 months ago

    Question: Oobabooga (text-generation) Web GUI supports multiple GPUs and TensorFlow (TPU) with mixing + matching - would that work for your setup?

  • @jeremybarber2837
    @jeremybarber2837 3 months ago

    Oh man… this makes me think of how great a CM5 board akin to the CM3588 NAS board from FriendlyElec but for NPUs would be. Wait… could you just use that board for NPUs as is?

  • @Chapbook
    @Chapbook 3 months ago

    What is the best combo for pi hat AI NPU and also running an NVMe SSD all together?

    • @Level2Jeff
      @Level2Jeff 3 months ago

      probably either the NVMe BASE duo from Pimoroni or the dual NVMe board from Pineboards right now.

  • @CoreDreamStudios
    @CoreDreamStudios 3 months ago

    Great information in the video. Is there a PCI-e board like this that would work in a desktop, without buying the Nvidia RTX 40 series?

    • @Level2Jeff
      @Level2Jeff 3 months ago

      Hailo makes a 200+ TOPS 'Century' card that straps a bunch together and would fit inside a desktop case (full height PCIe card).

    • @CoreDreamStudios
      @CoreDreamStudios 3 months ago

      @@Level2Jeff Thank you so much. 🙂

  • @anonymousshoe842
    @anonymousshoe842 3 months ago

    Btw what's the SSD and WiFi card for?

  • @Dylan_Lanckman
    @Dylan_Lanckman 3 months ago

    maybe 3D print a support for it?

  • @Dexter101x
    @Dexter101x 3 months ago

    Shame that the Hailo uses the PCIe slot, because I have an NVMe bottom using that slot

  • @grantwiersum7394
    @grantwiersum7394 11 days ago

    Does dask work? I'd be really interested if it doesn't.

  • @gamereditor59ner22
    @gamereditor59ner22 3 months ago +1

    Interesting, but cool!

  • @davesaquarium4825
    @davesaquarium4825 3 months ago

    Have you ever worked with a jetson nano by nvidia?

  • @大支爺
    @大支爺 3 months ago

    VS 4090 directly.
    FP8 16 32?

  • @ssteele1812
    @ssteele1812 3 months ago

    Hello Jeff. Odd question here. I have a Lenovo X230 that I will be repurposing to "play" with local AI, specifically LLMs. It is going to struggle with the onboard GPU, and I will eventually be moving everything to a different machine that will let me add a proper GPU card. Until then, is there a way that I could use a PCIe-slot-to-NVMe/M.2 adapter to put one of these little M.2 AI chips in the laptop? Since the machine has USB 3 ports on it already, there isn't really anything useful I can put in the slot. If the adapter actually worked, would the extra AI board do me any good without custom software to utilize it?

    • @Level2Jeff
      @Level2Jeff 3 months ago +1

      Yes, at least under Linux. Not sure about Windows support for these things.

  • @HiddenPalm
    @HiddenPalm 3 months ago

    Soooo does it like tell the time?

  • @jaffarbh
    @jaffarbh 3 months ago

    Even when using Nvidia big GPUs, running inference on multiple cards is not a good idea due to the communication overhead. Don't even try to do this on the RPi 😁

  • @VandalIO
    @VandalIO 27 days ago

    Can it run Cyberpunk with this accelerator card?

  • @RobertFabiano
    @RobertFabiano 3 months ago

    thanks for tinkering

  • @marsovac
    @marsovac 3 months ago

    Why do we need TPUs for object recognition? I was doing embedded recognition software for cameras that ran on basically Pentium 2 200MHz hardware just fine. Of course, classification and training of the algorithm was harder than just AI training, but it ran on basically anything. Today we need a special chip to draw a rectangle around you on the camera, while at the same time the ARM CPU can easily do that. Is it because the algorithms are easier to use and not proprietary?

    • @Level2Jeff
      @Level2Jeff 3 months ago

      You can do that on the Pi's CPU too; in fact it's nearly as fast for some tasks as the Coral itself. But the AI accelerator chips allow you to process hundreds of items at a time at 30+ fps, which can be helpful in many places. Or just a few items or a couple of camera feeds at 100+ fps, which is useful for high-speed robotics, sorting, etc.
      Not to mention it frees up the CPU to do other tasks with the data sorted by the NPU. Still, it's overkill for a lot of uses!

  • @digitaleswerken
    @digitaleswerken 3 months ago

    I guess it's going to be less than a year until the first NPU fixes the code for another one.

  • @bruce7442
    @bruce7442 7 days ago

    So, the Pi 5 could use more than one Hailo-8 at the same time?

  • @shephusted2714
    @shephusted2714 3 months ago

    You are saying hold back while other people are saying the trillion-dollar cluster is on the way. What would be nice is to combine a cluster of Pis, each with 25 TOPS, into one, like Nvidia does with its cards when they are in a cluster.

  • @zambonidriver42
    @zambonidriver42 3 months ago

    Kernel recompile?

  • @AbdelkaderBoudih
    @AbdelkaderBoudih 3 months ago +10

    Stop posting those AI generated videos. I know it's not you because you didn't recompile the kernel.

  • @ryamelp
    @ryamelp 3 months ago +1

    I know midnightlink irl

  • @echobucket
    @echobucket 3 months ago

    Can you run LLMs on these NPUs?

  • @Kaenguruu
    @Kaenguruu 3 months ago

    PCY though?

  • @StarcoreLabs
    @StarcoreLabs 3 months ago

    Find someone who makes custom boards. Try to collaborate to make one that can handle all the TOPS you want.

  • @event-zero
    @event-zero 3 months ago

    Put it in a PC and run Ollama on it, just to see how powerful it can be.