0:53 - My brain must've been turned off when I was making that slide. The Coral does 4 TOPS, not 2. And the Hailo-8L is more like 3-4 TOPS/W, not the 8 I put in the slide.
Sorry about that, wish I could edit that bit in the video but YouTube doesn't have any mechanism to allow it! Thanks to @thijsp for pointing out my glaring mistake!
Good night, I'm getting some sleep; it's morning in the UK, middle of the night here ;)
Reminder to pin your errata comment ;)
Yeah, I figured this video released when it did because of the Pi Foundation. Saw three 5-10 second vids about the AI stuff from them just after your vid released.
Hey Jeff, question: could the Hailo be used to accelerate video upscaling? I just started diving into it and I'd love to find a more efficient way than running my 2080 Ti at full speed for days on end 😅
Ooh, full-body VR thanks to the Raspberry Pi, great!
What is the possibility of running a small 3-billion-parameter LLM for real-time text generation?
Love your attitude towards the AI hype. Much of it seems like a manager or executive's idea of what problems people have, like forgetting what they were doing on their computer an hour ago or summarizing emails
Recall seems like one of the worst possible ideas to highlight as a feature. Not sure what Microsoft was thinking on that one!
@@JeffGeerling Same here. Biggest concern is privacy. Where is that data really going?
Recall is an excellent idea, just not for Windows: it belongs in camera glasses. You would never have to look for anything again, not only on Windows. The only condition under which I would use such a feature is on a machine with no internet connection. All updates would be done via SD card.
I really like the idea of Recall. I have a very bad memory and I run a company. I spend a huge amount of time documenting what I've done and ensuring it's searchable. I put it all in a database and have my own LLM able to search it for me in context. It works well enough to be a huge time saver. You tend to hit issues where you're feeding it a lot of context, and I don't know how MS can get around that with small, locally run models.
What MS is doing is my system, but on steroids. The big downside that means I'll never use it is I don't trust them. My inference server was built by me, configured by me and the data on it is controlled by me.
Microsoft have been cracking passwords on people's compressed archives stored on OneDrive to analyse the contents.
The potential of their recall system for really busy people is huge. The risk associated with it is also huge.
On Trashfuture their guest suggested that systems like recall exist to collect training data. So like if they have a huge library of user interactions they can train some kind of new and potentially very useful type of model. But they should be upfront about that! Ask people to join a testing beta or something. Saying “here I made this for you” and it’s just a surveillance machine is super weird.
One of the quotes I heard that I really liked was "It's only called AI till it has an actual practical use. Then we just call it what it is". Think of things like automatic captioners or OCR tools, or some of the earlier tools of CGI.
I heard it as "if it works, it isn't AI".
I guess Mozilla's plan to have local "AI" create alt text for images without native alt text in PDFs and web pages viewed through Firefox will need a new name if it's implemented correctly, since it solves an actual accessibility problem (e.g. making those images "readable" by a screen reader).
Like religion.
So-called AI actually comes from early OCR applications many decades ago. The principal concept is still the same, namely statistical programming. "AI" just sounds catchier in marketing terms.
If it's written in Python, it's Machine Learning.
If it's written in PowerPoint, it's AI.
I'm an AI apologist, but I want people to research it and find useful tools, not just jump on the hype train. I hold the same feelings about NFTs: a key that is nearly impossible to forge is massively useful, if only we could look past selling monkey pictures.
"Ah yes, with our new AI-POWERED DNS RESOLVER, all your problems are over!" *evil laughter*
hehehe
You never know where it takes you
YES
A fuzzy dns resolver … resolves names with varying levels of certainty
AI-powered DNS changelog "best of": Day 34, Round Robin replaced with Ubiquitous Pigeon - named thus because it s**ts on everything.
4:20 “Enough about AI“ - music to my ears.
It is so hard to say AI, especially multiple times in one video... not sure how the corporate presenters do it!
@@JeffGeerling "AI will make you RICH and SEXY, after you give US all your moneys!" Rolls right off the tongue.
@@JeffGeerling They have AI to do that for them
@@JeffGeerling hah, you should've heard the main keynote at this year's Red Hat Summit... literally every second word was AI. They seem to be the experts, ha-ha
Jeff: I want all the TOPS
RaspberryPi: We don’t recom…
Jeff: ALL THE TOPS!!!
Jeff: don’t forget that you can use a Coral TPU with each USB port as well.
A couple of years ago I attempted to use a webcam and Coral TPU (based on a YouTube tutorial) with eye-gaze and muscle twitches to control a mouse … to control a PC as a communication device for my late wife, who had ALS. However, a much more sophisticated (and $$$$$) device with integrated software was loaned to her by our state ALS support team before I completed it. Still, there's a practical use for AI and a Raspberry Pi 5 plus AI device, for probably 10% of the cost of what they provided on loan.
There are many... My internship some 7 years back was research on using SSD512 (YOLO v2 (3?) was faster but worse) for machine vision in industrial process inspection, because the price of "all in one" cameras for that field was... eye watering. When a "dumb" model trained in 3 days on an off-the-shelf GPU, on quite a low amount of data, coupled with a not-fancy "basic" camera totally outperformed the VERY expensive (won't name the manufacturer) camera with all its bells and whistles and a painstakingly hand-crafted software solution, there was silence in the room. The writing was on the wall.
Guess that's why I'm not all that fazed by the current AI hype train. It's not really news. It's like a DC motor. Once you had them, you just had to keep finding more ways to use them. And getting more data and beefier machines to train on ;)
Do you still have the project on hand? It might be interesting to have that extra level of control. Might even make Tony Stark type holograms possible...
@@ErazerPT I had basic object recognition and (slow) tracking a bit earlier, running on what could best be described as a heterogeneous I2C "network" of ARM chips. There was a kit with bits of it made that one could buy for a while, and then integrate into/onto an OS-hacked Lego NXT to stick onto whatever; in my case a sort of robot platform built of Lego combined with PCBs, screws and bits of alu. The end result could chase a specifically colored ball around, or explore an environment, map it, and search for a specific item (an item in this case being something it could recognize at 'decent fps', meaning about 1-2 at least). If it took more than a sec to recognize something it would move on; it mostly did pick up the 'tagged items' in about half a sec or so.
But the point is that, in a funny way, my little NN AI Legobot is version 0.001 of the Terminator, looking for its John Connor LOL. I did for a short while program it to try to run over the tagged object in a bid to "terminate" it, but it kept flipping over and the IR sensors of the bot bent out of alignment, so I changed it to just beep instead.
I read your comment and was intrigued. Did you ever develop it to completion?
@@RICHARD_WASD no. I didn't. Caring for my wife took all my available energies at the time.
The killer app for the Co(Pi)lot around Geerling Engineering will be to use pose detection to sound an alarm and activate a water cannon whenever Red Shirt Jeff is about to do something that would release the magic smoke.
Nerf gun with AI target recognition on a pan tilt platform…
This is exactly what AI should be used for, not constantly watching what you're doing on your PC and not telling you to eat "at least one small rock per day" 🤣 Glad to see that the RPi Foundation is tapping into the best form of AI tools, the ones that are actually helpful and fun, all at a fairly low cost.
This looks great! For my farming robot I'm planning to add eight Raspberry Pi HQ cameras to four Raspberry Pi 5s, and I originally planned to send those feeds over gigabit to an Nvidia Orin. And while centralizing has value, I could perhaps use these with some decentralized processing. Really cool to see Raspberry Pi focused on real-world automation. That's what I'm doing! I don't need another chatbot; I need computer vision for my farming robot.
Unless you need high FPS, the Pi AI Kit will probably be sufficient
"i dont think a farmer wants a 4090 taking up 600 watts on his tractor." bro you realize a 580 HP diesel tractor is around four hundred THOUSAND watts....lmao
Yeah. It literally wouldn't be a rounding error.
still I don't think they would want a 4090 there
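For scale, a quick back-of-the-envelope check of that horsepower comparison (a rough sketch in Python, using the standard 1 mechanical horsepower ≈ 745.7 W conversion):

```python
# Back-of-the-envelope check: 580 HP tractor vs. a 600 W GPU.
tractor_watts = 580 * 745.7                  # 1 mechanical horsepower ≈ 745.7 W
gpu_watts = 600
print(f"{tractor_watts:,.0f} W")             # ≈ 432,500 W
print(f"{gpu_watts / tractor_watts:.2%}")    # the GPU is roughly 0.14% of engine power
```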
I disagree with your T-Shirt. "/ is the root of all problems".
Nah it's C-Root fo sho.
It's always DNS to blame. Network admins can relate.
"/is the root of all"
In DNS it's .
In your case it’s ~
Looking forward to seeing this tried with the Alftel 12x PCI Express M.2 Carrier Board
I remember when I was first getting the PCIe on the Pi5 going, and everyone was screaming about signal integrity!
Now look at you!
Jeff: "Wildly unsupported configuration"
Jeff also: *daisy chains 5 pineberry M.2 hats*
Good stuff Jeff. Can't wait to see more. Keep up the good work!
Man, calling out that some AI is "a solution in search of a problem" is insanely true. It's unfortunate that many people in tech have that mentality; when they can't think of a good problem to solve they just build something and see if anyone wants to buy it. Thank you for being the internet's voice of reason!
You KNOW we want to see all these on the Alftel board! Would be interesting to see the software support for multiple NPUs in general. Wondering how useful a (future) CM5 based carrier with a PCIe switch and 4-6 m.2 slots for NPUs would be. Would probably run out of RAM or network bandwidth trying to do anything useful, maybe. I think my 8x4MP cameras at 10FPS is only like 55Mbps into Frigate.
I love that you really stress test the I/O on these; of course we want to see them on the big card.
Glad to see the tech tubers talking about how we can use our own AI. I love AI and wish more YouTubers talked about the hardware and how to install it. It is a little annoying how people don't understand how revolutionary AI is compared to something like NFTs that had no practical use case.
5:15 Pose Estimation is also useful for building a camera-based FBT setup for VR Apps
Without lidar, structured light, or IMUs, I doubt it would be accurate enough to be useful. If HD lidar and structured light sensors get cheap, though, they'll also need AI models to do tracking, and this would become useful.
Honestly, I think printing out paper QR codes or something as trackers is probably a good solution, as we're pretty good at tracking those already.
Calling AI "a solution in search of a problem" is such an accurate statement.
Well, AI can do all manner of things, but beyond specific edge cases there isn't much it can do well enough to be worth it. It's often slower and more expensive at many of the things people have tried to use it for. Fraud/theft detection by analyzing bank transaction patterns, for example, is one thing I know it has been found to do quite well.
This could be said about single board computers as well. It's up to users to figure out what to do with it. That's the fun bit.
Another vote here for Whisper running on this. I have Whisper, llama3 and dog doing speech -> text -> prompt processing -> text-to-speech on a 3060 GPU with 8GB RAM, and it works really well but uses a lotta watts. Would be great to run this sort of stack on a Pi (or multiple Pis; still way less power than a PC + 3060).
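For anyone wanting to try that kind of stack, here's a minimal sketch of a speech -> text -> LLM -> speech loop. It assumes the openai-whisper, ollama, and pyttsx3 Python packages are installed and that a local Ollama server has the llama3 model pulled; "question.wav" is a hypothetical recording.

```python
# Minimal speech -> text -> LLM -> speech loop (sketch).
import whisper
import ollama
import pyttsx3

stt = whisper.load_model("base")   # small Whisper model keeps memory use modest
tts = pyttsx3.init()               # offline text-to-speech engine

def answer(wav_path: str) -> str:
    """Transcribe a recorded question and speak the LLM's reply."""
    text = stt.transcribe(wav_path)["text"]
    reply = ollama.chat(
        model="llama3",
        messages=[{"role": "user", "content": text}],
    )["message"]["content"]
    tts.say(reply)
    tts.runAndWait()
    return reply

if __name__ == "__main__":
    print(answer("question.wav"))  # hypothetical audio file
```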
This would be great for machine vision in 3D printers, and maybe even for optimizing parameters dynamically
I look forward to seeing the AI development, especially for mobile offline use (I swear I'm totally not an AI that replied to this video after a minute). P.S. Love the shirt :D
How do I *know* you're not AI, though???
That's what an ai would say! ;)
@@JeffGeerling How do we know it wasn't your AI that posted this comment?
An externally powered 12-slot board is the way to go :)
Title: "I Built a CoPilot+ AI PC (without Windows)"
My Brain: we trust Jeff... let's watch the video.
Gotta say, the title was frightening. 🙂 It's nice to see someone sane cover this stuff in a meaningful way.
I think Raspberry Pi are aiming for the AI Kit to be as much of a teaching tool as it is something useful. One of their newer lesson activities is about training your own model to identify apples vs tomatoes, only they get you to train it on just green apples and then ask you why it identifies red apples as tomatoes. I suspect there will be learning material to follow shortly about pose estimation with the AI camera, and running models locally with the AI Kit.
I wonder if it would be able to distinguish between muffins and chihuahuas
I appreciate how you labeled most cases ML. For what people are using desktop “ai” for, agreed, it’s just ML.
3:28 That is right. I ran Ollama and Stable Diffusion on my R630 in CPU mode, with 44 cores available, 512GB of RAM over 8 channels, and a full-bandwidth PCIe 3.0 x16 NVMe drive.
Ollama is relatively responsive, with 10 to 20 sec per answer, and Stable Diffusion, at 8 min per 768x768px image with 150 inference steps, is also fast enough for me.
OK! Ordering an AI Kit and a Pi Camera 3. Now I need to wait until someone takes this Pi AI Kit and programs it for license plate recognition and storing of a picture/plate in a DB. This will be the last added feature of the "SMART" brick mailbox I am working on: a camera on each side, with an IR light strip behind glass blocks around the top of it. Since I live on a corner lot I get a lot of traffic, and the local police come over and ask if they can view my home security camera video for a specific time frame, or I send them a video. They say it has been very helpful. With cameras and license plate recognition it should be interesting. I don't have to worry about neighbors complaining, as they all consider it a positive thing and call to check who was at their front door during the day. You have to love technology!
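If someone does build that, the storage half is the easy part. A rough sketch using Python's built-in sqlite3 module; read_plate() is a hypothetical stand-in for whatever plate-recognition model ends up running on the AI Kit.

```python
# Log plate sightings to a local SQLite database (sketch).
import sqlite3
import time

def read_plate(image_path: str):
    """Hypothetical placeholder for a plate-recognition model on the AI Kit."""
    raise NotImplementedError("plug in your ALPR model here")

db = sqlite3.connect("plates.db")
db.execute("""CREATE TABLE IF NOT EXISTS sightings
              (ts REAL, camera TEXT, plate TEXT, image_path TEXT)""")

def log_sighting(camera: str, image_path: str) -> None:
    """Store one detection with a timestamp and a path to the saved frame."""
    plate = read_plate(image_path)
    if plate:
        db.execute("INSERT INTO sightings VALUES (?, ?, ?, ?)",
                   (time.time(), camera, plate, image_path))
        db.commit()
```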
The RPi Foundation should have done OpenCL support on the GPU before the AI Kit. Some applications need just a few TOPS, and not being able to use the GPU is such a bummer. It's unused, already-paid-for hardware. So much for the 'champion' of software support among SBCs.
Hard to believe that they decided not to, given how big a market computer vision is for both enthusiasts and industry
11:30 your honesty and willingness to post something like this is a breath of fresh air compared to people posting magic
A video on the bigger, beefier, board would be great 👍
Not useless like the R1 or AI Pin 💀💀💀💀💀💀💀💀💀💀
2:23 "Solution in search of a Problem" - Quote of the year.
If I hear “AI” multiple times in a row I think of “Old MacDonald Had A Farm”…
AI AI OH
Your comment on machine learning was refreshing, highlighting the utility as well as issues :)
This was a Red Shirt Jeff project 😅. I knew it wasn't going to work the second I saw the spaghetti of ribbon cables. Wattage was blinking in my mind.
I was going to say I already have a security camera making clips of motion without the need for AI, but the example at 4:30 clearly shows why even this little M.2 AI chip is a huge aid for this stuff; that's quite an improvement in detection already.
Last year I worked for a company building tractor attachments. We made a weeder that used YOLOv8 with an RTX 2080 Ti in an Intel NUC to detect plants at high speed. I guess it consumed about 600 W, but no one cared because all the motors consumed way more. Time is money, so speed was way more important than a couple of watts. So yeah, farmers use GPUs for machine vision/AI. :)
Farmers are intense!
@@JeffGeerling but the other side is also true. I've got a new project trying to detect eggs with something like the ESP32-CAM, to do active load balancing on belts to save time and energy. It's all about the right tool for the right job.
This is interesting, I've seen a lot of really cool things people are starting to do with home assistant voice assistant using an AI platform. Something like this could feasibly make that completely local, which is where I begin to get excited
To be honest, this is better than spending money on an Nvidia Jetson-based board, I guess. Yes, it may be faster, but a Raspberry Pi is a Raspberry Pi: lots of support. This is awesome; thanks for covering this, Jeff.
Nice video. Looking forward to the next try!~
What an incredible pine tree you have created. I would personally like to see where this is going. It would be great to get a reasonable LLM working well on some sort of an SBC monstrosity for not a ton of money. I really do think the use case would justify the means, provided the cost savings are there.
Great video, Jeff! It's actually always useful to find another way something doesn't work (see Edison). This looks like fun to play with. Maybe partner with Electro-Boom? He loves to let the magic smoke out!
Very cool Jeff, I really like your approach to this.
Sure... at the moment it seems more interesting than the external GPU pursuit,
and I think down-to-earth examples of (blank) will be good for learning / getting back into IT.
Thanks, Jeff!
Birds? I like how the Hailo setup using that YOLOv5 model example is somewhat more sure than not that your 4-channel scope on the shelf is a TV set. The dataset is "still a kid" and has only seen a TV... yet! 🙃
Every black box is a TV :D
This is tons of fun. I’d love to see a follow-up on the multi-module board.
I've seen people trying to convert to text just to take advantage of the usual LLMs, actively training them as a by-product
I just started building one of these! Excited to see your build
5:22 the poses you made had me spitting eggs out with a laugh. Love it.
Man, these videos are awesome for DIYers to nerd over. Jeff... you're the best!!
You reminded me of my day with a ZX-81 kit.
It was such a mess I went for a fully assembled TRS-80 color computer.
@JeffGeerling The Pi does have enough CPU to do detection; it just needs to be set up right, like Frigate in overdrive. You can run a low-computation algorithm such as 'frame differencing', which is a movement and tracking algorithm, then forward a letterboxed, centered cut of each object. You don't need to be told it's a cat at 60 FPS; you could even go lower than 1 FPS and still be tracked and detected.
If you ever fancy kickstarting something, rather than an all-in-one such as Frigate: a container system where each algorithm forwards frames upstream to bigger models would allow much more flexibility for users to mix and match a cascade of containers that record time-based metadata in some form of SRT file.
Often an algorithm such as YOLO is processing full 1080p, whilst you could be cutting and scaling based on the output of a low-computation movement & tracking algorithm.
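A minimal sketch of that kind of motion gate using OpenCV frame differencing (assumes the opencv-python package; only regions that actually moved would be cropped and handed on to a heavier detector such as YOLO):

```python
# Frame-differencing motion gate (sketch).
import cv2

cap = cv2.VideoCapture(0)                 # or an RTSP camera URL
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                        # pixel-level change
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                      # ignore tiny changes
            x, y, w, h = cv2.boundingRect(c)
            crop = frame[y:y + h, x:x + w]
            # forward `crop` to the heavier model here
    prev = gray
```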
Steve Ballmer - Developers! Developers! Developers! Developers! Developers!
Satya Nadella - AI! AI! AI! AI! AI!
This does look really cool though. I so badly want to replicate that Frigate setup you have Jeff.
4:03 That’s not a tractor, it’s a Combine Harvester, not to be picky 😅
heh, can you tell I'm no farmer??
Perfect brief on the AI hype. I'm positive about it, but I was using CoreML in 2017 to get iPads to recognize snowboard ramps for MTN DEW. The marketing and laymen love to fluff things up, but from vision to some of the agent work now with LLMs, there are actual use cases... which said laymen will latch onto a bit too hard, too 😅
"It's always good when there's no magic smoke." Is something I'm adopting.
Great video! There aren't many out there right now, so thank you for making this. I'm quite interested in seeing its performance running LLMs via Ollama. I tried with just an RPi 5 alone, but it takes the vision model LLaVA about 5 whole minutes to interpret an image! I'm hoping the AI Kit can improve that vastly. Have you given it a go yet?
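For reference, this is roughly how one could time that kind of local LLaVA query through the ollama Python client. A sketch only; it assumes Ollama is running on the Pi with the llava model pulled, and it runs on the Pi's CPU (the Hailo-8L in the AI Kit targets vision models, not LLMs).

```python
# Time a local vision-model query via Ollama (sketch).
import time
import ollama

start = time.time()
response = ollama.chat(
    model="llava",
    messages=[{
        "role": "user",
        "content": "Describe this image in one sentence.",
        "images": ["snapshot.jpg"],   # hypothetical local image file
    }],
)
print(response["message"]["content"])
print(f"took {time.time() - start:.1f} s")
```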
The amount of useful info you dump in your videos is amazing. Plus, you have to deal with health issues, so a brain turning off and some typos are nothing to write home about. Always a pleasure to get a video from you. You're one of the great content producers of our time.
I fell off my chair LMAO when you said "It's not like the Rabbit" 😂😂😂😂😂
I'm more impressed by how you managed to daisy-chain PCIe channels than by the AI stuff
I would love to see more of this type of AI compute option!
Traffic sign recognition would be a good use for this setup in your own car, so you never miss a speed limit sign.
Fantastic. Will need to take a look to see what documentation the Pi Foundation has published, and what frameworks are supported. Am curious how the Pi Foundation plans to make AI approachable for hobbyists and educators. (Perhaps an interview/chat possibility? ;) Awesome shirt BTW.
Yeah; I've followed along with the Hailo-8 for the past year or so, but it was a struggle to get it going at first, then it's been annoying having dev stuff locked behind a registration wall... still hoping that goes away on Hailo's end, but at least the Pi stuff is all open/free to access without registration now!
It seems like a lot of the software side was finally published over the last week, which really made it easier to get things working versus a few months ago, where only a few people on the planet seemed to have the right incantations.
I would like to see you try using Whisper ASR (voice recognition) with this AI board. It would be cool to have voice recognition that doesn't use the internet.
I found your channel through SmarterEveryDay. I love YouTube channels that are interesting! And I thank you both! 🌈
Calling out Rabbit and Humane like that...lolz 🙂
Love this channel for showing me pi project ideas
Helpful information. Thank you for making these kinds of videos.
Planning on grabbing an AI Kit. Need language model and generative AI support on my Raspberry Pi. Gonna be using it as my own home server to replace GitHub, Google Drive, and the many cords and wonky methods of transferring (or losing) files between devices. I don't need a lot of AI power, just enough to actually get started. If it takes months to fine-tune, and hours for a single image, that's fine. I'm patient.
They do make a Hailo-10H generative AI accelerator that does 40 TOPS, but they don't sell it to the general public.
Great work Jeff, you're a legend 👍
Thank you so much for this video! I am intending to build an AI Home Waste Separation System, so I find your video so inspiring! Thank you!
In a few years, once the AI Kit (and the PCIe Hat for that matter) are available in Australia, I would love to give it a try!
I live for the Frankenstein setup
What I would love to see are some hardware solutions for video encode/decode outside of the built-in H.264, something that works with Plex or Jellyfin for example, so that workloads like sonic analysis and real-time encoding can be offloaded. I know a GPU is obviously an answer, but I mean specifically something like one of these devices, not intended to replace the GPU but to become a dedicated compute device for crunching the workload; this would be awesome.
I kinda dig the multi-external-card approach and wish it were the way forward; it gives us the choice of whether we want to install NPUs or not, and how many of them, into our PCs.
I loathe the fact that vendors force it into their CPUs/GPUs, wasting precious die space, driving up cost and eating into power budgets as well, especially in laptops.
Imagine a fully fledged Tensor card from Nvidia that you could plug in separately from your GPU, rendering DLSS on a different PCIe slot while making both chips smaller and more efficient!
I'm really pleased to hear your take on AI.
When I was finishing up high school (quite a while ago, now), I had intentions of doing cryptography as a career path. I didn't end up doing that, and I'm kind of glad I didn't; nowadays, when I say "I was very interested in crypto", I have to make sure I add the -graphy to the end.
The same thing is happening to people who have spent their lives working on AI (generally) and ML (less, though still fairly, generally). Their entire field is being boiled down to spicy autocomplete, and it makes it very difficult to do "real" ML work unless you find some way to incorporate the letters LLM into your grant proposal.
As an aside, as the crypto hype dies down, I'm considering going back and doing a master's in cryptography. Hopefully the folks like me but in AI don't have to wait as long as I have.
Excellent, loved your video, but thinking about my requirements, I think I'd probably be better off stumping up 200-plus bucks for the Hailo card. Thank you
Gah. Thanks for using the term machine learning! Drives me nuts with the AI-craziness in the tech world atm. :)
You are a cruel person! At first I thought I was on ElectroBOOM's channel. I really enjoy watching your experiments; that's the kind of fun I am often missing. Thank you!!
Open, locally runnable, user-aligned AI is a cool thing. It seems the only really hard to run ones are LLMs, ironically.
Well, maybe you can get away with simpler LMs since they aren't that large
Dude, my question is, what can't you do? I mean, you know everything!! (I enjoy your videos, man!)
That is a big thing people seem to forget about NPUs versus GPUs. NPUs are more efficient in so many ways for "AI". Then when you compare how far NPUs have come in a short time versus GPUs, which have been around 25+ years, little things like Hailo seem exciting for the future of machine learning. And yes, of course we want to see you make more abominations >:D
I would like a simple AI booster that could host models on-device, to be used with computers that lack a decent NPU. Preferably just plug-and-play via USB, and possible to plug in as a server that applications can hook up to instead of ChatGPT, an image generation model or whatever it might be. I just would like to see some standardization of the different interfaces, so you don't have to have support in every piece of software for whatever you are running.
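Some of that standardization already exists in the form of OpenAI-compatible HTTP endpoints, which several local servers (Ollama, llama.cpp's server, LocalAI) expose. A hedged sketch of pointing an app at a local box instead of a cloud API; the URL, port and model name are assumptions about your own setup.

```python
# Query a locally hosted, OpenAI-compatible endpoint instead of a cloud API.
import requests

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",   # Ollama's default port
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Hello from my own hardware"}],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```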
These are my favourite kind of videos. Keep them coming 🥳🥳🥳
While I don't need, or have the cash to burn on, something like this, it's great that you show how to do the same stuff without corporate bloatware and other craptastic design flaws. It's also 100% more useful than failed 10-year-old wearable tech they shoehorned an LLM into, like the Humane "AI Pin". Also, great job on putting more thought and effort into making something useful with AI than either Google or Microsoft are in their designs.
Thanks for sharing! :) :) Great episode.
It's handy to know that Jeff is not 100% a person. CoPilot verified.
I tried Frigate on a Pi 5 with a USB Edge TPU, and while the detection worked OK, my 6 cameras were killing the Pi's CPU with ffmpeg processes. I went back to my little NUC clone, which handles it all with cycles to spare.
That was a great mess! I love it. Now we need a base (stand) with a PCIe x4 slot and external power to connect the RPi 5 to… More power!
Doctor Geerenstein cobbling up a monster, lol. Next, try to run an Nvidia GPU with AI on the Pi.
Missed an opportunity to place a square over you with your name as you said "Until next time... I'm Jeff Geerling"
Your collection of coral modules could be called a coral reef. 🪸
This is kinda cool. As much as I dislike all the AI hype and the dystopian things being pushed, AI and machine learning can be kinda interesting. I am not a huge fan of new tech for the most part, but it could be made interesting and useful without being intrusive and dystopian. I just wish more people and companies would think things through and implement new tech properly without the bad stuff, but we all know that won't happen.
The next version of Frigate will have Hailo support; it's already in the dev builds for 0.15.