I appreciate you showing the actual inference speed; other videos usually cut around it, making it look unrealistically fast.
I've tried running Ollama on my Raspberry Pi 5, as well as on an Intel Celeron based computer and an old Intel i7 based computer, and it worked everywhere. It is really behind impressive; thank you for this video showing me how to do it!
What do you mean by "behind impressive"?
Extremely slow, I guess @@DorukMergan-hq7jo
I just got a RPi 5 and ran the new Llama 3 (ollama run llama3).
I was not expecting it to be this fast for something that is on the level of GPT-3.5 (or above). On a Raspberry Pi. Wow.
I just received my Pi, and I'm about to do the same thing. Are you doing anything else on it?
Could you please give more data about how fast it runs? Like tokens per second, please.
I'm trying different models and having a hard time getting a natural-speed conversation with an LLM on an RPi 5.
Tinyllama is not on the level of GPT-3.5, let's be realistic...
Thanks for this. So far I've tested TinyLlama, Llama2, and Gemma:2b with the question "Who's on first?" (a baseball reference from the classic Abbott and Costello comedy skit). TinyLlama and Llama2 understood that it was a baseball reference, but had some bizarre ideas about how baseball works. Gemma:2b didn't understand the question, but when asked "What is a designated hitter?" it came up with an equally incorrect answer.
Nice. I love your Hal replica. Was that done with a Raspberry Pi?
@@IanWootten Yes, a 3B+. I'm working on a JARVIS that uses the ChatGPT API, and I'm interested in performing the AI function locally. That's why I'm looking into Ollama.
Thank you for sharing this. I am blown away.
Thank you for making this video. I've always wondered how these LLMs perform on low-end devices. Obviously, not great, but this looks promising!
Thanks, Ian. Can confirm: it works and the output is plausible. I am getting about 8-10 minutes for multi-modal image processing with Llava. I find the tiny models to be too dodgy for good responses, and have currently settled on Llama2-uncensored as my go-to LLM for the moment. Response times are acceptable, but I'm looking for better performance. (BTW my Pi 5 is using an NVMe drive and a HAT from Pineberry.)
Nice, I'd like to compare to see how much faster an NVMe drive would run these models.
If you want to do a test, let me know. I could run the same model and query as you, and we could compare notes. My guess is that processing time has more to do with CPU and RAM, but I'm not 100% sure. Having said that, a large (1TB+) NVMe makes storing models on the Pi convenient. Also, boot times are rather expeditious. When the Pi 5 was announced, I knew right away that I wanted to add an NVMe via the PCI Express connector. Worth the money, IMO. @@IanWootten
As I just said on the Discord server: you might be able to squeeze out a (very) tiny bit of performance by not loading the WM and just interacting with Ollama via SSH. But great that it works as well with TinyLlama! Phi-based models might work well too! Dolphin-Phi is a 2.7B model.
I don't follow? What VM? ssh into what?
@@BradleyPitts666 WM ... window manager.
@BradleyPitts666 meh ... YouTube isn't showing my previous (phone-written) answer again, so I can't see or edit it ... so this might be a near-identical answer to another one, sorry. I blame YouTube :P
The edit is that I disabled even more services and got marginally faster answers.
So: WM is the window manager. It uses resources (processor time and memory) while it runs; not a lot, but it's not marginal. So disabling the WM with 'sudo systemctl disable lightdm' and rebooting is beneficial for this particular use case. Technically, just calling 'systemctl stop lightdm' would work too, but by disabling and rebooting you make sure any services lightdm started really aren't running in the background. You can then use Ollama on the command line.
If you want to use it from your main system without hooking the RPi to a monitor and plugging a keyboard into it, you can enable sshd (the SSH daemon, which isn't enabled by default in the Pi OS image AFAIK), then SSH to it and use Ollama there (THAT uses a marginal amount of memory though). I also disabled Bluetooth, sound.target and graphical.target, snapd (though I only stop that one, as I need it for nvim), and pipewire and pipewire-pulse (those two are disabled using 'systemctl --user disable pipewire.socket' and 'systemctl --user disable pipewire-pulse.socket').
Without any models loaded, at idle, I only have 154MB of memory used.
With that configuration, TinyLlama on the question 'why is the sky blue' gives me 13.02 t/s on my RPi 5, so nearly a third faster than with all the unneeded services running.
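For anyone wanting to try the same trimming, here is a minimal sketch of the commands described above, assuming Raspberry Pi OS with the default lightdm display manager; service names can differ between images (check with 'systemctl status' first), and the hostname/user in the SSH line are placeholders.

    # Disable the desktop and Bluetooth, which aren't needed for headless Ollama use
    sudo systemctl disable lightdm bluetooth
    # Disable the user-level audio services mentioned above
    systemctl --user disable pipewire.socket pipewire-pulse.socket
    # Enable SSH so the Pi can be reached without a monitor and keyboard, then reboot
    sudo systemctl enable ssh
    sudo reboot
    # After the reboot, connect from another machine and run a model
    ssh pi@raspberrypi.local
    ollama run tinyllama "Why is the sky blue?"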
What about the new AI attachment they announced for the Pi to do AI stuff? Would this work faster?
@@DominequeTaylor As far as I know it's for visual recognition, not for LLMs.
Such a calm tutorial but so informative💙
I've been testing llama.cpp on it and it works great as well. Although I've had to use my air purifier as a fan to keep it from overheating, even with the aftermarket cooling fan/heatsink on it.
How big was the image? Maybe that affected the response time. Very cool, although I'm not convinced by TinyLlama or the speed for a 7B model, but it's still crazy how close we are getting. You should try something with more power like a Jetson Nano. Thanks!!
Less than 400KB. Might try a Jetson Nano if I get my hands on one.
Might be worth trying the quantised versions of llama2
How do we run this in Python, for speech-to-text and text-to-speech in a voice assistant?
I finally got my Pi 5 yesterday and already have Ollama working with a couple of models. But I'd like to provide text-to-speech for the output on the screen, and I'm having a hard time wrapping my brain around how it works, like allowing the Ollama output from the terminal to turn into audible speech. There are so many resources to pick from, and just getting the code/scripts working is a struggle. I wish it were easy to install an external package and have everything just "work" without having to move files and scripts around; it gets confusing sometimes.
Well, how about a Pi cluster where one node runs Ollama and one runs a screen reader SSH'd into the Ollama node? You could add another layer and have another node running NLP for the screen-reader node, or a series of nodes connected to animatronics and sensors.
From what I read, you can run Whisper alongside your model for STT, plus a TTS engine for the output.
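On the terminal-to-speech question above, one low-effort option is to pipe the non-interactive 'ollama run' output into a TTS engine. A rough sketch, assuming espeak is installed (the package may be espeak-ng on newer images); speech only starts once the full response has been generated:

    # Install a lightweight TTS engine (robotic sounding, but very cheap on CPU)
    sudo apt install espeak
    # Ask a model a question and read the reply aloud when it finishes
    ollama run tinyllama "Why is the sky blue?" | espeak --stdin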
Will the performance improve by adding an AI accelerator like the Hailo-8?
Can Ollama benefit from an NPU HAT? The 26 TOPS Hailo, for instance.
In the USA, Digilent also has many Raspberry Pi 5s available!
The Pi 5 is pretty good when you consider the cost, and what you can do with it. I picked one up recently for Python coding, and it runs Jupyter Notebook beautifully on my 4k screen. I might give the GPIO a whirl at some point in the near future.
I am considering buying a Pi 5 but am torn between the 4GB and 8GB models. Any suggestions?
@@vedantdalvi7523 - I have the 8GB model, no experience with the 4GB one…sorry I can’t help.
Thanks for the video! What's your camera, please?
Would love to know if the Google Coral board would provide a substantial improvement, if Ollama can even utilize it. Also, how would it compare to a Jetson Nano? Nonetheless: thank you very much for posting this. Chirps to the Birds ❤️
That would be great to try out if I could get my hands on one.
Could the compute process be distributed, like grid computing, across 4 Raspberry Pis?
Not really - a model file is downloaded to the machine using Ollama and brought into memory.
Really useful stuff on your videos. Subscribed 👍
Hi! I have the M.2 NVMe HAT and I am waiting for my Coral accelerator. Does anyone else run with the accelerator, and how much does it speed things up? I know what they say it does, but I am interested in real-world figures. I'll post when it arrives from Blighty.
Sorry to burst the bubble, but Coral will not help. It has only 1GB of DRAM. It's also limited to TensorFlow Lite, AND each model needs to be converted with an edge-tpu-compiler, which limits the possible layer types even more.
Been having fun running different LLMs. The small ones are fast, the 7B ones are slow. I have a Pi 5 8GB. The small LLMs should run on a Pi 4? TinyLlama has trouble adding 2+2. They also seem monotropic, spitting out random, vaguely related answers. I need more Pi 5s so I can network a bunch with a different LLM on each.
Llama2 no longer runs on my Raspberry Pi 5 8GB; there is an error message that the model requires more system memory (8.4G) than is available (7.4G). Can it be resolved, or did I get something wrong?
Thanks for the video and testing. I was wondering if you have tried setting num_thread=3. I can't find the video where I saw this, but I think they set it before calling Ollama, like an environment variable. It's supposed to run faster. I'm just building an RPi 5 test station now.
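For what it's worth, I'm not aware of an environment variable for this, but thread count can be passed as a model option. A hedged sketch against the local Ollama API; the model name and the value 3 are just examples:

    # Request a completion with the thread count pinned to 3 of the Pi 5's 4 cores
    # (the response streams back as JSON lines)
    curl http://localhost:11434/api/generate -d '{
      "model": "tinyllama",
      "prompt": "Why is the sky blue?",
      "options": { "num_thread": 3 }
    }'
    # The same option can be set inside the interactive session:
    #   /set parameter num_thread 3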
I wonder if this is compatible with the AI Kit you can add to the Pi 5 for many-times-faster AI in other tasks.
If you get a TPU, could you run a bigger LLM, and more efficiently?
I don't think so, since Ollama isn't built to leverage it.
Was this Pi 5 using a microSD card or external storage?
How big does the storage need to be?
Just using the microSD. I'd imagine speeds would be a fair bit better from USB or NVMe.
This is a good start - I bet the Raspberry Pi makers have a Pi 6 in the works with a better GPU to really drive these LLMs.
No doubt they will. But the Pi 4 was released 4 years ago, so you might have to wait a while.
That's wishful thinking.
You might as well try to figure out how to run an ADLINK Pocket AI on a Pi 5.
Great video, but it's not a good idea to encourage use of those all-in-one curl commands. It's best to download the shell script and ideally look it over before you run it; even if you don't check it first, at least you have the file if something goes wrong.
Yes, I've mentioned this in my other videos and in my blog on this too.
@@IanWootten Ah, sorry, I hadn't seen that. Anyway, thanks again for the video! I've subscribed to your channel as it looks great 🙂
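For anyone following that advice, a minimal sketch of the download-then-inspect flow; the install URL shown is the one Ollama publishes at the time of writing and may change, so treat it as an assumption:

    # Fetch the installer to a file instead of piping it straight into sh
    curl -fsSL https://ollama.com/install.sh -o ollama-install.sh
    # Read it (or at least keep a copy) before running it
    less ollama-install.sh
    sh ollama-install.sh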
Awesome! I want to try this now! Can someone tell me if it is necessary to install the model on an external SSD?
Not necessary, but it may be faster. For all the experiments here I was just using a microSD.
@@IanWootten That's just amazing to me. I have a Pi 3, but am planning on upgrading to a Pi 5. After I saw your video, I downloaded Ollama onto my Windows PC. It only has 4GB of RAM, but I was still able to run several models!
Please try adding a Coral module to the Pi and see the difference.
A Coral module is not suited for this. It lacks the RAM to really help an LLM run.
What you really need is an external GPU, something like one of those ADLINK Pocket AI GPUs to hook up to the system, BUT it only has 4GB of VRAM.
Can you connect it with speech recognition and make TTS output with a pretrained voice model (*.index and *.pth) file?
You probably could, but it wouldn't give a quick enough response for something like a conversation.
I am working on something similar, but using a Pi 4 for STT & TTS (and animatronics) and a dedicated Pi 5 for running the LLM with Ollama like Ian demonstrates. They are on the same network and use MQTT as the communication protocol. This is for a robotics project. @@IanWootten
I've been trying to do this, but it's impossibly hard to get TTS set up.
@@isuckatthat Yes, not easy. I was trying to implement speech with Google WaveNet using the Data Slayer example from YouTube. I put the key reference in the Pi's user .profile as an export. The script runs okay now, creating the MP3 files, but there's no speech, so I pretty much gave up as I have other fish to fry.
@@isuckatthat Have you tried espeak? It would give robotic-quality output but uses very little processing and works fine on a Pi.
Is it possible to run on a Raspberry Pi 3?
Maybe one of the smaller models, but it'll run a lot slower than here
This is so exciting! Can you pair this with a Coral TPU and then check the inference speed? I was wondering if that's possible.
The Coral TPU isn't suited for this; it lacks the RAM to do any good with an LLM. What you'd need is one of those ADLINK Pocket AI GPUs, but it only has 4GB of VRAM.
@@madmax2069 makes sense.
What about redoing this video with the new RPi AI HAT? Thanks
Could do, but I don't think Ollama would be able to leverage it, plus it's not out yet.
Is that realtime? Is that how fast it replies?
All the text model responses are in realtime. I've only made edits when using llava since there was a 5 min delay between hitting enter and it responding...
Also try MS Phi-2 for Python, and Gemma:2b.
Trying 10 Turing Pis with 40 Nvidia Jetson Orins on them, plus a primary Linux computer, so 41 systems total, clustered using Archer and Kubernetes. It will definitely rocket then... 4,000-6,000 TOPS, 1.2 terabytes of RAM.
That sounds crazy
Now let's try it on a cluster, or an Ampere Altra?
Happy to give it a try if there's one going spare!
Wow, that's even faster than my PC, idk how.
Am I the only one who noticed TinyLlama's response to "why is the sky blue?" was shitty? What the heck, rust causing the blue colour of the sky?
Others have mentioned it in the comments too. It is a much smaller model, but there are many others to choose from (albeit possibly slower).
What if you put a compute module on it or something?
A Compute Module is an RPi in a slightly different form, so I think it would behave the same.
That's super impressive even if it takes quite a while to respond. It is a RASPBERRY PI.
EXACTLY!
Maybe better to try Mozilla's genius LLM-container-in-one-file project, llamafile. I was able to run LLaVA (which is also an image-scanning LLM) in llamafile on my 2011 laptop (some ancient GPU) with Windows 8. I've tested Ollama and it can't run on Win 8.
I'm curious why run it on a Pi instead of a proper PC?
To satisfy my curiosity - to see whether it's technically possible on such a low-powered, cheap machine.
thanks for the genuine response :D Yes I can see that drive now. @@IanWootten
There are lots of videos of people running it on a PC, but if you use it all the time, it will hog your PC all the time. There are several reasons you might want a dedicated host.
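A minimal sketch of the dedicated-host idea: expose Ollama on the Pi's network interface and query it from another machine. The Pi's address below is a placeholder, and binding to 0.0.0.0 means anything on your LAN can reach it, so only do this on a trusted network.

    # On the Pi: make the Ollama server listen on all interfaces, not just localhost
    # (if the install script set up a systemd service, stop it first or set OLLAMA_HOST in its unit instead)
    OLLAMA_HOST=0.0.0.0 ollama serve
    # On your laptop: point the CLI (or any HTTP client) at the Pi and use it as usual
    OLLAMA_HOST=http://192.168.1.50:11434 ollama run tinyllama "Why is the sky blue?"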
What’s the point of running a LLM locally if the responses are going to be nonsense? That blue sky response was ridiculous.
The response for that one model/prompt may have been, but there are plenty of others to choose from.
The model I have found works fastest is Qwen2 0.5B. The only downside is it thinks Trump is a knight and the Pope.
I liked tinydolphin better than tinyllama.
Not tried it out yet.
I dunno. It seems cheaper to buy an actual second-hand GPU to run Ollama on than to buy an RPi. [Partially] a joke. :)
Possibly if you already have a machine. This might work out if you don't. Power consumption is next to nothing on the Pi too.
Anyone else read the title as Obama
This is LLMs running on CPU... yikes
I can't understand language. Speak English like a real American.
:)
I'll have you know I speak the Queen's English.
Too expensive for what it is. Interesting proof of concept, but absolutely useless and inefficient in a production context.
I have CPU usage at 380% when Llama2 is responding. Has anyone else tested this?