Using Ollama to Run Local LLMs on the Raspberry Pi 5

  • Published 25 Dec 2024

COMMENTS • 113

  • @pafnutiytheartist
    @pafnutiytheartist 3 months ago +10

    I appreciate you showing the actual inference speed; other videos usually cut around it, making it look unrealistically fast.

  • @sweetbb125
    @sweetbb125 7 months ago +11

    I've tried running Ollama on my Raspberry Pi 5, as well as an Intel Celeron based computer, and also an old Intel i7 based computer, and it worked everywhere. It is really behind impressive, thank you for this video showing me how to do it!

    • @DorukMergan-hq7jo
      @DorukMergan-hq7jo 4 months ago

      what do you mean by behind impressive

    • @kewa_design
      @kewa_design 3 months ago

      Extremely slow I guess @@DorukMergan-hq7jo

  • @metacob
    @metacob 8 months ago +38

    I just got a RPi 5 and ran the new Llama 3 (ollama run llama3).
    I was not expecting it to be this fast for something that is on the level of GPT-3.5 (or above). On a Raspberry Pi. Wow.

    • @brando2818
      @brando2818 7 months ago

      I just received my Pi, and I'm about to do the same thing. Are you doing anything else on it?

    • @zayedc7348
      @zayedc7348 3 months ago

      could you please give more data about how fast it runs? like tokens per second please
      I'm trying different models and having a hard time having a natural speed conversation with an LLM on a RPi 5
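
      (For the tokens-per-second figure asked about above, a minimal sketch assuming a recent Ollama build: the --verbose flag makes ollama print timing stats after each reply, including an "eval rate" in tokens per second.)

          ollama run tinyllama --verbose "Why is the sky blue?"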

    • @flrn84791
      @flrn84791 1 month ago

      Tinyllama is not on the level of GPT-3.5, let's be realistic...

  • @BillYovino
    @BillYovino 10 months ago +3

    Thanks for this. So far I've tested TinyLlama, Llama2, and Gemma:2b with the question "Who's on first" (a baseball reference from a classic Abbott and Costello comedy skit). TinyLlama and Llama2 understood that it was a baseball reference, but had some bizarre ideas on how baseball works. Gemma:2b didn't understand the question but when asked "What is a designated hitter?" came up with an equally incorrect answer.

    • @IanWootten
      @IanWootten  10 months ago

      Nice. I love your HAL replica. Was that done with a Raspberry Pi?

    • @BillYovino
      @BillYovino 10 months ago

      @@IanWootten Yes, a 3B+. I'm working on a JARVIS that uses the ChatGPT API and I'm interested in performing the AI function locally. That's why I'm looking into Ollama.

  • @KDG860
    @KDG860 9 months ago +3

    Thank u for sharing this. I am blown away.

  • @Flash136
    @Flash136 2 months ago +1

    Thank you for making this video. I've always wondered how these LLMs perform on low-end devices. Obviously, not great, but this looks promising!

  • @whitneydesignlabs8738
    @whitneydesignlabs8738 11 months ago +8

    Thanks, Ian. Can confirm. It works and is plausible. I am getting about 8-10 minutes for multi-modal image processing with Llava. I find the tiny models to be too dodgy for good responses, and have currently settled on Llama2-uncensored as my go-to LLM for the moment. Response times are acceptable, but looking for better performance. (BTW my Pi5 is using an NVMe drive and a HAT from Pineberry)

    • @IanWootten
      @IanWootten  11 months ago +2

      Nice, I'd like to compare to see how much faster an NVMe would run these models.

    • @whitneydesignlabs8738
      @whitneydesignlabs8738 11 months ago +4

      If you want to do a test, let me know. I could run the same model and query as you, and we could compare notes. My guess is that processing time has more to do with CPU and RAM, but not 100% sure. Having said that, a large (1TB+) NVMe makes storing models on the Pi convenient. Also boot times are rather expeditious. When the Pi5 was announced, I knew right away that I wanted to add an NVMe via the PCI Express connector. Worth the money, IMO. @@IanWootten

  • @SocialNetwooky
    @SocialNetwooky 11 months ago +24

    As I just said on the discord server: you might be able to squeeze a (very) tiny bit of performance by not loading the WM and just interacting with ollama via SSH. But great that it works as well with tinyllama! Phi based models might work well too! Dolphin-Phi is a 2.7B model.

    • @BradleyPitts666
      @BradleyPitts666 10 months ago

      I don't follow? What VM? ssh into what?

    • @SocialNetwooky
      @SocialNetwooky 10 months ago

      @@BradleyPitts666 WM ... Window Manager.

    • @SocialNetwooky
      @SocialNetwooky 10 months ago +6

      @BradleyPitts666 meh ... YouTube not showing my previous (phone written) answer again, so I can't edit it, and I can't see/edit my previous answer ... so this might be a near identical answer to another answer, sorry. I blame YouTube :P
      The edit is that I disabled even more services and got marginally faster answers.
      So: WM is the Window Manager. It uses resources (processor time and memory) while it runs, not a lot, but it's not marginal. So disabling the WM with 'sudo systemctl disable lightdm' and rebooting is beneficial for this particular use case. Technically, just calling 'systemctl stop lightdm' would work too, but by disabling and rebooting you make sure any services lightdm started really aren't running in the background. You can then use ollama on the command line.
      If you want to use it from your main system without hooking the RPi up to a monitor and plugging a keyboard into it, you can enable sshd (the SSH daemon, which isn't enabled by default in the Pi OS image afaik), ssh to it, and then use ollama there (THAT uses a marginal amount of memory though). I also disabled bluetooth, sound.target and graphical.target, snapd (though I only stop that one, as I need it for nvim), pipewire and pipewire-pulse (those two are disabled using systemctl --user disable pipewire.socket and systemctl --user disable pipewire-pulse.socket).
      Without any models loaded, at idle, I only have 154MB of memory used.
      With that configuration, tinyllama on the question 'why is the sky blue' gives me 13.02 t/s on my rpi5, so nearly 1/3rd faster than with all the unneeded services running
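
      (A consolidated sketch of the commands described in this thread, assuming Raspberry Pi OS with the lightdm desktop; exact service names may differ on other images:)

          sudo systemctl disable lightdm      # don't load the desktop / window manager
          sudo systemctl disable bluetooth
          systemctl --user disable pipewire.socket pipewire-pulse.socket
          sudo systemctl enable ssh           # keep SSH access for headless use
          sudo reboot
          # after rebooting, check idle memory use and re-run the benchmark
          free -h
          ollama run tinyllama --verbose "why is the sky blue"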

    • @DominequeTaylor
      @DominequeTaylor 6 months ago

      What about the new AI attachment that they announced for the Pi to do AI stuff? Would this work faster?

    • @SocialNetwooky
      @SocialNetwooky 6 months ago

      @@DominequeTaylor as far as I know it's for visual recognition, not for LLMs

  • @nilutpolsrobolab
    @nilutpolsrobolab 8 months ago +3

    Such a calm tutorial but so informative 💙

  • @isuckatthat
    @isuckatthat 11 months ago +3

    I've been testing llamacpp on it and it works great as well. Although, I've had to use my air purifier as a fan to keep it from overheating even with the aftermarket cooling fan/heatsink on it.

  • @allurbase
    @allurbase 11 months ago +1

    How big was the image? Maybe that affected the response time. Very cool, although I'm not convinced by tinyllama or the speed for a 7B model, but still crazy we are getting close. You should try something with more power like a Jetson Nano. Thanks!!

    • @IanWootten
      @IanWootten  11 months ago

      Less than 400KB. Might try a Jetson Nano if I get my hands on one.

  • @MarkSze
    @MarkSze 11 months ago +4

    Might be worth trying the quantised versions of llama2

  • @Augmented_AI
    @Augmented_AI 10 months ago +5

    How do we run this in Python, e.g. for speech-to-text and text-to-speech for a voice assistant?
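
    (Ollama also exposes a local REST API on port 11434 that any language, Python included, can call - a minimal sketch assuming the default install and that tinyllama has already been pulled; from Python the same JSON can be sent with the requests library:)

        curl http://localhost:11434/api/generate -d '{
          "model": "tinyllama",
          "prompt": "Why is the sky blue?",
          "stream": false
        }'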

  • @BenAulbrook
    @BenAulbrook 11 months ago +3

    I finally got my Pi5 yesterday and already have ollama working with a couple of models. But I'd like to provide text-to-speech for the output on the screen, and I'm having a hard time wrapping my brain around how it works... like allowing the Ollama output in the terminal to turn into audible speech. There are so many resources to pick from, and also just getting the code/scripts working; I wish it was easy to install an external package and have the internal functions just "work" without having to move files and scripts around. It becomes confusing sometimes.

    • @Wolkebuch99
      @Wolkebuch99 11 months ago

      Well, how about a Pi cluster where one node runs ollama and one runs a screen reader SSH'd into the ollama node? Could add another layer and have another node running NLP for the screen reader node, or a series of nodes connected to animatronics and sensors.

    • @davidkisielewski605
      @davidkisielewski605 10 months ago +2

      You can run Whisper alongside your model, from what I read. TTS and STT

  • @vishwanathasardeshpande7092
    @vishwanathasardeshpande7092 5 months ago +1

    Will the performance improve by adding an AI accelerator like the Hailo-8?

  • @HifiCentret
    @HifiCentret 1 month ago

    Can Ollama benefit from an NPU hat? The Hailo 26 TOPS for instance.

  • @Vhbaske
    @Vhbaske 8 months ago

    In the USA, Digilent also has many Raspberry Pi 5s available!

  • @daveys
    @daveys 9 months ago +1

    The Pi 5 is pretty good when you consider the cost, and what you can do with it. I picked one up recently for Python coding, and it runs Jupyter Notebook beautifully on my 4k screen. I might give the GPIO a whirl at some point in the near future.

    • @vedantdalvi7523
      @vedantdalvi7523 5 months ago

      I am considering buying a Pi 5 but confused between the 4GB vs 8GB models. Any suggestions?

    • @daveys
      @daveys 5 months ago

      @@vedantdalvi7523 - I have the 8GB model, no experience with the 4GB one…sorry I can’t help.

  • @m41ek
    @m41ek 11 months ago +1

    Thanks for the video! What's your camera, please?

  • @dinoscheidt
    @dinoscheidt 11 months ago +4

    Would love to know if the Google Coral board would provide a substantial improvement, if Ollama can even utilize that. Also, how it would compare to a Jetson Nano. Nonetheless: Thank you very much for posting this. Chirps to the Birds ❤️

    • @IanWootten
      @IanWootten  11 months ago +1

      That would be great to try out if I could get my hands on one.

  • @1091tube
    @1091tube 10 months ago +1

    Could the compute process be distributed, like grid computing? 4 Raspberry Pis?

    • @IanWootten
      @IanWootten  10 months ago

      Not really - a model file is downloaded to the machine using Ollama and brought into memory.

  • @markr9640
    @markr9640 10 months ago

    Really useful stuff on your videos. Subscribed 👍

  • @davidkisielewski605
    @davidkisielewski605 10 months ago +3

    Hi! I have the M.2 NVMe HAT and I am waiting for my Coral accelerator. Does anyone else run with the accelerator, and how much does it speed things up? I know what they say it does, but I am interested in real-world figures. I'll post when it arrives from Blighty.

    • @eMgotcha77
      @eMgotcha77 5 months ago

      Sorry to burst the bubble but Coral will not help. It has only 1GB DRAM. It's also limited to TensorFlow Lite AND each model needs to be converted with an edge-tpu-compiler, which limits the possible layer types even more.

  • @AlwaysCensored-xp1be
    @AlwaysCensored-xp1be 9 months ago

    Been having fun running different LLMs. The small ones are fast, the 7B ones are slow. I have a Pi5 8GB. The small LLMs should run on a Pi4? Tinyllama has trouble adding 2+2. They also seem monotropic, spitting out random vaguely related answers. I need more Pi5s so I can network a bunch with a different LLM on each.

  • @ray-charc3131
    @ray-charc3131 1 month ago

    Llama2 no longer runs on my Raspberry Pi5 8GB; there is an error message that the model requires more system memory (8.4G) than is available (7.4G). Can it be resolved, or did I get something wrong?
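
    (Two things worth checking in that situation - a sketch only, assuming Raspberry Pi OS with the desktop running: see how much RAM is actually free, and reclaim what the desktop is holding, as discussed further up the thread. Pulling a smaller or more heavily quantised model tag is the other option.)

        free -h                        # how much memory is actually available?
        sudo systemctl stop lightdm    # temporarily stop the desktop to free RAM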

  • @donmitchinson3611
    @donmitchinson3611 7 months ago

    Thanks for the video and testing. I was wondering if you have tried setting num_threads=3. I can't find the video where I saw this, but I think they set it before calling ollama, like an environment variable. It's supposed to run faster. I'm just building a rpi5 test station now.
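
    (For reference, a sketch of one way to pin the thread count - current Ollama builds expose it as the Modelfile parameter num_thread rather than an environment variable; the model and name below are just examples:)

        printf 'FROM tinyllama\nPARAMETER num_thread 3\n' > Modelfile
        ollama create tinyllama-3threads -f Modelfile
        ollama run tinyllama-3threads --verbose "Why is the sky blue?"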

  • @alexmcd378
    @alexmcd378 3 months ago

    I wonder if this is compatible with the AI Kit you can add to the Pi 5 for many times faster AI in other tasks.

  • @razvanstroe7044
    @razvanstroe7044 4 months ago

    If you get a TPU, could you run a bigger LLM, and even more efficiently?

    • @IanWootten
      @IanWootten  4 months ago

      I don't think so, since Ollama isn't built to leverage it.

  • @Lp-ze1tg
    @Lp-ze1tg 9 months ago

    Was this Pi 5 using a microSD card or external storage?
    How big a storage size is suitable?

    • @IanWootten
      @IanWootten  9 months ago

      Just using the microSD. I'd imagine speeds would be a fair bit better from USB or NVMe.

  • @Marco916SacTown
    @Marco916SacTown 11 months ago +2

    This is a good start - I bet the Raspberry Pi makers have a Pi 6 in the works with a better GPU to really drive these LLMs.

    • @IanWootten
      @IanWootten  11 months ago +3

      No doubt they will. But the Pi 4 was released 4 years ago, so you might have to wait a while.

    • @madmax2069
      @madmax2069 10 months ago

      That's wishful thinking.
      You might as well try to figure out how to run an ADLINK Pocket AI on a Pi 5.

  • @nmstoker
    @nmstoker 11 months ago +2

    Great video but it's not a good idea to encourage use of those all-in-one curl commands. Best to download the shell script, ideally look over it before you run it, but even if you don't check it first at least you have the file if something goes wrong
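
    (A sketch of what that looks like in practice, assuming the standard install script URL from the Ollama site:)

        curl -fsSL https://ollama.com/install.sh -o install.sh
        less install.sh     # look it over before running
        sh install.sh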

    • @IanWootten
      @IanWootten  11 months ago

      Yes, I've mentioned this in my other videos and in my blog post on this too.

    • @nmstoker
      @nmstoker 11 months ago

      @@IanWootten ah, sorry, hadn't seen that. Anyway thanks again for the video! I've subscribed to your channel as it looks great 🙂

  • @technocorpus1
    @technocorpus1 8 months ago

    Awesome! I want to try this now! Can someone tell me if it is necessary to install the model on an external SSD?

    • @IanWootten
      @IanWootten  8 months ago

      Not necessary, but may be faster. For all the experiments here I was just using a microSD.

    • @technocorpus1
      @technocorpus1 8 months ago

      @@IanWootten That's just amazing to me. I have a Pi3, but am planning on upgrading to a Pi5. After I saw your video, I downloaded ollama onto my Windows PC. It only has 4 GB RAM, but I was still able to run several models!

  • @user-vl4vo2vz4f
    @user-vl4vo2vz4f 10 months ago +4

    Please try adding a Coral module to the Pi and see the difference

    • @madmax2069
      @madmax2069 10 months ago

      A Coral module is not suited for this. It lacks the available RAM to really partake in helping an LLM run.
      What you really need is something like an external GPU, something like one of those ADLINK Pocket AI GPUs to hook up to the system, BUT it only has 4GB VRAM.

  • @Bigjuergo
    @Bigjuergo 11 months ago

    Can you connect it with speech recognition and make TTS output with pretrained voice model (*.index and *.pth) files?

    • @IanWootten
      @IanWootten  11 months ago

      You probably could, but it wouldn't give a quick enough response for something like a conversation.

    • @whitneydesignlabs8738
      @whitneydesignlabs8738 11 months ago

      I am working on something similar, but using a Pi4 for STT & TTS (and animatronics) and a dedicated Pi5 for running the LLM with Ollama like Ian demonstrates. They are on the same network and use MQTT as the communication protocol. This is for a robotics project. @@IanWootten

    • @isuckatthat
      @isuckatthat 11 months ago

      I've been trying to do this, but it's impossibly hard to get TTS set up.

    • @donniealfonso7100
      @donniealfonso7100 11 months ago

      @@isuckatthat Yes, not easy. I was trying to implement speech with Google WaveNet using the YouTube Data Slayer example. I put the key reference in the Pi user's .profile as an export. The script runs okay now, creating the mp3 files, but no speech, so I pretty much gave up as I have other fish to fry.

    • @nmstoker
      @nmstoker 11 months ago +1

      @@isuckatthat have you tried espeak? It would give robotic quality output but uses very little processing and works fine on a Pi

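      (A minimal sketch of that idea, assuming espeak is installed via apt and a model has already been pulled - ollama's one-shot output is simply piped into espeak:)

          ollama run tinyllama "Why is the sky blue?" | espeak --stdin
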
  • @GuillermoTs
    @GuillermoTs 9 months ago

    Is it possible to run on a Raspberry Pi 3?

    • @IanWootten
      @IanWootten  9 months ago

      Maybe one of the smaller models, but it'll run a lot slower than here.

  • @arkaprovobhattacharjee8691
    @arkaprovobhattacharjee8691 11 months ago +3

    This is so exciting! Can you pair this with a Coral TPU and then check the inference speed? I was wondering if that's possible

    • @madmax2069
      @madmax2069 10 months ago +1

      The Coral TPU isn't suited for this; it lacks the available RAM to do any good with an LLM. What you'd need is one of those ADLINK Pocket AI GPUs, but it only has 4GB VRAM.

    • @arkaprovobhattacharjee8691
      @arkaprovobhattacharjee8691 10 months ago

      @@madmax2069 makes sense.

  • @galdakaMusic
    @galdakaMusic 6 months ago

    What about redoing this video with the new RPi AI HAT? Thanks

    • @IanWootten
      @IanWootten  6 months ago +1

      Could do, but I don't think Ollama would be able to leverage it, plus it's not out yet.

  • @anonymously-rex-cole
    @anonymously-rex-cole 9 months ago

    Is that realtime? Is that how fast it replies?

    • @IanWootten
      @IanWootten  9 months ago +2

      All the text model responses are in realtime. I've only made edits when using llava since there was a 5 min delay between hitting enter and it responding...

  • @dibu28
    @dibu28 8 months ago

    Also try MS Phi-2 for Python, and Gemma 2B

  • @ioaircraft
    @ioaircraft 5 months ago

    Trying 10 Turing Pis with 40 NVIDIA Jetson Orins on them, with a primary Linux computer, so 41 systems total, clustered using Archer and Kubernetes. It will def rocket then... 4,000-6,000 TOPS, 1.2 terabytes of RAM.

  • @chetana9802
    @chetana9802 11 months ago

    Now let's try it on a cluster or an Ampere Altra?

    • @IanWootten
      @IanWootten  11 months ago

      Happy to give it a try if there's one going spare!

  • @theguywhoasked0.0
    @theguywhoasked0.0 2 months ago

    Wow, that's even faster than my PC, idk how

  • @AlexanderGriaznov
    @AlexanderGriaznov 8 months ago +1

    Am I the only one who noticed tinyllama's response to "why is the sky blue?" was shitty? What the heck, rust causing the blue color of the sky?

    • @IanWootten
      @IanWootten  8 months ago

      Others have mentioned it in the comments too. It is a much smaller model, but there are many others to choose from (albeit possibly slower).

  • @TreeLuvBurdpu
    @TreeLuvBurdpu 10 months ago

    What if you put a compute module on it or something?

    • @IanWootten
      @IanWootten  10 months ago +1

      A compute module is an RPi in a slightly different form, so I think it would behave the same.

  • @NicolasSilvaVasault
    @NicolasSilvaVasault 7 months ago

    That's super impressive even if it takes quite a while to respond - it's a RASPBERRY PI

  • @fontenbleau
    @fontenbleau 11 months ago

    Maybe better to try Mozilla's genius single-file LLM container project, llamafile. I was able to run LLaVA (which is also an image-scanning LLM) in llamafile on my 2011 laptop (some ancient GPU) with Windows 8. Ollama, I've tested, can't run on Win 8.

  • @zachhoy
    @zachhoy 11 months ago +1

    I'm curious why run it on a Pi instead of a proper PC?

    • @IanWootten
      @IanWootten  11 months ago +6

      To satisfy my curiosity - to see whether it's technically possible on such a low powered, cheap machine.

    • @zachhoy
      @zachhoy 11 months ago +1

      Thanks for the genuine response :D Yes, I can see that drive now. @@IanWootten

    • @TreeLuvBurdpu
      @TreeLuvBurdpu 10 months ago

      There's a lot of videos of people running it on their PC, but if you use it all the time it will hog your PC all the time. There are several reasons you might want a dedicated host.

  • @marsrocket
    @marsrocket 9 months ago

    What's the point of running an LLM locally if the responses are going to be nonsense? That blue sky response was ridiculous.

    • @IanWootten
      @IanWootten  9 months ago +1

      The response for that one model/prompt may have been, but there are plenty of others to choose from.

  • @realSkyfr
    @realSkyfr 4 months ago +1

    The model I have found works fastest is Qwen2 0.5B. The only downside is it thinks Trump is a knight and pope

  • @Tarbard
    @Tarbard 11 months ago +1

    I liked tinydolphin better than tinyllama.

    • @IanWootten
      @IanWootten  11 months ago

      Not tried it out yet.

  • @pengain4
    @pengain4 8 months ago

    I dunno. It seems cheaper to buy an actual second-hand GPU to run Ollama on than to buy an RPi. [Partially] a joke. :)

    • @IanWootten
      @IanWootten  8 months ago +1

      Possibly if you already have a machine. This might work out if you don't. Power consumption is next to nothing on the Pi too.

  • @therealwhite
    @therealwhite 3 months ago

    Anyone else read the title as Obama?

  • @flrn84791
    @flrn84791 1 month ago

    This is LLMs running on CPU... yikes

  • @c-LAW
    @c-LAW 2 months ago

    I can't understand language. Speak English like a real American.
    :)

    • @IanWootten
      @IanWootten  2 months ago

      I'll have you know I speak the Queen's English.

  • @blender_wiki
    @blender_wiki 11 months ago

    Too expensive for what it is. Interesting proof of concept, but absolutely useless and inefficient in a production context.

  • @BradleyPitts666
    @BradleyPitts666 10 months ago +3

    I have CPU usage at 380% when llama2 is responding. Anyone else tested?