Free and Local AI in Home Assistant using Ollama

  • Published Apr 16, 2024
  • ► MY HOME ASSISTANT INSTALLATION METHODS FREE WEBINAR - automatelike.pro/webinar
    ► DOWNLOAD MY FREE SMART HOME GLOSSARY - automatelike.pro/glossary
    ► MY RECORDING GEAR
    MAIN CAMERA: amzn.to/3Ln8qzb
    MAIN & 2ND ANGLE LENS: amzn.to/48bhxMZ
    2ND ANGLE CAMERA: amzn.to/44RjRWs
    SD CARDS: amzn.to/3sT7fRy & amzn.to/3sS0wHu
    MICROPHONE: amzn.to/466Kxne
    BACKUP MIC: amzn.to/468BSkb
    EDITING MACHINE: amzn.to/45LWdvS
    ► SUPPORT MY WORK
    Paypal - www.paypal.me/kpeyanski
    Patreon - / kpeyanski
    Bitcoin - 1GnUtPEXaeCUVWdJxCfDaKkvcwf247akva
    Revolut - revolut.me/kiriltk3x
    Join this channel to get access to perks - / @kpeyanski
    ✅ Don't Forget to like 👍 comment ✍ and subscribe to my channel!
    ► MY ARTICLE ABOUT THAT TOPIC - peyanski.com/home-assistant-o...
    ► DISCLAIMER
    Some of the links above are affiliate links. If you click on these links and purchase an item I will earn a small commission with no additional cost for you. Of course, you don’t have to do so in case you don’t want to support my work!
  • Howto & Style

COMMENTS • 71

  • @KPeyanski
    @KPeyanski  2 months ago +1

    Are you going to try this Home Assistant Ollama integration? And if yes, on what kind of device are you going to install the Ollama software?

  • @bugsub
    @bugsub 2 months ago +1

    Wow! Fantastic tutorial! Really appreciate your channel!

    • @KPeyanski
      @KPeyanski  2 months ago

      Glad it was helpful and thanks for the kind words!

  • @RocketBoom1966
    @RocketBoom1966 2 months ago +4

    Thank you, excellent content as usual. I have set up Ollama running in a Docker container on my Unraid server. The server has a low-power Nvidia GPU which I use to speed up responses.
    Another fun thing to try is to modify the end of the prompt template with something like this:
    Answer the user's questions using the information about this smart home.
    Keep your answers brief and do not apologize. Speak in the style of Captain Picard from Star Trek.
    Yes, my assistant will respond with answers in the style of Captain Picard.

    • @KPeyanski
      @KPeyanski  2 months ago

      Oh, that is very interesting, thanks for the info. But how do you make the HA Ollama integration answer with voice?

    • @RocketBoom1966
      @RocketBoom1966 2 months ago

      @@KPeyanski I have seen it done, however I have struggled to make it work. My modified prompt template only responds in text form, as you explained in your video. Things are moving so fast with these AI integrations, I imagine it won't be long until Home Assistant includes powerful AI tools by default. Exciting times.

    • @KPeyanski
      @KPeyanski  2 months ago

      Exciting times indeed :)

    • @EvgenMo1111
      @EvgenMo1111 1 day ago

      Hi, what size is your LLM?

  • @joeking5211
    @joeking5211 1 month ago

    Looks like a fantastic vid. Will keep an eye open for the Windows tutorial and come back then.

    • @KPeyanski
      @KPeyanski  1 month ago

      It is almost the same for Windows. You just have to install the Ollama Windows version and everything else is the same.

  • @FrankGraffagnino
    @FrankGraffagnino 2 months ago +1

    I _REALLY_ appreciate a tutorial that shows how to do this with a local LLM... very cool. Thanks!

    • @KPeyanski
      @KPeyanski  2 months ago

      You're very welcome! Are you going to try it, and on what device?

    • @FrankGraffagnino
      @FrankGraffagnino 2 months ago +1

      @@KPeyanski Probably not yet. But I just love when consumers can be better educated about local control. Thanks!

    • @KPeyanski
      @KPeyanski  2 months ago

      Yes, I also prefer local. Unfortunately it is not always an option.

  • @AlonsoVPR
    @AlonsoVPR 2 months ago +3

    I was waiting for someone to make a video about this! Thank you sir!!

    • @KPeyanski
      @KPeyanski  2 months ago

      Glad it was helpful! On what kind of device are you going to install the Ollama software?

    • @AlonsoVPR
      @AlonsoVPR 2 months ago +1

      @@KPeyanski I don't have enough horsepower for this at the moment; I'm into low consumption right now, but I'm thinking of getting a Proxmox server with a dedicated GPU. At the moment my whole house runs on a 2012 i5 Mac mini with 8 GB of RAM, also using Proxmox.

    • @KPeyanski
      @KPeyanski  2 months ago +1

      I understand, low power consumption is important, but an i5 is not that bad and you can try Ollama on it. If it is not OK, just delete/uninstall it!

    • @AlonsoVPR
      @AlonsoVPR 2 months ago

      @@KPeyanski Maybe when I get a better server with more RAM :P Sadly my old Mac mini has 8 GB of RAM soldered to the motherboard, and all my services are using about 72% of the RAM at the moment :P
      Now I'm struggling to find a good Zigbee mmWave sensor that doesn't spam the network :/ Any recommendations?
      I have tried the TUYA-M100 and the MTG275-ZB-RL. Although the MTG275-ZB-RL is way better than the TUYA, it's still spamming my Zigbee network several times per second.

    • @ecotts
      @ecotts 2 months ago

      I'm waiting for someone to make a video about all the data that META stole from your system as a result of the installation and then sold on to some random companies.

  • @BrettVilnis
    @BrettVilnis 2 months ago

    Thanks, excellent video.

    • @KPeyanski
      @KPeyanski  2 months ago

      Glad you enjoyed it! Are you going to try it?

    • @BrettVilnis
      @BrettVilnis 2 months ago

      @@KPeyanski When voice is working

    • @KPeyanski
      @KPeyanski  2 months ago

      No idea, hopefully soon

  • @SmartTechArabic
    @SmartTechArabic 5 days ago

    Thanks for the informative tutorial. I have set up an Ollama server on a separate machine, and the local LLM is working well through the Open WebUI. I set up the Ollama integration in Home Assistant and configured an Assist pipeline to use Ollama. But unfortunately, whenever I ask a question, I am not getting any response. What am I missing?

    • @KPeyanski
      @KPeyanski  2 days ago

      Try enabling debug on your Assist pipeline and check what is going on...
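
      Before digging into the pipeline debug, it can help to confirm the Ollama server is reachable from the network at all. A minimal sketch, assuming a hypothetical server address of 192.168.1.50 (substitute your own) and Ollama's default port 11434; `/api/tags` lists installed models, so any JSON reply means Home Assistant can reach the server:

      ```shell
      # Example IP - replace with your Ollama server's address.
      OLLAMA_URL="http://192.168.1.50:11434"
      echo "Checking ${OLLAMA_URL}/api/tags"
      # Run this on your network; a JSON list of models means the server is reachable:
      # curl "${OLLAMA_URL}/api/tags"
      ```

      If the curl times out, the problem is networking (firewall, OLLAMA_HOST binding) rather than the Assist pipeline itself.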

  • @floor18fdb
    @floor18fdb 2 months ago

    So for Ollama I need a second device that is always on? Is it possible to install it directly on a Home Assistant server?

    • @KPeyanski
      @KPeyanski  2 months ago +1

      No, with this integration that is not possible. At least for now...

  • @Palleri
    @Palleri 2 months ago +3

    Could you share the prompt template you are using?

  • @PauloAbreu
    @PauloAbreu 2 months ago

    Great tutorial! Thanks. Is English the only language available?

    • @KPeyanski
      @KPeyanski  2 months ago

      Not sure about that, but I think yes!

  • @danninoash
    @danninoash 2 months ago

    Hi, great video first of all, THANKS!!
    What's missing for me is the BT proxy... how do I configure it? Is it a must? Why isn't this part mentioned in the video? :(

    • @KPeyanski
      @KPeyanski  2 months ago +1

      A BT proxy is not needed at all here. The communication between Home Assistant and Ollama is over the IP network, so just follow the steps from the video and you will have it; nothing additional is needed.

    • @danninoash
      @danninoash 2 months ago

      @@KPeyanski SORRY!! I confused my question with another video of yours - the one about creating an Apple Watch as a device in HA LOL :))

    • @danninoash
      @danninoash 2 months ago

      @@KPeyanski What I wanted to ask here actually is - will I have to have a machine that is turned on 24/7 (whether it's Win/Linux/macOS)?
      I didn't fully understand what I should do with it after I connect my HA with the Ollama integration.
      Question #2 please - does it somehow interfere with my Alexa, or does it work alongside it?
      THANKS!!

    • @danninoash
      @danninoash 2 months ago

      ???

  • @miguelcid1965
    @miguelcid1965 2 months ago

    With Ollama, is it able to turn on lights or entities in general? I read on the Home Assistant integration page that with the Ollama integration it isn't possible, but maybe that was before? Thanks.

    • @marcomow
      @marcomow 25 days ago

      Now it's possible, upgrade HA to 2024.6!

  • @fred7flinstone
    @fred7flinstone 25 days ago

    I am getting "Unexpected error during intent recognition".

  • @michaelthompson657
    @michaelthompson657 2 months ago

    I'm assuming since it can be installed on Linux, you could have this on a separate Pi running Raspberry Pi OS Lite and connect it to your other Pi running HA? I have HA on a Pi 4 and have a spare Pi 3; just wondering if the Pi 3 would be powerful enough to run Ollama?

    • @KPeyanski
      @KPeyanski  2 months ago

      This is interesting indeed, but I guess you have to try it out. It will be best if you share the result!

    • @michaelthompson657
      @michaelthompson657 2 months ago

      @@KPeyanski Do you think I could install it on Raspberry Pi OS Lite? I'm very inexperienced with Pi OS.

    • @KPeyanski
      @KPeyanski  2 months ago

      I don't know, you can try...

    • @michaelthompson657
      @michaelthompson657 2 months ago

      @@KPeyanski I'm not that good 🤣

  • @jacquesdupontd
    @jacquesdupontd 1 month ago

    Thanks for the very good video. I know that you can now make a pretty good integration of GPT in HA and have trigger and speech exchanges. I imagine it's gonna be even easier and perfect (and creepier at the same time) with GPT-4o. I'm sure we'll be able to control devices and have speech and triggers soon for Ollama. I subscribed to your channel.

    • @KPeyanski
      @KPeyanski  1 month ago +1

      Thanks for subscribing! Yes, integrating GPT into Home Assistant is becoming increasingly seamless, and GPT-4 will likely make it even more intuitive and powerful. It's exciting (and a bit creepy) to think about how advanced and interactive our smart homes can become soon. Stay tuned for more updates!

    • @jacquesdupontd
      @jacquesdupontd 1 month ago

      @@KPeyanski I'm doing research to build some kind of Amazon Echo with a local LLM and maybe a screen. A bit like the ESP32-S3-BOX but better. Not for commercialisation for now (I'm sure there are tons of projects like that being developed). I'm still not sure what device to use to handle the local LLM. A GPU is a huge plus but takes up too much space. The best would be a Mac Mini M1; Ollama LLMs work wonders on it. I have to check how well Asahi Linux works and if I can pack everything into it (personal home server, Home Assistant, Ollama, voice assistant).

    • @jacquesdupontd
      @jacquesdupontd 12 days ago

      Little update. I now have a few ESP32s (KORVO, S3, Atom Echo) and I've been playing a bit (you can check my latest videos to see my little setup). For now I'm only using external AI, because Ollama is not able to control our devices yet and also it is still quite slow compared to Google or GPT. It's working great. My next project is to take a Bluetooth speaker and hack it with an ESP32-S3 to make it a voice assistant device like a Google Nest or Amazon Echo Dot.

  • @markrgriffin
    @markrgriffin 2 months ago

    Probably a dumb question, but how do I expose Ollama on my network if I install it on Windows? The instructions are not very specific.

    • @KPeyanski
      @KPeyanski  2 months ago

      Follow the instructions from the Ollama documentation and add the Ollama IP in your OLLAMA_HOST variable. These are the steps:
      On Windows, Ollama inherits your user and system environment variables.
      First, quit Ollama by clicking on it in the task bar.
      Edit system environment variables from the Control Panel.
      Edit or create new variable(s) for your user account for OLLAMA_HOST, OLLAMA_MODELS, etc.
      Click OK/Apply to save.
      Run ollama from a new terminal window.

    • @markrgriffin
      @markrgriffin 2 months ago +1

      @KPeyanski Thanks for the reply. So just add the two variable names? With no values? That's where I'm stuck, unfortunately. Do I not need to add a path to OLLAMA_MODELS and an IP for the host as variables?
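
      For what it's worth, the variables do need values. A minimal sketch of the idea (the host binding and model path below are example values, not from the video; on Windows you would set the same names and values through the Environment Variables dialog instead of `export`):

      ```shell
      # Example values only - adjust to your own setup.
      # 0.0.0.0 makes Ollama listen on all network interfaces,
      # and 11434 is Ollama's default port.
      export OLLAMA_HOST=0.0.0.0:11434
      # Optional: custom directory where pulled models are stored.
      export OLLAMA_MODELS="$HOME/ollama-models"
      echo "Ollama will listen on $OLLAMA_HOST"
      # Then restart the server from a new terminal: ollama serve
      ```

      With OLLAMA_HOST bound to 0.0.0.0, Home Assistant can reach the server at `http://<windows-machine-ip>:11434`.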

  • @sirmax91
    @sirmax91 2 months ago

    Can you make it run on a Raspberry Pi 5 and link it to Home Assistant?

    • @KPeyanski
      @KPeyanski  2 months ago

      I think yes, but I guess you have to try it.

  • @MichaelDomer
    @MichaelDomer 2 months ago +1

    Get rid of that Llama 2; version 3, which was just released, completely destroys it.

    • @KPeyanski
      @KPeyanski  2 months ago

      Sounds good, are you using it already? And for what exactly?

  • @hpsfresh
    @hpsfresh 16 days ago

    This video needs chapter time codes.

    • @KPeyanski
      @KPeyanski  14 days ago

      Sorry, I'm too lazy for that right now and there is no one willing to help either...

  • @OrlandoPaco
    @OrlandoPaco 2 months ago

    Add voice!

    • @KPeyanski
      @KPeyanski  2 months ago

      Yes, voice is needed here... Maybe in the next release!

  • @KubedPixel
    @KubedPixel 2 months ago +6

    Under NO CIRCUMSTANCES is anything Facebook-related going ANYWHERE near my network, offline/local or not.

    • @KPeyanski
      @KPeyanski  2 months ago +2

      No problem, you can select another model that has nothing in common with Meta & Facebook.

    • @andrewtfluck
      @andrewtfluck 2 months ago +3

      Ollama, the tool, is separate from Facebook/Meta. You can run Llama on it, but you have a variety of other LLMs to choose from.

    • @KubedPixel
      @KubedPixel 2 months ago

      @@andrewtfluck WhatsApp WAS a separate tool from Facebook... not any more.
      Ollama was developed by Meta (Facebook) and I'm 99% sure there are 'call home' beacons in the code somewhere. Also, just out of principle, I will not use anything Facebook-related.

    • @Busy_Paws
      @Busy_Paws 1 month ago +1

      Paranoia

  • @ecotts
    @ecotts 2 months ago +3

    I will never in my life intentionally add anything META-related to any of my systems. Hell No!! 😂

  • @rude_people_die_young
    @rude_people_die_young 2 months ago

    Shouldn't be hard to do function calling, hey

    • @KPeyanski
      @KPeyanski  2 months ago

      You mean a voice function, or something else?

    • @rude_people_die_young
      @rude_people_die_young 2 months ago

      @@KPeyanski I mean where the LLM emits valid JSON that can be used in commands or API calls. It's a confusing AI term.
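
      The idea described above can be sketched in a few lines. This is a hypothetical example, not the integration's actual mechanism: the model's reply is constrained to JSON naming a service and its arguments, and a wrapper script extracts the field it needs before making the real API call:

      ```shell
      # Hypothetical model reply constrained to a JSON "function call"
      # (service name and entity are example values):
      reply='{"name":"light.turn_on","arguments":{"entity_id":"light.kitchen"}}'
      # Extract the intended service with Python's stdlib json module:
      service=$(echo "$reply" | python3 -c 'import json,sys; print(json.load(sys.stdin)["name"])')
      echo "model wants to call: $service"
      # A real wrapper would now POST this call to Home Assistant's REST API.
      ```

      Because the reply is machine-readable JSON rather than free text, the wrapper can validate it before executing anything, which is the whole point of function calling.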