Thanks for the review of my custom integration. Extremely well presented as always! I'll be working on the auto YAML output next; it's nearly there. And hopefully the local LLM, if people would also like that. ❤
This is an amazing integration. Thank you to the developer, and thank you, Bearded Tinker, for letting us know about it.
The timing of this video is perfect. I was at the HA get-together at the GitHub office this past weekend. Paulus and Frank were there, and during the discussion session another participant and I talked about having a function that did something similar to this AI assistant. We said that we would like a function that looked at usage patterns (that is, who did what, with which entities) and made suggestions about automations. The AI Suggester is a good first step. I believe I will get involved in this project. Thanks.
Awesome!!! Hope you had a great time! I organised a local meetup here with Paulus in July this year - it was a blast (plus an early spoiler of the upcoming voice assistant device).
Yes, this one is a great start, and I know that they were also looking at this: how to bring AI into HA to help with automation.
We were shown the same new device. It was completely working.
Hi!
Thanks for the video! Very interesting topic. To be honest, I've started preparing a similar project, but I'll hold on for a sec as this one seems mature enough. Not sure about the part which says it "will suggest automations based on newly added devices". What about existing ones, should I re-add all my integrations?
I hope the project moves through all the planned phases - well written, it may be a game changer for Home Assistant, which is far from being ... intelligent. Also, local Ollama / LM Studio support would be a nice-to-have if this solution at some point needs to send a huge amount of data from the SQL database logs to the cloud.
Not sure how this will work with an LLM, but for this type of integration speed is not critical - I wouldn't care if it took hours to chew through the data, as this is not a voice assistant that requires fast responses.
Did you start working on your own version, or were you still in the planning phase?
I have some thoughts on logging newly added devices and existing entities so that we can segregate this type of request. I will add it to the issues log to be worked on. Loving all these ideas to make this integration better - Thank you
Thanks for the video! I have never gone down the path of getting an API key. I see a lot of different models. Do you have a recommendation for someone new to API keys, please?
The OpenAI API keys operate on a pay-as-you-go model, meaning that more requests increase the cost, but it's only a matter of pence. Another option is to use local LLMs, which involve only compute costs; since speed is not a critical factor for this custom integration, they are well suited to this use case. I plan to enhance the custom integration over the next few weeks.
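A rough, hypothetical illustration of the cost, assuming GPT-4o-mini pricing of around $0.15 per million input tokens and $0.60 per million output tokens (do check the current rates): a suggestion run that sends about 5,000 tokens of entity data and gets back about 1,000 tokens would cost roughly 5,000/1,000,000 × $0.15 + 1,000/1,000,000 × $0.60 ≈ $0.0014, so well under a penny per run.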
@grahamahosking Thanks so much! I made an account and will try the install later today.
How does the billing work with these things? Presumably it doesn't bill 1c (or whatever) for each query. Do you pay an amount up front that gets drawn down with each query, then top up when it gets low?
You can top up your OpenAI account with an amount of your choosing, and each query will then draw from it. It's important to set limits on your OpenAI account, but the amount will last for a long time.
It keeps telling me the entity is unavailable. I tried with a new API key and also checked the funding.
Do you have any errors in the log file?
You can try adding this to your configuration.yaml file:
logger:
  default: warning
  logs:
    custom_components.ai_suggester: debug
    openai: debug
Also, in the troubleshooting section there is a part related to a dependency - github.com/ITSpecialist111/ai_automation_suggester?tab=readme-ov-file#troubleshooting
I don't see my answer (again) - do you see anything in the log files?
You can try enabling debugging for this integration and check if there is anything in the logs too... It looks like it is not loading up. There is only one prerequisite - maybe that's the problem.
Check the troubleshooting section on GitHub... And the logs, check the logs.
With the help of the developer, Graham, I was able to get it working. We deleted the integration from HACS, deleted the ai_automation_suggester directory in custom_components, and restarted Home Assistant. Then we reinstalled the integration, used the new API key, and everything worked as expected. Great video and great integration.
Now we get Skynet in HA 😄
Skynet lives forever ;)
The end goal is to always have a human involved in the process. However, the Suggester could generate the YAML automation for you, allowing you to simply press "GO" to execute it. I'm exploring ways to help people create exciting automations without requiring a lot of up-front effort. While it's rewarding to develop automations from scratch, it's also beneficial to understand the range of possibilities and discover features you might have overlooked ;) Skynet rulez
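To give a feel for what pressing "GO" could mean, here is a hypothetical sketch of the kind of automation YAML the Suggester might hand you; the alias and entity IDs are made-up placeholders, not output from the actual integration:
alias: Hallway light on motion after sunset (suggested)
trigger:
  - platform: state
    entity_id: binary_sensor.hallway_motion  # placeholder entity
    to: "on"
condition:
  - condition: sun
    after: sunset
action:
  - service: light.turn_on
    target:
      entity_id: light.hallway  # placeholder entity
mode: single
You would still review and tweak it before saving, keeping a human in the loop as described above.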
So if you have ChatGPT Plus, would you still get charged?
I would say yes, but I'm really not sure. API tokens are different from the plain ChatGPT subscription.
The ChatGPT chatbot is charged separately from access to the APIs. API access is charged on a pay-as-you-go basis, which isn't very much at all for GPT-4o-mini :) Other models will be available soon.
@ thanks! So even if you have the subscription it's only the first part, no API?
@michaelthompson657 no, no API access on the chatbot side. Head over to the OpenAI playground to set up your usage.
@ no problem, thanks!
So for the free plan this is not working?
I don't think so - API access is different from the free account AFAIK.
Would be cool to call out to "free" services, but that's not how the models work today. I'm working on integrating with Ollama for local model access, which would then be at no cost for open-source models.
Will this work with Gemini?
Yes, the latest version works with these AI providers: OpenAI, Anthropic, Google, Groq, LocalAI, and Ollama, using their AI models and APIs.
@ thank you
Sorry, but as soon as you said it has to have access to an external "AI" processor, I lost interest, which resulted in a thumbs down.
The whole idea and core purpose of home assistant is for local control. I won't have a 3rd party having access to devices on my network.
Local access to the LLM will be available in the coming weeks, allowing for complete local control. I understand that Home Assistant users prefer everything to be local. However, for the initial version of the integration, it was simpler to build on public APIs for ease of use. Local access is part of our roadmap, so please be patient.
*Watchman report - Missing Entities: 1*
🧔beard.bearded_tinker [missing]
😂 very short one, but there will be a video on that too later this year 😂😉😉
Missing Beard Integration - Repair?