You 100% deserve more subs. As an IT systems engineer who just bought his first house, I was about to embark on the daunting task of "smartifying" my home using 100% DIY/homebrew resources. I didn't really feel like allowing Google or Amazon to monitor my home environment. I was about to spend my entire night blueprinting and researching this. I was astonished to see that you thought the same as I did and already did the hard part! Much appreciated!
I am in the same boat rowing next to you and away from any AI that is not in my control. I am fully expecting to come up on charges in the future for having unplugged Alexa the other day.
Lots of people are reporting that they don't have a volume called "localstorage" in the Portainer dropdown box. I may have created it without realizing while testing different things out. To create a volume, in Portainer select Volumes -> +Add Volume -> then put localstorage in as the name and hit Create Volume. Edit: I've discovered that an unclean shutdown (power loss, for example) can corrupt the camera configuration in AI Tool. I'd recommend copying the "cameras" folder to a second location in case this happens.
Awesome as usual. @15:38 on the Trigger URL: I misunderstood this to be the trigger that initiates the AI, not the trigger that starts the HD recording. I was using the SD URL and couldn't figure out how to get the HD recording. Also, you can include multiple Trigger URLs to start multiple cameras recording.
Hey, awesome video, many thanks. However, when setting up DeepStack in Portainer and clicking Deploy, it hangs for ages and then throws an error about the image not existing. Any ideas?
This vid is amazingly detailed with information, and without any of the useless chit-chat other YouTubers fill their 30-minute vids with. Great work Rob! Your style is inspiring!
Best "YouTuber"? Not sure there's such a person/channel. Best on the topic of home security & smart home? Well... lemme put it this way. YouTube has been a consistent win for me researching topics related to a whole whack load of projects, but on the home S&S front, useless. Every channel I've found till now is pretty useless, basically a shill for sponsors; they're all pretty much the same and treat the project as a novelty. It's taken YouTube a good 6 months to recommend/return-from-a-search your channel. Granted the internet is full of garbage, but I know quality when I come across it. I'm gonna start my binge watching of your channel, and I'm quite optimistic I will learn what I need to learn to get my project done.
The way you explain and break down things! Your students will be so intelligent. I wish I had such a teacher. All the way from Nigeria, I've watched ALL your videos.
I don't know about intelligent, but FOR SURE WELL EDUCATED. Rob has what it takes to teach; he has an awesome way to keep you focused and engaged on his videos.
@@AussieInSeattle - I can so relate to your comment. I had a multitude of teachers when I was in school. There were teachers who actually cared about you learning, and would try whatever they could so that you would learn. In my case it was Dr Lipkin. It has been a long time since I left school, and I think of him every day. He told us every time the class was over: "If you didn't learn anything new today, you have wasted your day".
Installed it myself without any issues. The wifi signal is great even when the router is located far away. I really enjoy the night vision feature that allows me to see clearly any activity near our house (mostly cars and wild animals). The motion detection is helpful to me as well to monitor what happens out front.
You are the Best! I've been looking for something like this for years!! It would be GREAT if you could do an annual update on this system, to talk about changes and updates.
Uuuf! Wow! Hats off! This was one of the most complicated topics I’ve ever seen made so simple that I believe every advanced Home Assistant user should be able to repeat it. I’ll for sure be doing this. I had thought that in about a year I’d get something like this together, but I hadn’t found the tools which you found. Respect! I love your videos, please continue making such good content! Thanks a lot!
I just realized that DeepStack wasn't some cloud service and could be run on prem... that's a game changer! I have basically the same setup as you describe here, and am actually writing this from my camera server since I forgot to log out of RDP, lol. Thanks for making this easy to follow tutorial!
Holy moly I just installed Home Assistant the other day, had to troubleshoot, and found your channel. *musical interlude: A WHOLE NEW WORLD*... seriously, thank you so much for making all these videos. I know it's gotta be a ton of work, but it's a godsend for those of us tinkering around in this crazy pandemic era! Thanks so much!
Thanks a lot. This video was what I needed, as I was hesitating between buying a prebuilt NAS (like Synology) and an older desktop PC. Now that I've seen the power of Blue Iris, I decided that I need a desktop PC for my needs. Thank you so much.
I am in the same situation... For local AI detection capabilities on a Synology you'd have to spend quite a buck to reach the necessary specs. Synology's own local AI software is a thing, but can only be found on 3 of their best models.
Ok, I've been running this now for about 3 days and I can't overstate how much of a positive difference this has made on the reliability of my triggers. Rob, this is, without question, the most impactful video so far. I had been thinking about deepstack for a while but just hadn't made time to dig into it. Your links and info for AI Tool has saved me countless hours trying to come up with something similar. I still have that work to do for the inside monitoring but this has made a big difference for people detection outside.
How did you get the triggers to work? I’m sending: localhost:81/admin?camera=GrgFrtRt&trigger&user=anadminuser&pw=correctpassword The camera doesn’t trigger, and the log shows AuthFailed: {"reason":"account disabled"}. I did find that if I enable the Anonymous account, with or without a password, I then get an "Authorization required" message in my browser (text only, not a pop-up dialog). I'm copying and pasting all info so there aren't any typos. I see the AI bit is working and detecting, but I've thus far been unable to get the trigger working.
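For anyone scripting this: a trigger URL of that shape can be assembled like so. This is just a convenience sketch; the camera short name and credentials below are placeholders for your own, and `trigger` is a bare flag with no value, so it's appended outside of `urlencode`.

```python
from urllib.parse import urlencode

def build_trigger_url(host, camera, user, pw):
    """Assemble a Blue Iris admin trigger URL like the one quoted above.

    host:   e.g. "localhost:81" (Blue Iris web server host:port)
    camera: the camera's short name as configured in Blue Iris
    user/pw: a Blue Iris user with admin rights
    """
    # Regular key=value parameters are URL-encoded...
    params = urlencode({"camera": camera, "user": user, "pw": pw})
    # ...while "trigger" is a valueless flag, so it's tacked on separately.
    return f"http://{host}/admin?{params}&trigger"
```

Building the URL in code also rules out copy/paste typos in the credentials, which is one common cause of AuthFailed responses.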
Limit decoding (this is taken from the Blue Iris help document for version 5):

The option to Limit decoding unless required is another way to manage CPU resources. When enabled, only key frames are normally decoded and displayed. A key frame is a "complete" frame; all other frames rely on key frames in order to be rendered, as they contain only the "changes" from frame to frame. When you select the camera in the main window UI, or if someone is viewing the camera (or one of its groups) via a client app, then all frames will once again be decoded for display.

This CPU-saving scheme works great as long as your camera is actually sending an adequate number of key frames. It is recommended to have about 1 key frame/second coming from the camera. This is a setting in the camera's browser-based settings, usually under a "video encoding" section. It may be labeled as "key frame rate" or "i-frame interval", for example. You can view the actual rate on either the General page in camera settings, or on the Cameras page in Status. It is shown after the overall frame rate; for example, 15.0/1.0 indicates 15 fps with 1 key frame/second. A value of 0.5 or less is considered insufficient to use this feature.
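If you want to sanity-check several cameras in bulk, the rate string from the Status page can be parsed with a few lines of Python. This is a quick helper based on the help text above, not part of Blue Iris itself:

```python
def keyframe_rate(status_str):
    """Parse a Blue Iris rate string like '15.0/1.0' (fps / key frames per
    second) and report whether the key frame rate is enough for the
    "limit decoding" feature.

    Per the help text: ~1 key frame/second is recommended, and a value of
    0.5 or less is considered insufficient.
    """
    fps, kfps = (float(x) for x in status_str.split("/"))
    return kfps, kfps > 0.5  # (key frames/sec, sufficient for the feature?)
```

So `15.0/1.0` parses as sufficient, while the `0.3` ratio mentioned further down in the comments would be flagged as too low.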
Great work, sir. Blue Iris just got another customer from your in-depth and ridiculously helpful video. Thank you very much for going through each step of the setup. Followed to a tee and it's up and running.
I've been sitting on the sidelines for months wondering if I should switch from Surveillance Station to Blue Iris, and this video pushed me over the edge. I bought a used i7 from eBay, installed a fresh copy of Win 10 and Blue Iris, and followed your directions. I've never been happier with my camera setup! Thank you. I did want to point out one thing that tripped me up: when setting up the motion events tab, "Object Detection" under the Advanced section was ticked by default. If you leave this enabled, BI is presumably going to filter out motion events that it doesn't think contain real objects. I was confused why so many of my events were getting missed until disabling this checkbox. Hopefully this helps someone.
Wow your videos are always amazing Rob, but this one is something special! Thank you for all the work in condensing all those steps into a single, concise video!
Excellent video. I already have most of the same tools running but picked up some good ideas from how you've set things up. Two comments: "limit decoding unless required" is definitely not something to check just because it sounds good; it can cause you to miss motion events (see the Blue Iris help for info). Also, the AI Tool cooldown doesn't work the way you think. It doesn't stop analyzing images during the cooldown; it continues to analyze every image and simply doesn't fire the trigger again for subsequent events that happen within the cooldown time. Again, misuse could cause events to be missed.
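To illustrate, here's a toy Python model of that cooldown behavior. This is an assumption-laden sketch of the semantics described above, not AI Tool's actual code: every event is still analyzed, but the trigger only re-fires once the cooldown window has elapsed, so detections inside the window are silently dropped.

```python
class CooldownTrigger:
    """Fires at most once per cooldown window, but analyzes every event."""

    def __init__(self, cooldown):
        self.cooldown = cooldown      # seconds between allowed trigger fires
        self.last_fired = None        # timestamp of the last fired trigger
        self.analyzed = 0             # count of events that were analyzed

    def handle(self, t, detected):
        """t: event timestamp (seconds); detected: did AI find an object?
        Returns True if the trigger fires for this event."""
        self.analyzed += 1            # analysis always happens, cooldown or not
        if not detected:
            return False
        if self.last_fired is not None and t - self.last_fired < self.cooldown:
            return False              # suppressed: event inside cooldown is lost
        self.last_fired = t
        return True
```

With a 30s cooldown, a detection at t=0 fires, a second detection at t=10 is analyzed but suppressed (and never recorded), and one at t=40 fires again, which is exactly the misuse risk mentioned above.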
Please do a follow up video showing how you turn this into texts saying "Person in back yard" with image. Or "Dog in front driveway" with image. Please!!!
It's possible, but not as is with what Rob shows. There's a fork of AI Tool that saves the last triggered snapshot as the "camera" name. Just set up a "camera" for each type of interest. Then with folder_watcher in HA you can trigger different things based on which image updated. It would be nice if the AI Tool could send an MQTT packet directly to HA.
Check it out here: kleypot.com/motion-event-notifications-with-locally-processed-ai/ The trick is, his mod keeps a jpg for each "camera" of the last triggered image. Make as many "cameras" of each camera as the individual detections you want. The other thing that hung me up for a while was missing the detail about folder_watcher needing the jpg folder whitelisted as an external dir in HA. Maybe there's a way you can send HA this info directly as another URL?
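If you'd rather see what folder_watcher is doing under the hood, the "which snapshot changed" check is easy to sketch yourself. This is a hedged example, not HA's implementation: it polls file mtimes in a folder where each jpg is named after one of the fork's per-detection "cameras".

```python
import glob
import os

def changed_snapshots(folder, last_mtimes):
    """Return the 'camera' names whose snapshot jpg changed since last call.

    folder:      directory of per-"camera" jpgs (e.g. person_frontdoor.jpg)
    last_mtimes: dict carried between calls, mapping name -> last seen mtime
    """
    changed = []
    for path in glob.glob(os.path.join(folder, "*.jpg")):
        name = os.path.splitext(os.path.basename(path))[0]
        mtime = os.path.getmtime(path)
        if last_mtimes.get(name) != mtime:   # new file, or overwritten snapshot
            last_mtimes[name] = mtime
            changed.append(name)
    return changed
```

Each returned name tells you which detection type just updated, which is exactly what you'd key your HA automations on.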
@@tgschaef Hi, I'm a bit of a newbie at this, but I am currently using AI Tool 1.67 ver 7 and have it working well. But I don't understand how to install this new fork. Is there a different download that I can run?
I’ve gone in a third direction on this. I liked this third one because it sends an MQTT message directly to Home Assistant with all of the detection details. Then I have logic in Home Assistant for whether it should trigger the HD recording. With the full details you can do things based on "dog detected in driveway", "person on front porch overnight", etc. I had a few hiccups getting it to work; let me know if you have problems. The biggest thing that took me a while to realize is that you run it in a Windows Docker container, including the DeepStack container, which is nice so you're not doing unsupported things in the Home Assistant VM. github.com/danecreekphotography/node-deepstackai-trigger
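The HA-side logic can be as simple as this Python sketch. Note the payload shape here is my assumption of a typical detection JSON (a camera name plus a list of labeled predictions), not the tool's exact schema, and the camera names and rules are examples:

```python
import json

def should_record(payload):
    """Decide whether a detection message warrants triggering HD recording.

    payload: JSON string assumed to look like
      {"name": "driveway", "predictions": [{"label": "dog", ...}, ...]}
    """
    msg = json.loads(payload)
    labels = {p["label"] for p in msg.get("predictions", [])}
    if "person" in labels:
        return True                               # any person triggers recording
    if "dog" in labels and msg.get("name") == "driveway":
        return True                               # dog, but only on the driveway cam
    return False
```

In practice this sort of function sits behind an MQTT trigger in an automation (or a Node-RED function node) and decides whether to call the HD camera's trigger URL.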
This is EPIC. I recently bought a Nest Cam and really dislike the lack of customizability (alerts because it's raining -_-). Truth be told I didn't know anything about Home Assistant... I'm slowly going through your videos and realizing I can return those nest cams and get something way more robust!! ... time to build that dedicated pc... Keep up the good work!
I've been completely Windows-free for 5 years after 20 years of wanting to be here - I'm not going to end that streak now. Technically I do have a Windows machine - an old junker I use to program ham radio gear - but that's not connected to any network; it's a single-purpose machine.
@The Hook Up I just relocated my HASSIO and had to redo Portainer, and deepquestai no longer works with the AI Tool out of the box. There is a custom Docker image called deepquestai/deepstack:noavx that works using the configuration you showed here. deepquestai/deepstack:latest is now invalid and all the ones listed do not work. Hope someone finds this when it doesn't work and it saves them some time; I spent hours troubleshooting this. Please consider an update to this video using the latest AI Tool, which changes how triggers work and looks pretty cool.
@@grahametobin5297 So it looks like you could fairly easily integrate it with Node-RED. The URL that AI Tool calls can now have all sorts of extra information included; you could have AI Tool call the Node-RED endpoint and then have Node-RED call Blue Iris.
I stumbled across one of your other videos on cameras and found this. You have a new subscriber here. This video was awesome and answered so many of my questions and your setup is absolutely what I want to do. You'll also have a new patreon subscriber in me. The amount of knowledge you have in this area is impressive!
Great video for viewers with quite advanced computing skills. For those without, like myself, I find that SightHound can do a lot (but not all) of this from quite a dated laptop. It was relatively intuitive, reasonably priced, and I have found the support team very responsive. No affiliation by the way, as I am in Australia. Just passing on my experience for those with less computer skills than those required to perform the process outlined in this video.
Thanks for this video. I just got this working using version 1.67 of Gentle Pumpkin's program. Many things have changed so you need to read his instructions to get it to work now. It works on the main camera stream so is now very resource hungry.
After following your guide, I have come to wonder... it actually looks like the HD camera is not doing anything; it is basically not being triggered. Only the SD camera is triggered, so no alerts will occur from the HD cameras. The AI is analyzing fine, but no signal is being sent to the HD cameras, so there is only low-res video and no HD recording going on. And as the HD cameras are not triggered, no message will be sent over MQTT. So it looks like some part is missing in the video. By the way, "Limit decoding unless required" makes the realtime stream stop, so you will only get an image every 5 seconds (as you set the jpeg trigger interval to be), so to get live images it needs to be unchecked. Edit: I just realized that the trigger URL needs to be the URL for the HD camera, and not the SD camera. Might be an idea to add that in the description. My HD cameras are now recording when the AI triggers them.
I did notice that his keyframe ratio is pretty bad at 0.3; it should be at least 1 for BI to record stuff correctly. This can be changed in the camera's GUI.
I too discovered that you have to use the HD camera trigger URL in order for the AI server to notify BI to record on the HD camera stream. Had to re-watch this a few times to make sure I didn't miss anything in the video about it. I can't get the MQTT message from BI to update the binary sensors. The text file shown at about 18:29 has seemingly shortened topic names; I don't understand where "BI/AI/" comes from. He has "BlueIris" as the client ID on the BI MQTT config screen and no "AI" anywhere on that screen. I must be missing something.
@kent You are the man. After pulling my hair out for two days trying to figure this out, I decided to read the comments on the video since Google was not helping. This post here was the missing sauce. Thank you. @Rob should probably update the notes of this video, because he flips through this portion real fast.
I didn't see where/how you set up the 4K resolution recordings after disabling the motion triggers. You said at 11:28 we would let the AI trigger that, but then you never showed how to get the 4K recording started from the AI Tool results. Appreciate the video, been a huge help!
Nevermind, I figured it out, in the Trigger URL you use the HD camera's URL instead of the SD camera's url - Not sure you explicitly said use the HD camera's URL, had to read your pasted link to figure it out. Will leave this comment here in case others are confused too. Thanks a bunch!
Absolute legend. Taking inspiration, rebuilt a fresh Win10 VM and stood this up in about 2 hours with 12 cams. Very impressed, even with Windows bloat, it runs with less CPU than my equivalent Linux VM with the same cams. Best of all Motion detection is finally accurate!
Thanks for the info on how to set this up; I went from nothing to a full custom AI surveillance system in 2 days. I can't wait for the bear that has been causing trouble in my chicken coop to return so I can get some good video of my dog putting his ass in check.
It seems that at least in my case Blue Iris is extremely temperamental when it comes to creating the JPEGs for whatever reason... I'm having to restart the program every time I change a camera setting because it stops creating JPEGs. EDIT: It's temperamental with creating motion events at night in general. I'm definitely having to tweak motion settings, and I completely turned off Object Detection in BI, which helped.
Not sure if things changed, but I've been using BI for 2+ years now and haven't had issues with it creating JPEGs. As of the recent update just this month, BI added a TON of new features like (FINALLY) timeline on the web browser view mode, so try it again maybe ?
Rob, first things first - great video as always. The Blue Iris setup is very informative for new users, and it's nice to see new integrations with camera vision. I'm personally looking into this more in my own setup! That said, a couple of pointers for Blue Iris: 1) For each of your cameras, un-check "Enable overlays" in the Video tab, as you are forcing Blue Iris to re-encode the video to add the overlays (camera name, timestamp, etc). Most modern cameras have this ability out of the box, so enabling it there instead will offload that work to each of your cameras and greatly reduce Blue Iris's workload. 2) "Limit decoding unless required" will only decode the video's key frames, unless all frames are required (such as in the case of overlays or transcoding). 3) Side note: don't forget audio! It's hardly any extra space, and a big aid to the visual. Keep up the great work, stay safe!
I don’t think enable overlays does anything if you have direct-to-disk enabled. I only enable overlays on export, which re-encodes anyway. Edit: I’m testing this out right now.
Okay, tests complete. It appears that adding overlays to all 18 cameras (9 cameras x 2 streams each) increases the total CPU usage by about 9%, which is strange because that's about the same amount of CPU increase I experience from a single re-encode. I wonder how the overlay path is so much more efficient than a full re-encode.
Upon further investigation: I enabled overlays on the cameras themselves and the 9% CPU savings went away. I think it is just caused by the increased number of changing areas that needs to be rendered in UI3, doesn't matter if BI is inserting them, or if they are hardcoded into the RTSP.
@@TheHookUp Very interesting, and a good train of thought! So I just went and tested a theory on my own instance. My CPU% numbers are capped to the "most common 5% range", and they are all with one web UI user connected and also the console up (Remote Desktop connection):
- 35-40% with overlays disabled on both Blue Iris and the cameras themselves
- 36-41% with overlays disabled on Blue Iris, but enabled on the cameras
- 38-43% with overlays enabled on both Blue Iris and my 6 Reolink cameras
My theory is that your "Blue Iris Evaluation Version" banner is in effect an overlay, so I deactivated my installation and reverted back to the evaluation version, which yielded:
- 42-47% with overlays disabled on both Blue Iris and my 6 Reolink cameras, but in the evaluation version
I went back into Blue Iris and re-registered the machine, left the overlays enabled in both places, and quite literally the second after I clicked "Finish" and the "Blue Iris Evaluation Version" overlay went away, my CPU% went right back down to 37-42%. That said, I am fairly confident that the difference is the overlays being drawn for users actively browsing the interface. While this is not a typical situation, it's something that does happen from time to time, as I've reverse-proxied Blue Iris through a hardened Linux box for remote access (as opposed to forwarding ports to the Blue Iris web server, which I can't vet, or opening everything up to China servers). All of that said, even a 1-2% difference by turning off the overlays in Blue Iris and letting my cameras draw them is better than nothing, I suppose. It's likely just the difference in text size; however, I personally enjoy the Reolink font and don't mind the watermark as free marketing since they're such awesome cameras, and in the end I think I'll leave the overlays to the cameras. :)
You sir are amazing; I think I am just failing tho xD I tried to install Home Assistant like your other tutorial with the VirtualBox VM but couldn't get it to work at all (don't worry, I followed some other guide I found and got that working). And now I can't get DeepStack to work. First I had a problem with the local storage thing, but I saw you wrote about it in the comments down below. So I tried to use the Windows program, and Docker on my Synology NAS, but ran into the same error on both: in the AI Tool it's stuck on Processing, and in the logs it says it can't connect to DeepStack. I'll just keep trying, and you keep uploading great guides!
This may help others following the instructions: 1) See Rob's comment about creating a Portainer volume, as it doesn't show in the list when creating your datastore (here's his comment in case you can't find it: in Portainer select Volumes -> +Add Volume -> then put localstorage in as the name and hit Create Volume). 2) When I hit Deploy on the DeepStack container, it would fail with an error that didn't really help ("no such image"). After a lot of trial and error I realised this was due to low disk space (Hassio). My Hassio was given 6GB (capacity predetermined by the Hassio VHDX I downloaded when I installed it). I was running in a virtual environment (Hyper-V), so I expanded the disk to 12GB (really I could have given it 128...) then booted Hassio into Gparted (download the ISO from gparted.org/download.php). I ran Gparted, hit "Fix" when it started up, then resized sda8 to fill the newly expanded drive. After a reboot, the problem was solved.
Thanks Matt. I had the same problem. I ultimately fixed it by doing a bunch of things: 1) getting rid of all the snapshots on my VirtualBox VM image, 2) increasing the size of the Home Assistant VM drive to something much larger, 3) using Gparted to increase the size of the HA partition on the VM drive (only possible after the previous step). This still didn't solve the problem, so I then 4) downloaded the DeepStack Docker image using the Home Assistant command line (see other comments re this below) rather than Portainer, then set up the DeepStack container using Portainer once it had been downloaded (but unticking the "Always try to pull container" option in the Portainer setup). This got the container going, but then I had to increase the RAM in my VM from 2 to 4 GB to get the DeepStack server to work; otherwise I got the "Cannot access server" error from the AI Tool when I got to that stage. I'm not sure which of the above steps are required; I would try using the command line to download the container before resizing the drive.
Did you mention the hardware, specifically poe adapter and how you have your hardware and network set up in this or another video? You mentioned the pc specs but maybe I missed that part. Thanks!
WOW. I had to watch this video about 5 times, but I have now set it up and it is working perfectly. I have also setup flows in Node-RED to trigger lights based on motion detected and send notifications to my iphone via HA - very few false triggers. I would love to see you do a video on the Node-RED side of this. I'm signing up for your Patreon now - this video is absolute gold. Also - I love the pace of your videos - they make my head spin but your instructions are precise and concise. Thanks for all your hard work.
Thanks for the howto! I've run into an issue where this is reported in the AI Tool Logs "Can't reach DeepQuestAI Server at 192.168.0.2:89" - I can confirm that I can hit the URL on the same machine using a browser and I get the DeepStack screen which says "Your DeepStack Installation is Activated. " I've got no idea why this isn't working?
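One thing worth ruling out first is raw TCP reachability from the machine AI Tool actually runs on (a browser working on one machine doesn't prove another process or host can connect). A quick Python check; the host and port below are just the ones quoted in the comment, adjust to yours:

```python
import socket

def deepstack_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    This only tests network reachability (firewall, binding, wrong port),
    not whether DeepStack itself is healthy.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (host/port from the comment above):
# deepstack_reachable("192.168.0.2", 89)
```

If this returns False from the AI Tool machine while the browser on the DeepStack host works, the usual suspects are the Windows firewall or the container only listening on localhost.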
One note for others... Rob sets his "Combine or cut videos into" to 1-hour intervals. It wasn't till today that I noticed what that actually did: it will take all clips recorded in a particular hour and combine them into one video. If you want more of a one-video-per-event setup, lower this number. Also, another setting you might want to play with is "Cut video when triggered with break time".
Great video, would love to see a follow-up getting into Node-RED and specifically sending iOS notifications when a person is detected. I implemented what you have here, but am struggling to get my cameras to trigger motion at night and to get the iOS notification to work with the picture.
Great video! Will have to try this out. There is also a good Home assistant custom component for blueiris that I’ve been using which also runs with MQTT. Might be worth checking out and incorporating into this!
Wow, this was super informative. I'm somewhat invested in a different NVR (Syno), but the pipeline to DeepStack is actually very smart and I can probably adapt it. This was immensely helpful.
+1 Would love to be able to do this, but the need for Windows is a no-go for me. Been messing with MotionEye, but haven't been able to get it to work reliably yet.
I have something working; it's by no means polished. Just using motion alone: I let motion do its motion detection on the low-res stream and save the images to disk, then I take 1 frame from every second and feed it to DeepStack. If all the sampled frames from that motion event come back with nothing detected, the event is deleted. Which now has me wondering if I can leverage my Intel Neural Compute Stick for this fun and games too :D As I'm currently using the Swann NVR (and in fact am pulling streams from it), I would probably end up deleting all the footage and just noting the times a "person" is visible at the front door so I can look it up in Swann (cos there's no API :(), but once we move it might be worth looking at setting up a nice UI. I was just looking to use UniFi stuff, but this is interesting. Shame about the Windows requirement.
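That sampling/cleanup step boils down to something like the sketch below. This is my own illustration of the approach, not the commenter's code: `detect` would wrap a call to DeepStack's `/v1/vision/detection` endpoint, but here it's just a callable so the sampling logic stands alone.

```python
def filter_motion_event(frames, detect, sample_rate=1.0):
    """Decide whether to keep a motion event by sampling ~1 frame/second.

    frames:      list of (timestamp_seconds, image) tuples from the recorder
    detect:      callable(image) -> bool (e.g. a DeepStack detection wrapper)
    sample_rate: minimum seconds between sampled frames
    Returns True if any sampled frame contains a detection (keep the event),
    False otherwise (safe to delete the event).
    """
    last_sampled = None
    for t, img in frames:
        if last_sampled is None or t - last_sampled >= sample_rate:
            last_sampled = t            # this frame is one of the samples
            if detect(img):
                return True             # keep: something was detected
    return False                        # no detections in any sample: delete
```

Note the trade-off: a detection that only appears between samples (like the second frame in a half-second gap) is missed, which is the price of sending fewer frames to the detector.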
Finally got around to replicating this in Blue Iris... you can definitely do this with motion and a couple of tools; you just need a nice UI wrapped around it, which for me is the hard part.
Brilliant video, brilliant channel. Wish i found it years ago. Just moved to my new house and it’s time to think about a surveillance setup again. I’m pumped.
I haven't done much with MQTT but could not get this working following your example. Since I have a camera named "front yard" I just put in exactly what you had. I finally figured out that the topic and payload should be exactly the same in BI and HA, not completely different like your example.
What a great setup and breakdown of its components. I was about to go for Synology's surveillance system, but I was holding back due to the licensing costs; then this review came up... thank you so much.
I just ran across this while looking at Home Assistant integrations. I'm currently testing out a SimCam Alloy 1S camera with simple AI like this one to detect vehicles vs people vs animals, etc. The camera itself only has 1080p, but one plus for it is basic 2D facial recognition that you train with a smartphone. Essentially, you can separate alerts of an unknown person vs you.
I have been researching this for months. I had seen AI Tool before but was not a Blue Iris user. Once the video came out it seemed so accessible that I bought BI and set everything up. You should ask for an affiliate link, dude! The only thing I am struggling with is that the motion detection is not consistent. I can walk around for 10 minutes in front of the camera and it won't create a motion event, i.e. no jpeg in the AI folder. I am sure once I figure it out it will be amazing!!! Thank you!
Thanks a lot Rob. I followed your instructions and got the software side up and running. The most difficult part for me was typing Annke... Ankee? Annkee? Ankke? correctly.
Blue Iris can send an MQTT message "on reset" i.e. when the motion detection trigger is over. That way you don't need to set an "off_delay" in the binary_sensor definition in HA. Instead, set a "payload_off" and Blue Iris will set it back to "off" if you tell it to.
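On the HA side that looks roughly like the fragment below. This is an example only; the topic and payload strings are placeholders and must match whatever you put in Blue Iris's "on trigger"/"on reset" MQTT actions:

```yaml
# Example HA MQTT binary sensor; topic/payloads are placeholders.
binary_sensor:
  - platform: mqtt
    name: "Front Yard Motion"
    state_topic: "BlueIris/frontyard/motion"
    payload_on: "ON"     # sent by BI's "on trigger" action
    payload_off: "OFF"   # sent by BI's "on reset" action, so no off_delay needed
```

With `payload_off` handled by BI's reset message, the sensor tracks the actual trigger duration instead of a fixed timeout.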
Yeah, I think it ends up working about the same. The difference is it makes my life a little easier in node-red if it only sends the positive messages so I handle it that way.
Thanks so much for this! I've had Blue Iris for a few years and had given up on tweaking motion trigger settings to reduce the number of false alerts. DeepStack seems like a great solution! I went through your guide and got to the end. The AI Tool URL was triggering images correctly, but I realized it would only trigger when I refreshed the URL in my browser. I started digging into GentlePumpkin's AI write-up and noticed that I was using AI Tool v1.67, which apparently has a different setup than v1.65 and below. I tried following GentlePumpkin's write-up (it seems like v1.67 doesn't set up duplicate SD & HD cameras?), but now the AI Tool URL makes the sound as if it triggers but it's not taking images. I'm a little stumped, as your guide was extremely comprehensive; however, I think GentlePumpkin wasn't quite as in-depth with his instructions. I'm curious if you've upgraded your AI Tool to v1.67 and if you had any issues?
I saw another post on here talking about increasing the VM's RAM. This corrected the issue for me; however, I'm still having a problem getting the HD recording to either be triggered or to actually record.
@@arkis194 I figured out the problem. I had multiple profiles setup previously that were conflicting because if you have profiles setup, you have to configure each camera for every profile that you want to use Deepstack & AITool.
@@arkis194 I'm stuck at this part. I've got the DeepStack screen, and when I put the query string into the URL I get the response that Rob shows in the video. I believe I installed the latest AI Tool. Can you be a little more specific on the details of increasing the VM's RAM? Thank you.
I can tell you had a lot of fun setting this up and making the video! You have to know it turned out great! With most of these videos I'm left with questions, but with yours the only thing I'm left with is wanting to do it myself! Perhaps an ALPR video to detect if any car that's not yours enters your driveway is next? I've fiddled with OpenALPR before and really liked it!
Unfortunately, Florida laws have ruined any chance I have of using ALPR because we don't require front license plates, so unless someone decides to back into my driveway I'll never see their plate.
Tuya are shafting Home Assistant users too ! What you might not know is that Tuya recently upgraded the firmware on their devices. This is preventing over the air flashing from being done with tools such as Tuya convert. Yes you can physically open the plugs etc and flash the ESP chip over serial, but with bulbs etc that's not really practical. The only player I can recommend at the moment is Sonoff. They have a DIY mode on newer devices especially for allowing you to install your own firmware such as Tasmota.
Amazing video! I am about to build my first home system and your videos have helped so much. Decisions, decisions, lol. I'm going for a decent quality image and low cost, and I want to use an existing computer, because I have so many that I have built and acquired that are quite decent but need a purpose. You just gave them all some great purposes, and gave me projects for all this time at home. Again, thank you for your work.
This was the first tutorial video that I've ever seen that made me want to join a Patreon... considering the climate I doubt I can stay long, but I really wanted to support this level of content.
Have been watching "The Hook Up" videos and greatly appreciate them. One request that I have is a video on best cameras for Blue Iris. Have seen lots of great videos on various systems but that is one topic I can't recall seeing.
this video was SUPER INFORMATIONAL! I liked & Subscribed to you because you have saved me tons of time and money with this video. Awesome stuff! I wish I could like and subscribe 50x times over. Thanks again!
Great video, PACKED with info!! I've been doing IT for a long time, and although I kept up with you on the integration run-through... which was awesome... I can only imagine how many times people will have to rewind or pause!!! You actually opened up a few ways that I wasn't aware BI could do... and because of some of the CPU overload issues I was having, I was moving to another platform... but I am going to double check (like I was while watching this) whether they may not work. Unfortunately BI ran much heavier than another, older Windows-based NVR, and the Linux-based NVR I am considering.
Hey Christopher, I'm curious which Linux based NVR you were considering? Did you end up going with it? I loved this video for the budget constraints and motion detection confidence, but Windows is a turn off. I'm a Linux admin, so your comment got me interested!
Absolutely the best UA-cam videos! Thank you for all that you put out there for us to learn from. Is there any chance you could please make a video showing us how to add audio to the POE security system? I'd like to build a system using the Annke C-800 cameras but I'd also like to add audio to them. I'm pretty lost on how to add it if there's only one audio input and if that's even needed running ethernet. Maybe it's possible to split off of the ethernet at the camera and tie in a microphone at that point? If that was possible could you do that with multiple cameras or would the system only be able to record one audio transmission at a time? Thanks again and I look forward to many more videos from you! God bless!
Brilliant. Thanks Rob for sharing all this knowledge and packaging it up into an accessible format (even if I'm going to need to rewatch this a few times). I've been hoping you would do this video for a while - top job!
@The Hook Up Is it possible you can share your notification automation to receive a gif or pre-recorded video in the iOS push notification? I'm able to get my snapshot notification but not pre-recorded video, like the example shown at the 3:00 mark in this video.
If you use the Blue Iris integration for Home Assistant, it sets up the motion sensors for you, plus video streams. You won't have to manually configure each MQTT sensor.
I tried using the motion sensors from BI that are exposed to HA via the integration but they are not getting updated when the external trigger URL is called by the AI server and BI acknowledges the trigger and records a clip. I even tried the external trigger sensor in HA but those aren't getting updated at all. Any tips? Nothing about BI sensors in the HA logs. Cameras are streaming fine to HA. It's just the said binary sensors. Thanks.
The cameras listed in the description are the ones you use? Really good quality. I would love to do something like this; I'd be starting from scratch with no smart home items and really have no knowledge about this stuff. Watching a lot of your videos and the ones you recommend. Great stuff! Wish I knew more.
You 100% deserve more subs.
As an IT systems engineer who just bought his first house, I was about to embark on the daunting task of "smartifying" my home using 100% diy/homebrew resources. I didn't really feel like allowing google or amazon to monitor my home environment.
I was about to spend my entire night blueprinting and researching this. I was astonished to see that you thought the same as I did and had already done the hard part! Much appreciated!
I am in the same boat rowing next to you and away from any AI that is not in my control. I am fully expecting to come up on charges in the future for having unplugged Alexa the other day.
Lots of people are reporting that they don't have a volume called "localstorage" in the Portainer dropdown box. I may have generated it without knowing at some point while testing different things out. To create a volume, in Portainer select Volumes -> +Add Volume -> then put localstorage in as the name and hit Create Volume.
Edit: I've discovered that an unclean shutdown (power loss for example) can corrupt the camera configuration in AI tools. I'd recommend copying the "cameras" folder to a second location in case this happens.
Awesome as usual. @ 15:38 on the Trigger URL; I misunderstood this to be the trigger that initiates the AI, not the trigger that starts the HD recording. I was using the SD URL and couldn't figure out how to get the HD recording. Also you can include multiple Trigger URLs to start multiple cameras recording.
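Since several of us tripped over the same thing, here is the takeaway sketched in code: the Trigger URL should be the HD camera's trigger URL so Blue Iris starts the HD recording, and multiple Trigger URLs can be listed to start multiple cameras. This is only a sketch; the host, port, camera short names, and credentials are hypothetical, and the `/admin?camera=...&trigger` format follows a URL quoted later in this thread:

```python
from urllib.parse import quote, urlencode

def bi_trigger_url(host: str, camera: str, user: str, pw: str, port: int = 81) -> str:
    """Build a Blue Iris external-trigger URL for one camera short name.

    Note: '&trigger' is a bare flag with no value, matching the URL format
    quoted elsewhere in this thread.
    """
    creds = urlencode({"user": user, "pw": pw})
    return f"http://{host}:{port}/admin?camera={quote(camera)}&trigger&{creds}"

# Multiple Trigger URLs -> multiple cameras start recording. To actually fire
# one you would fetch it, e.g. with urllib.request.urlopen(url).
urls = [bi_trigger_url("192.168.1.10", cam, "aiuser", "secret")
        for cam in ("FrontDoorHD", "DrivewayHD")]
```

Point each generated URL at the HD (not SD) camera short name, otherwise only the low-res stream records.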
Do you have a preferred POE switch you're using for the cameras? Thanks
Hey, awesome video, many thanks. However, when setting up DeepStack in Portainer, clicking Deploy hangs for ages and then gives an error about the image not existing. Any ideas?
@@pixal8d The docker image won't work on a raspberry pi, as it's not arm compatible. I ran into the same issue.
@@pixal8d community.home-assistant.io/t/portainer-issue-with-creation-of-new-container/194757/8
This might help
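If you suspect the same ARM problem as above (the stock DeepStack image was x86-only, so a Raspberry Pi fails exactly this way), a quick architecture check can confirm it. The helper names here are my own:

```python
import platform

def machine_is_arm(machine: str) -> bool:
    """True for the machine strings ARM boards typically report."""
    m = machine.lower()
    return m.startswith("arm") or m == "aarch64"

def is_arm_host() -> bool:
    """A Raspberry Pi reports e.g. 'armv7l' or 'aarch64'; a PC reports 'x86_64'."""
    return machine_is_arm(platform.machine())

if is_arm_host():
    print("ARM host: the standard x86 DeepStack image will not run here.")
```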
This vid is amazingly detailed with information, and without any of the useless chit-chat other UA-camrs fill their 30-minute vids with. Great work Rob! Your style is inspiring!
Except no hardware coverage,
So shit video
You are hands-down the best UA-camr out there. Thank you immensely for all of the hard work and effort you put into this!
I appreciate it
@@TheHookUp You are an excellent communicator, and put things into context, unlike many other reviewers that do videos like long TV commercials.
No doubt!
Best "UA-camr"? Not sure there's such a person/channel. Best on the topic of home security & smart home? Well... lemme put it this way: UA-cam has been a consistent win for me researching topics related to a whole whack load of projects, but on the home S&S front, useless. Everyone I've found till now has been pretty useless, basically shills for sponsors; all the channels are pretty much the same and treat the project as a novelty. It's taken UA-cam a good 6 months to recommend/return-from-a-search your channel. Granted, the internet is full of garbage, but I know quality when I come across it. I'm gonna start my binge-watching of the channel and am quite optimistic I will learn what I need to learn to get my project done.
The way you explain and break down things! Your students will be so intelligent. I wish I had such a teacher. All the way from Nigeria; I've watched ALL your videos.
Agreed on this. I had a very similar teacher in high school that set me on my path to success
I don't know about intelligent, but FOR SURE WELL EDUCATED. Rob has what it takes to teach; he has an awesome way to keep you focused and engaged with his videos.
@@AussieInSeattle - I can so relate to your comment; I had a multitude of teachers when I was in school. There were teachers who actually cared about you learning and would try whatever they could so that you would learn. In my case it was Dr Lipkin. It has been a long time since I left school, and I think of him every day; he told us every time the class was over, "If you didn't learn anything new today, you have wasted your day".
Installed it myself without any issues. The wifi signal is great even when the router is located far away. I really enjoy the night vision feature that allows me to see clearly any activity near our house (mostly cars and wild animals). The motion detection is helpful to me as well to monitor what happens out front.
DUDE! Really? Hands down one of my favorite channels! Always super informative, clearly described and succinct. Continue setting the bar!
Thanks!
You are the Best! I've been looking for something like this for years!! It would be GREAT if you could do an annual update on this system, to talk about changes and updates.
Uuuf! Wow! Hats off! This was one of the most complicated topics I’ve ever seen put so simply that I believe every advanced Home Assistant user should be able to repeat the steps. I’ll for sure be doing this. I had thoughts of getting something like this together in about a year, but I didn’t find the tools which you found. Respect! I love your videos; please continue making such good content! Thanks a lot!
I love how fast you go in your videos! I'm sure lots of people hate it, but that's what rewind is for!!!
Fantastic, I've been looking forward to a video like this one. Time to actually start using my BlueIris server to its full potential! Great guide!
Clearly One of THE best presenters on UA-cam, bar none
I just realized that DeepStack wasn't some cloud service and could be run on prem... that's a game changer! I have basically the same setup as you describe here, and am actually writing this from my camera server since I forgot to log out of RDP, lol. Thanks for making this easy to follow tutorial!
Holy moly I just installed Home Assistant the other day, had to troubleshoot, and found your channel. *musical interlude: A WHOLE NEW WORLD*... seriously, thank you so much for making all these videos. I know it's gotta be a ton of work, but it's a godsend for those of us tinkering around in this crazy pandemic era! Thanks so much!
Thanks a lot. This video was what I needed, as I was hesitating between buying a prebuilt NAS (like Synology) and an older desktop PC. Now that I've seen the power of Blue Iris, I've decided that I need a desktop PC for my needs. Thank you so much.
I am in the same situation... For local AI detection capabilities on a Synology you'd have to spend quite a buck to reach the necessary specs. Synology's own local AI software is a thing, but it can only be found on 3 of their best models.
Ok, I've been running this now for about 3 days and I can't overstate how much of a positive difference this has made on the reliability of my triggers.
Rob, this is, without question, the most impactful video so far. I had been thinking about deepstack for a while but just hadn't made time to dig into it. Your links and info for AI Tool has saved me countless hours trying to come up with something similar. I still have that work to do for the inside monitoring but this has made a big difference for people detection outside.
Glad to hear that Andrew!
How did you get the triggers to work? I’m sending: localhost:81/admin?camera=GrgFrtRt&trigger&user=anadminuser&pw=correctpassword
The camera doesn’t trigger and in the log it shows AuthFailed: {"reason":"account disabled "}
I did find that if I enable the Anonymous account, with or without a password, I then get an "Authorization required" message in my browser; text only, not a pop-up dialog. I'm copying and pasting all info so there aren't any typos. I see the AI bit is working and detecting, but I've thus far been unable to get the trigger working.
Fantastic video, Rob, and extremely thorough. Thank you for all of the hard work you obviously put into all of your videos
Limit decoding (taken from the Blue Iris help document for version 5)

The option to Limit decoding unless required is another way to manage CPU resources. When enabled, only key frames are normally decoded and displayed. A key frame is a “complete” frame; all other frames rely on key frames in order to be rendered, as they contain only the “changes” from frame to frame. When you select the camera in the main window UI, or if someone is viewing the camera (or one of its groups) via a client app, all frames will once again be decoded for display.

This CPU-saving scheme works great as long as your camera is actually sending an adequate number of key frames; about 1 key frame/second from the camera is recommended. This is a setting in the camera’s browser-based settings, usually under a “video encoding” section, and may be labeled “key frame rate” or “i-frame interval”, for example. You can view the actual rate on either the General page in camera settings or the Cameras page in Status. It is shown after the overall frame rate: for example, 15.0/1.0 indicates 15 fps with 1 key frame/second. A value of 0.5 or less is considered insufficient to use this feature.
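The "15.0/1.0" display described in that help text is easy to sanity-check programmatically. A small sketch; the only assumption is the "overall fps/key fps" format of the status string:

```python
def parse_rate(status: str):
    """Split a Blue Iris 'overall fps / key fps' status like '15.0/1.0'."""
    fps, key_fps = (float(part) for part in status.split("/"))
    return fps, key_fps

def limit_decoding_ok(status: str) -> bool:
    """Per the help text, 0.5 key frames/second or less is insufficient."""
    return parse_rate(status)[1] > 0.5

# '15.0/1.0': 15 fps with 1 key frame/second -> safe to enable the feature.
# '15.0/0.3': too few key frames -> raise the i-frame rate on the camera first.
```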
I just moved into a new house and am looking to set cameras up. This is a perfectly timed video. Thanks for all the great info
Great work, sir. Blue Iris just got another customer from your in-depth and ridiculously helpful video. Thank you very much for going through each step of the setup. Followed to a tee and it's up and running.
Excellent in-depth video, your pacing is perfect and helps to keep an otherwise boring topic very interesting. Thank you for doing this!
I've been sitting on the sidelines wondering if I should consider switching from Surveillance Station to Blue Iris for months now and this video pushed me over the edge. I bought a used i7 from eBay, installed a fresh copy of Win 10 and Blue Iris and follow your directions. I've never been happier with my camera setup! Thank you. I did want to point out one thing that tripped me up. When setting up the motion events tab, "Object Detection" under the Advanced section was ticked by default. If you leave this enabled, presumably BI is going to filter most motion events for what it thinks are real objects. I was confused why so many of my events were getting missed until disabling this checkbox. Hopefully this helps someone.
Thanks for the tip!
Wow your videos are always amazing Rob, but this one is something special! Thank you for all the work in condensing all those steps into a single, concise video!
Excellent video. I already have most of the same tools running but picked up some good ideas from how you've set things up. Two comments: "limit decoding unless required" is definitely not something to check because it sounds good. It can cause you to miss motion events. See the Blue Iris help for info. Also the AI Tool cool down doesn't work the way you think. It doesn't stop analyzing images during the cool down. It continues to analyze every image and doesn't fire the trigger again for subsequent events that happen within the cool down time. Again mis-use could cause events to be missed.
Please do a follow up video showing how you turn this into texts saying "Person in back yard" with image. Or "Dog in front driveway" with image. Please!!!
It's possible, but not as is with what Rob shows. There's a fork of AI Tool that saves the last triggered snapshot as the "camera" name. Just set up a "camera" for each type of interest. Then with folder_watcher in HA you can trigger different things based on which image updated. It would be nice if the AI Tool could send an MQTT packet directly to HA.
@@tgschaef Thanks! Will try this!
Check it out here: kleypot.com/motion-event-notifications-with-locally-processed-ai/
The trick is, his mod keeps a jpg for each “camera” of the last triggered image. Make as many “cameras” of each camera for individual detections you want. The other thing that hung me up for a while was missing the detail about folder watcher and needing to specify that monitoring my jpg “ external dir” dir was ok. Maybe there’s a way you can send HA directly this info as another url?
@@tgschaef Hi, I'm a bit of a newbie at this, but I am currently using AI Tool 1.67 ver 7 and have it working well. However, I do not understand how to install this new fork. Is there a different download that I can run?
I’ve gone in a third direction on this. I liked this third one because it sends an MQTT message directly to Home Assistant with all of the detection details. Then I have logic in Home Assistant for whether it should trigger the HD recording. With the full details you can do things based on "dog detected in driveway", "person on front porch overnight", etc. I had a few hiccups getting it to work; let me know if you have problems. The biggest thing that took me a while to realize is that you run it in a Windows Docker container, including the DeepStack container, which is nice so you're not doing unsupported things in the Home Assistant VM.
github.com/danecreekphotography/node-deepstackai-trigger
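For anyone curious what the folder_watcher approach earlier in this thread is doing under the hood, the "one last-triggered jpg per 'camera'" trick can be approximated with a plain polling loop from Python's standard library. This is only a sketch; the folder path and filenames are hypothetical:

```python
import os
import time  # used by the polling loop sketched at the bottom

def poll_snapshots(folder, seen):
    """Return 'camera' names whose last-triggered jpg changed since the last poll.

    `seen` maps camera name -> last seen mtime and is updated in place.
    """
    updated = []
    for entry in os.scandir(folder):
        if not entry.name.endswith(".jpg"):
            continue
        camera = entry.name[:-4]          # 'driveway_person.jpg' -> 'driveway_person'
        mtime = entry.stat().st_mtime
        if mtime > seen.get(camera, 0.0):
            seen[camera] = mtime
            updated.append(camera)
    return updated

# Usage sketch (hypothetical path): poll every couple of seconds and dispatch
# per 'camera' of interest, much like folder_watcher does inside Home Assistant.
# seen = {}
# while True:
#     for camera in poll_snapshots("/share/ai_snapshots", seen):
#         print(f"new detection image for {camera}")
#     time.sleep(2)
```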
This is EPIC. I recently bought a Nest Cam and really dislike the lack of customizability (alerts because it's raining -_-). Truth be told I didn't know anything about Home Assistant... I'm slowly going through your videos and realizing I can return those nest cams and get something way more robust!! ... time to build that dedicated pc... Keep up the good work!
i cant believe you've convinced me to add a windows box for this
I've been completely Windows free for 5 years after 20 years of wanting to be here - I'm not going to end that streak now. Technically I do have a windows machine - an old junker I use to program ham radio gear - but that's not connected to any network, its a single purpose machine.
THIS is one of your BEST EVER How-To videos! AWESOME! Very Well Done!
WOW, now THATS indepth. Very cool solution. I think I'm just going to start with a VPN to my network first
Awesome set up! Thanks for taking the time to make this video! you gave me several idea's for my ever evolving setup.
"ever evolving" is an understatement :)
@The Hook Up I just relocated my HASSIO and had to redo Portainer, and deepquestai no longer works with the AI Tool out of the box. There is a custom Docker image called deepquestai/deepstack:noavx that works using the configuration you gave here. deepquestai/deepstack:latest is now invalid, and all the ones listed do not work. Hope someone finds this when it doesn't work and it saves them some time. I spent hours troubleshooting this.
Please consider an update to this video using the latest AI Tool, which changes how triggers work and looks pretty cool.
i got the same error using deepquestai/deepstack:noavx
got it working by pulling the image via SSH :)
Nice, thanks
Can you please share Node-Red workflow or do a video on it. Thanks for getting me hooked on!
Even though this video is a couple of years old now, it still helped me a lot! Thanks for making it.
I would love to see a video of how you integrated this in with NodeRed.
Inspiring video and I too would be interested in the associated Node Red Flow if at all possible please. Thanks for making these videos.
@@grahametobin5297 So it looks like you could fairly easily integrate it with Node-RED. The URL that AI Tool calls can now have all sorts of extra information included; you could have AI Tool call the Node-RED endpoint and then have Node-RED call Blue Iris.
I stumbled across one of your other videos on cameras and found this. You have a new subscriber here. This video was awesome and answered so many of my questions and your setup is absolutely what I want to do. You'll also have a new patreon subscriber in me. The amount of knowledge you have in this area is impressive!
Thanks Brian!
Great video for viewers with quite advanced computing skills. For those without, like myself, I find that SightHound can do a lot (but not all) of this from quite a dated laptop. It was relatively intuitive, reasonably priced, and I have found the support team very responsive. No affiliation by the way, as I am in Australia. Just passing on my experience for those with less computer skills than those required to perform the process outlined in this video.
Thanks for this video. I just got this working using version 1.67 of Gentle Pumpkin's program. Many things have changed so you need to read his instructions to get it to work now. It works on the main camera stream so is now very resource hungry.
After following your guide, I have come to wonder: it actually looks like the HD camera is not doing anything; it is basically not being triggered. Only the SD camera is triggered, so no alerts will occur from the HD cameras. The AI is analyzing fine, but no signal is being sent to the HD cameras, so there is only low-res video and no HD recording going on. And since the HD cameras are not triggered, no message will be sent to MQTT. So it looks like some part is missing in the video.
By the way, "Limit decoding unless required" makes the realtime stream stop, so you will only get an image every 5 seconds (whatever you set the jpeg image trigger interval to be). So to get live images, it needs to be unchecked.
Edit*** I just realized that the trigger URL needs to be the URL for the HD camera, and not the SD camera. Might be an idea to add that in the description. My HD cameras are now recording when the AI triggers them.
I did notice that his key frame ratio is pretty bad at 0.3; it should be at least 1 for BI to record stuff correctly. This can be changed in the camera's GUI.
I too discovered that you have to use the HD camera trigger URL in order for the AI server to notify BI to record on the HD camera stream. Had to re-watch this a few times to make sure I didn't miss anything in the video about it.
I can't get the MQTT message from BI to update the binary sensors. The text file shown at about 18:29 has seemingly shortened topic names. I don't understand where "BI/AI/" come from. He has "BlueIris" as client ID in BI MQTT config screen and no "AI" anywhere on that screen. I must be missing something.
@@mannyAKAmanny I have the exact same problem... struggling to get the right settings there.... let me know if you find a solution :-)
@kent You are the man. After pulling my hair out for two days trying to figure this out, I decided to read the comments on the video, since Google was not helping. This post here was the missing sauce. Thank you. @Rob should probably update the notes of this video, because he flips through this portion real fast.
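On the "BI/AI/" confusion: as far as I can tell, the topic is whatever you type into each camera's MQTT alert action in Blue Iris; it is not derived from the client ID on the MQTT configuration screen, so the alert settings and your Home Assistant sensors have to agree exactly. One way to debug a mismatch is to check your subscription pattern against the published topic with standard MQTT wildcard rules; a minimal matcher (the topic names are just examples):

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """MQTT-style wildcard match: '+' matches one level, '#' matches the rest."""
    p_parts = pattern.split("/")
    t_parts = topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":                      # multi-level wildcard swallows the tail
            return True
        if i >= len(t_parts) or (p != "+" and p != t_parts[i]):
            return False
    return len(p_parts) == len(t_parts)

# A sensor subscribed to 'BI/AI/+' sees 'BI/AI/driveway', but a subscription
# built from the client ID ('BlueIris/#') never will.
```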
I didn't see where/how you set up the 4K resolution recordings after disabling the motion triggers. You said at 11:28 we would let the AI trigger that, but then you never showed how to get the 4K recording started from the AI Tool results. Appreciate the video; it's been a huge help!
Nevermind, I figured it out, in the Trigger URL you use the HD camera's URL instead of the SD camera's url - Not sure you explicitly said use the HD camera's URL, had to read your pasted link to figure it out. Will leave this comment here in case others are confused too. Thanks a bunch!
@@jumbozo7600 I too was thrown by this. One of the many 2 hour head scratchers during this install. Nothing teaches like costly mistakes.
@@jumbozo7600 THANK YOU!!! @15:50 I stopped it and noticed it was for the HD Camera!
Seriously great work. Thank you!
Absolute legend. Taking inspiration, rebuilt a fresh Win10 VM and stood this up in about 2 hours with 12 cams. Very impressed, even with Windows bloat, it runs with less CPU than my equivalent Linux VM with the same cams. Best of all Motion detection is finally accurate!
Thanks for the info on how to set this up; I have gone from nothing to a full custom AI surveillance system in 2 days. I can't wait for the bear that has been causing trouble in my chicken coop to return so I can get some good video of my dog putting his ass in check.
Soooo glad I don't have bears, I do have some pretty menacing raccoons
I have thieves
Beautiful. I was planning to go with Synology because of ease of use. You just convinced me to use Blue Iris.
I need this in the Garage.
"Where the hell did I put that 10mm"
I've been working on installing cameras and I'm about to configure Blue Iris to get this going. Thank you so much for your amazing guides.
Would be great if they ported it to Linux; this on a Linux server with a web UI would be great.
FWIW someone made a docker image that actually seems to work.
@@evanlightcap4440 Link to docker image?
Hands down best video for installing the best video recording cameras on your home!! And not to be out done HANDS DOWN YOU HAVE A BEAUTIFUL HOME!!! 👍💯
Thanks Billy!
It seems that, at least in my case, Blue Iris is extremely temperamental when it comes to creating the JPEGs for whatever reason... I'm having to restart the program every time I change a camera setting because it stops creating JPEGs...
EDIT: It's temperamental with creating motion events at night in general; I'm definitely having to tweak motion settings, and I completely turned off Object Detection in BI, which helped.
Not sure if things have changed, but I've been using BI for 2+ years now and haven't had issues with it creating JPEGs. As of the recent update just this month, BI added a TON of new features, like (FINALLY) a timeline in the web browser view mode, so maybe try it again.
?
Rob,
First things first - great video as always. The Blue Iris setup is very informative for new users, and nice to see new integrations with camera vision. I'm personally looking into this more in my own setup! That said, a couple pointers for Blue Iris:
1) For each of your cameras, un-check "Enable overlays" in the Video tab, as you are forcing Blue Iris to re-encode the video to add the overlays (camera name, timestamp, etc). Most modern cameras have this ability out of the box, so enabling it there instead will offload that work to each of your cameras and greatly reduce Blue Iris's workload.
2) "Limit decoding unless required" will only decode the video's key frames, unless all frames are required (such as in the case of overlays or transcoding).
3) Side note: Don't forget audio! It's hardly any extra space, and a big aid to the visual.
Keep up the great work, stay safe!
I don’t think Enable overlays does anything if you have direct-to-disk enabled. I only enable overlays on export, which re-encodes anyway.
Edit: I’m testing this out right now.
Okay, tests complete. It appears that adding overlays to all 18 cameras (9 cameras x 2 streams each) increases the total CPU usage by about 9%, which is strange because that's about the same amount of CPU increase I experience from a single re-encode. I wonder how it is so much more efficient with overlays.
Upon further investigation: I enabled overlays on the cameras themselves and the 9% CPU savings went away. I think it is just caused by the increased number of changing areas that needs to be rendered in UI3, doesn't matter if BI is inserting them, or if they are hardcoded into the RTSP.
@@TheHookUp Very interesting, and good thought train! So I just went and tested a theory on my own instance. My CPU% numbers are capped to the "most common 5% range", and they are all from one web UI user connected and also the console up (Remote Desktop connection):
- 35-40% with overlays disabled on both Blue Iris and the cameras themselves
- 36-41% with overlays disabled on Blue Iris, but enabled on the cameras
- 38-43% with overlays enabled on both Blue Iris and my 6 Reolink cameras
My theory is that your "Blue Iris Evaluation Version" is in effect an overlay, so I deactivated my installation and reverted back to the evaluation version, which yielded:
- 42-47% with overlays disabled on both Blue Iris and my 6 Reolink cameras, but in evaluation version
I went back into Blue Iris and re-registered the machine, left the overlays enabled in both places, and quite literally the second after I clicked "Finish" and the "Blue Iris Evaluation Version" overlay went away, my CPU% went right back down to 37-42%. That said, I am fairly confident that the difference is the overlays being drawn for users actively browsing the interface. While this is not a typical situation, it's something that does happen from time to time as I've reverse proxied Blue Iris through a hardened Linux box for remote access (as opposed to forwarding ports to the Blue Iris web server, which I can't vet, or opening everything up to China servers).
All of that said, even a 1-2% difference by turning off the overlays in Blue Iris and letting my cameras draw them is better than nothing, I suppose. It's likely just the difference in text size; however, I personally enjoy the Reolink font and don't mind the watermark as free marketing since they're such awesome cameras, and in the end I think I'll leave the overlays to the cameras. :)
You sir are amazing; I think I am just failing though. I tried to install Home Assistant like your other tutorial with a VirtualBox VM but couldn't get it to work at all (don't worry, I followed some other guide I found and got that working).
And now I can't get DeepStack to work. First I had a problem with the localstorage thing, but I saw you wrote about it in the comments down below. So I tried to use the Windows program, and Docker on my Synology NAS, but ran into the same error on both: in the AI Tool it's stuck on Processing, and in the logs it says it can't connect to DeepStack. I'll just keep trying; you keep uploading great guides!
This may help others following instructions:
1) See Rob's comment about creating a portainer volume, as it doesn't show in the list when creating your datastore (here's his comment in case you can't find it: in portainer select volumes -> +Add Volume -> then put localstorage in as the name and hit create volume)
2) When I hit deploy on the DeepStack container, it would fail with an error that didn't really help ("no such image"). After a lot of trial and error I realised this was due to low disk space (Hassio). My Hassio was given 6GB (capacity predetermined by the Hassio VHDX I downloaded when I installed Hassio). For me, I was running in a virtual environment (Hyper-V), so I expanded the disk to 12GB (really I could have given it 128...), then booted Hassio into GParted (download the ISO from gparted.org/download.php). I ran GParted, hit "fix" when it started up, then resized sda8 to fill the newly expanded drive. After a reboot, the problem was solved.
Thanks Matt. I had the same problem. I ultimately fixed it by doing a bunch of things: 1) getting rid of all the snapshots on my VirtualBox VM image, 2) increasing the size of the Home Assistant VM drive to something much larger, 3) using GParted to increase the size of the HA partition on the VM drive (only possible after the previous step). This still didn't solve the problem, so I then 4) downloaded the DeepStack Docker image using the Home Assistant command line (see other comments re this below) rather than Portainer, then set up the DeepStack container in Portainer once it had been downloaded (but unticking the "Always try to pull container" option in the Portainer setup). This got the container going, but then I had to increase the RAM in my VM from 2 to 4GB to get the DeepStack server to work, otherwise I got the "Cannot access server" error from the AI Tool when I got to that stage.
I'm not sure which of all the above steps are required. I would try using the command line to download the container before resizing the drive.
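For anyone who'd rather skip Portainer for these steps, the volume creation and manual image pull described in this thread map onto plain Docker CLI commands roughly like this. The image name `deepquestai/deepstack` is the official one; the host port (83) and the `VISION-DETECTION` flag follow the setup shown in the video and may differ on your system:

```shell
# Create the volume AI Tool / DeepStack use as a datastore
docker volume create localstorage

# Pull the image first so a slow download can't make the deploy appear to hang
docker pull deepquestai/deepstack

# Run DeepStack with object detection enabled (port/flag are assumptions
# based on the video's configuration - adjust to match your own setup)
docker run -d --name deepstack \
  -e VISION-DETECTION=True \
  -v localstorage:/datastore \
  -p 83:5000 \
  deepquestai/deepstack
```

Pulling the image separately before deploying also helps distinguish "out of disk space" failures (the pull errors out) from genuine configuration problems.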
Did you mention the hardware, specifically the PoE adapter, and how you have your hardware and network set up in this or another video? You mentioned the PC specs but maybe I missed that part. Thanks!
I can’t find any info on this
WOW. I had to watch this video about 5 times, but I have now set it up and it is working perfectly. I have also setup flows in Node-RED to trigger lights based on motion detected and send notifications to my iphone via HA - very few false triggers. I would love to see you do a video on the Node-RED side of this. I'm signing up for your Patreon now - this video is absolute gold. Also - I love the pace of your videos - they make my head spin but your instructions are precise and concise. Thanks for all your hard work.
Thanks Paul!
Would you be willing to discuss your flows....I've got it all set up but the notifications piece.
Thanks for the howto! I've run into an issue where this is reported in the AI Tool Logs "Can't reach DeepQuestAI Server at 192.168.0.2:89" - I can confirm that I can hit the URL on the same machine using a browser and I get the DeepStack screen which says "Your DeepStack Installation is Activated. "
I've got no idea why this isn't working.
Same here. I have tried to google the problem and no one has responded back with a resolution
Running into the same issue
Same for me as well.
same issue for me too
Same here. Can't find a solution. I was thinking it might be a firewall issue, but haven't come to a conclusion yet.
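For those stuck on this thread's "Can't reach DeepQuestAI Server" error: seeing the activation page in a browser only proves the web UI is up, while AI Tool posts images to the detection API. A rough way to test that path directly (the IP and port come from the comment above, and `snapshot.jpg` is a placeholder for any local image):

```shell
# Test the object-detection endpoint AI Tool actually uses
# (/v1/vision/detection is DeepStack's standard detection route)
curl -s -X POST -F image=@snapshot.jpg http://192.168.0.2:89/v1/vision/detection
```

A healthy server replies with JSON along the lines of `{"success": true, "predictions": [...]}`; a timeout or connection refused here (but not in the browser) points at a firewall rule or at DeepStack's detection mode not being enabled.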
One note for others... Rob sets his "Combine, or cut videos into" interval to 1 hour... It wasn't till today that I noticed what that actually does: it takes all clips recorded in that particular hour and combines them into one video. If you want more of a one-video-per-event setup, lower this number. Another setting you might want to play with is "Cut video when triggered with break time".
"Capture an alert list image" is no longer a selection. What do I choose now?
Hey Matthew, did you manage to figure this out? I'm in the same situation. I tried looking in the Blue Iris 5 manual, but no luck...
Can't find it either - nothing online I could find to help either
@@davidhughes5152 Still nowhere to be found, so no HD clips are being recorded since there is no trigger for them. Really annoying...
Found it... The option has been moved to the trigger tab.
Great video, would love to see a follow-up getting into Node-RED and specifically sending iOS notifications when a person is detected. I implemented what you have here, but I'm struggling to get my cameras to trigger motion at night and to get the iOS notification to work with the picture.
Great video! Will have to try this out. There is also a good Home assistant custom component for blueiris that I’ve been using which also runs with MQTT. Might be worth checking out and incorporating into this!
can you add a link to this component?
Wow, this was super informative. I'm somewhat invested in a different NVR (Syno), but the pipeline to DeepStack is actually very smart and I can probably adapt it. This was immensely helpful.
Whelp I guess I know what I'm doing with my free time tomorrow 😂
This is an awesome addition to BI. Thank you for taking the time to put this together!
now to spend the next 3 weeks replicating this in linux :D
+1 Would love to be able to do this, but the need for Windows is a no go for me. Been messing with Motion Eye, but haven't been able to get it to reliably work yet.
I have a vague plan to try and make this work (let's see if I remember it when I wake up), but I can't actually *get* Annke C800s in this country
I have something working; it's by no means polished. Just using Motion alone, I let Motion do its motion detection on the low-res stream and save the images to disk, then I take one frame from every second and feed it to DeepStack. If all the sampled frames from that motion event come back with nothing detected, the event is deleted. Which now has me wondering if I can leverage my Intel Neural Compute Stick for this fun and games too :D
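The pipeline described above can be sketched roughly like this. The event folder path, DeepStack address, and the assumption that Motion has already saved one frame per second are all hypothetical placeholders, not details from the comment:

```shell
#!/bin/sh
# Sketch: keep a motion event only if DeepStack sees a person in any sampled frame.
EVENT_DIR="/var/lib/motioneye/event_0001"            # hypothetical event folder
DEEPSTACK="http://127.0.0.1:83/v1/vision/detection"  # DeepStack detection endpoint

found=0
for frame in "$EVENT_DIR"/*.jpg; do
  # DeepStack returns JSON predictions; a crude grep for a person label suffices here
  if curl -s -X POST -F "image=@$frame" "$DEEPSTACK" | grep -q '"label":"person"'; then
    found=1
    break
  fi
done

# No sampled frame contained a person: discard the whole event
[ "$found" -eq 0 ] && rm -rf "$EVENT_DIR"
```

A real implementation would want JSON parsing (e.g. jq) and a confidence threshold rather than a bare grep, but the structure is the same.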
As I'm currently using the Swann NVR (and in fact am pulling streams from it), I would probably end up deleting all the footage and just noting the times a "person" is visible at the front door so I can look it up in Swann (cos there's no API :(), but once we move it might be worth looking at setting up a nice UI. I was just looking to use UniFi stuff, but this is interesting. Shame about the Windows requirement.
Finally got around to replicating this Blue Iris setup... can definitely do this with Motion and a couple of tools... just need a nice UI wrapped around that, which for me is the hard part
Excellent rundown! As always Rob. Thank you for that.
Brilliant video, brilliant channel. Wish I'd found it years ago. Just moved to my new house and it's time to think about a surveillance setup again. I'm pumped.
I haven't done much with MQTT and could not get this working following your example. Since I have a camera named "front yard", I just put in exactly what you had. I finally figured out that the topic and payload should be exactly the same in BI and HA, not completely different like in your example.
What a great setup and breakdown of its components
I was about to go for Synology's surveillance system but I was holding back due to the licensing costs, then this review came up... thank you so much
SSS is crazy expensive... I can't believe they are still using a per camera license.
Excellent video man, you are an incredibly informative speaker, and your videos have helped me a ton! Keep it up man, you deserve to hit it big!
Amazing video, just when i was looking for a new security camera setup! Perfect timing and great video
I just ran across this while looking at Home Assistant integrations. I'm currently testing out a SimCam Alloy 1S camera with simple AI like this one to detect vehicles vs people vs animals, etc.
The camera itself only has 1080p, but one plus for it is basic 2D facial recognition that you train with a smartphone. Essentially, you can separate alerts of an unknown person vs you.
Thank you so much for putting together all the best in all your videos. Your videos really help me when I don't know where to start.
I have been researching this for months. I had seen AI Tool before but was not a Blue Iris user. Once the video came out it seemed so accessible that I bought BI and set everything up. You should ask for an affiliate link, dude! The only thing I am struggling with is that the motion detection is not consistent. I can walk around and be 10 minutes in front of the camera and it won't create a motion event, i.e. no JPEG in the AI folder. I am sure once I figure it out it will be amazing!!! Thank you!
Wow! You killed it! great setup info. I guess I have only been scratching the surface with a couple dozen cams and BlueIris.
Thanks a lot Rob. I followed your instructions and got the software side up and running. The most difficult part for me was typing Annke... Ankee? Annkee? Ankke? correctly.
You should see how many times I misspelled it in my script!
Blue Iris can send an MQTT message "on reset" i.e. when the motion detection trigger is over. That way you don't need to set an "off_delay" in the binary_sensor definition in HA. Instead, set a "payload_off" and Blue Iris will set it back to "off" if you tell it to.
Yeah, I think it ends up working about the same. The difference is it makes my life a little easier in node-red if it only sends the positive messages so I handle it that way.
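For reference, the two approaches in this thread look roughly like this in Home Assistant's (legacy-style) MQTT binary sensor configuration. The topic, payloads, and delay are illustrative examples, not the exact values from the video:

```yaml
binary_sensor:
  # Option 1 (as in the video): BI only publishes "on" when triggered,
  # and HA resets the sensor itself after a fixed delay
  - platform: mqtt
    name: front_yard_person
    state_topic: "frontyard/motion"  # must match BI's configured topic exactly
    payload_on: "on"
    off_delay: 30

  # Option 2 (this comment's suggestion): BI also publishes "off" from its
  # "on reset" MQTT action, so no off_delay is needed
  - platform: mqtt
    name: front_yard_person_reset
    state_topic: "frontyard/motion"
    payload_on: "on"
    payload_off: "off"
```

Either works; option 2 tracks the real trigger duration, while option 1 keeps downstream automations simpler since they only ever see "on" messages.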
I have used BlueIris in the past and it is a fantastic program
Thanks so much for this! I've had Blue Iris for a few years and had given up on tweaking motion trigger settings to reduce the number of false alerts. Deepstack seems like a great solution!
I went through your guide and got to the end. The AITool url was triggering images correctly but I realized it would only trigger when I refresh the url in my browser. I started digging into GentlePumpkin's AI write up and noticed that I was using AITool v1.67 which apparently has a different setup than v1.65 & below.
I tried following GentlePumpkin's write-up (it seems like v1.67 doesn't set up duplicate SD & HD cameras?), but now the AI Tool URL makes the sound as if it triggers without actually taking images. I'm a little stumped, as your guide was extremely comprehensive, but I think GentlePumpkin wasn't quite as in-depth with his instructions. I'm curious whether you've upgraded your AI Tool to v1.67 and whether you had any issues?
I saw another post on here talking about increasing the VM's RAM. This corrected the issue for me; however, I'm still having a problem getting the HD recording to either be triggered or to actually record.
@@arkis194 I figured out the problem. I had multiple profiles set up previously that were conflicting, because if you have profiles set up, you have to configure each camera for every profile that you want to use DeepStack & AI Tool with.
@@arkis194 I'm stuck at this part. I've got the DeepStack screen, and when I put the query string into the URL I get the response that Rob shows in the video. I believe I installed the latest AI Tool. Can you be a little more specific on the details of increasing the VM's RAM? Thank you.
Another great video Rob! Looks like a project for the weekend while in lockdown.
This is super informative. You should do a video to install this ai in an already existing blue iris install with all the cameras already setup
I can tell you had a lot of fun setting this up and making the video! You have to know it turned out great! With most of these videos I'm left with questions, but with yours the only thing I'm left with is wanting to do it myself!
Perhaps an ALPR video to detect if any car that's not yours enters your driveway is next? I've fiddled with OpenALPR before and really liked it!
Unfortunately, Florida laws have ruined any chance I have of using ALPR because we don't require front license plates, so unless someone decides to back into my driveway I'll never see their plate.
@@TheHookUp That's really unfortunate, there goes the idea of automatically opening the garage door based on your license plate
Thanks for the step by step. Got it running in under an hour.
Awesome!
Tuya are shafting Home Assistant users too! What you might not know is that Tuya recently upgraded the firmware on their devices, which is preventing over-the-air flashing with tools such as Tuya-Convert. Yes, you can physically open the plugs etc. and flash the ESP chip over serial, but with bulbs etc. that's not really practical. The only player I can recommend at the moment is Sonoff: they have a DIY mode on newer devices specifically for allowing you to install your own firmware such as Tasmota.
Amazing video! I am about to build my first home system and your videos have helped so much, decisions, decisions lol. I'm going for decent image quality and low cost, and want to use an existing computer, because I have so many that I have built and acquired that are quite decent but need a purpose, and you just gave them all some great purposes and gave me projects for all this time at home. Again, thank you for your work.
This was the first tutorial video I've ever seen that made me want to join a Patreon... considering the climate I doubt I can stay long, but I really wanted to support this level of content.
Thanks!
Have been watching "The Hook Up" videos and greatly appreciate them. One request that I have is a video on best cameras for Blue Iris. Have seen lots of great videos on various systems but that is one topic I can't recall seeing.
this video was SUPER INFORMATIONAL! I liked & Subscribed to you because you have saved me tons of time and money with this video. Awesome stuff! I wish I could like and subscribe 50x times over. Thanks again!
This is amazing! Awesome! Can't wait to try it on my system
Great video, PACKED with info!! I've been doing IT for a long time, and although I kept up with you on the integration run-through... which was awesome... I can only imagine how many times people will have to rewind or pause!!!
You actually showed a few things I wasn't aware BI could do... and because of some of the CPU overload issues I was having, I was moving to another platform... but I am going to double-check (like I was while watching this) whether they will work there. Unfortunately BI ran much heavier than another older Windows-based NVR and the Linux-based NVR I am considering.
Hey Christopher, I'm curious which Linux based NVR you were considering? Did you end up going with it? I loved this video for the budget constraints and motion detection confidence, but Windows is a turn off. I'm a Linux admin, so your comment got me interested!
This video helped me a lot. Got it all setup and working now. Thanks from South Africa👍
Absolutely the best UA-cam videos! Thank you for all that you put out there for us to learn from.
Is there any chance you could please make a video showing us how to add audio to the POE security system?
I'd like to build a system using the Annke C-800 cameras but I'd also like to add audio to them. I'm pretty lost on how to add it if there's only one audio input and if that's even needed running ethernet. Maybe it's possible to split off of the ethernet at the camera and tie in a microphone at that point? If that was possible could you do that with multiple cameras or would the system only be able to record one audio transmission at a time?
Thanks again and I look forward to many more videos from you! God bless!
Bang up video. Thank you for your work on this. Something I'll be using in my BI system. Cheers
Thanks!
Nice tutorial, DeepStack looks like a wonderful piece of software.
Brilliant. Thanks Rob for sharing all this knowledge and packaging it up into an accessible format (even if I'm going to need to rewatch this a few times). I've been hoping you would do this video for a while - top job!
@The Hook Up Is it possible you could share your notification automation for receiving a GIF or pre-recorded video in the iOS push notification? I'm able to get my snapshot notification, but not the pre-recorded video like the example shown at the 3:00 mark in this video.
I recommend setting your pre-buffer to 15 seconds instead of 5 on the HD clips. It seems to work better for capturing the whole video.
I think it depends largely on image processing speed (so CPU or GPU speed). YMMV.
Thank you. Will be adding to my HA installation.
With 1100 "thumbs up" and 6 "thumbs down" it's pretty clear you're on the right track. Keep up the great videos!
If you use the Blue Iris integration for Home Assistant, it sets up the motion sensors for you, plus the video streams. You won't have to manually configure each MQTT sensor.
I tried using the motion sensors from BI that are exposed to HA via the integration but they are not getting updated when the external trigger URL is called by the AI server and BI acknowledges the trigger and records a clip. I even tried the external trigger sensor in HA but those aren't getting updated at all.
Any tips? Nothing about BI sensors in the HA logs. Cameras are streaming fine to HA. It's just the said binary sensors. Thanks.
This video is a truly hidden gem!
The cameras listed in the description are the ones you use? Really good quality. I would love to do something like this. I'd be starting from scratch with no smart home items and really have no knowledge about this stuff. I've been watching a lot of your videos and the ones you recommend. Great stuff! Wish I knew more.