Hey everyone! We have a new updated version of this guide that uses a more advanced model and runs a bit smoother. You can check it out here: ua-cam.com/video/XKIm_R_rIeQ/v-deo.html Please note that we are keeping this old guide up for legacy reasons and that it requires the older Buster OS (the new one is running on the new Bookworm OS).
This was the fastest, cleanest, most comprehensive guide I have found on OpenCV for the Pi. The only thing that would make this better would be an install script, but even then I think it's good for some manual work to be left anyway: get people's hands dirty and force them to explore and learn more. So cool to have the power of machine learning and computer vision in our hands to explore and experiment with. What a time to be alive!
Very glad you have your system all up and running 🙂 and I absolutely agree. Something about a machine learned system that runs on a palm-sized computer that you have put together yourself really feels like magic ✨✨
Excellent. I came to this after seeing the facial recognition video as it would help with a project I have in mind. However, after seeing this and how easy it is to set up and use my project will be more ambitious. Thanks again and keep up the good work.
Hey Tim! I successfully managed to run this project in about an hour. I didn't compile OpenCV from source, though; I installed it through pip, but I still got it working and it's running pretty smoothly. I hope you can change the OpenCV compiling part, as it takes too long (it took me 3 days and was still unsuccessful) and is unnecessary. Thank you! I used the Raspberry Pi 3B+; if you use a Raspberry Pi 4, it should be much faster and smoother.
If you can provide some more information I'd happily update the guides 😊 (Perhaps jump onto our core electronics forum and do a quick write up on your process)
This is a cool, clear, straightforward video. Well done. Question: does selecting specific objects make the identification faster? For example, I only want birds, cats, and people, to reduce load. Would it work?
That's a really great question that I am not 100% sure on. My first guess is that you might see a bit of improvement, but I don't think it would be incredibly significant. If you do some of these tests, let us know; we are very curious as well!
Hello, would it be possible to combine animal, object, and person or facial recognition at the same time? I'm working on that kind of project. Could you help me, sir? Please...
Aww, what an excellent idea! You will start wanting more powerful hardware very quickly going down this path. Come check out the Oak-D Lite (which is an excellent way to start stacking multiple AI systems whilst still using a Raspberry Pi) - ua-cam.com/video/7BkHcJu57Cg/v-deo.html
@@Core-Electronics How about just identifying whether it is an animal, a thing, a person, or some kind of moving object, and at the same time capturing a preview picture of it? How can you make this? And also, how do you make it so that if the Raspberry Pi detects a person it can email you, but if it is not a person it will not email you? Hoping you can help me with my research.
Your website, products and educational resources are amazing. I was wondering if you had any advice as to how to further train the machine to identify less common objects? I was hoping to use it for a drone video feed and train it to identify people, for basic search and rescue functions. I am a volunteer in my local community, hence my specific question :-)
How can I add more detection objects, like a light bulb on a wall that turns a certain colour? And can I add code to play a sound on a speaker when a detection happens?
I have the perfect application for this but the objects I need to identify are very similar and incredibly difficult for experienced humans to see accurately. Would this just mean supplying more training data to the system?
Hi Tim, Thank you so much on this video for demonstrating how to use OpenCV with the Raspberry Pi. I am willing to follow along your process to install OpenCV and test it out. I am just wondering if OpenCV will run on the new Raspberry Pi OS
At this current stage I would recommend using the older 'Buster' OS with this guide. If you want to use Bullseye with machine scripts come check this guide on the OAK-D Lite - core-electronics.com.au/guides/raspberry-pi/oak-d-lite-raspberry-pi/
Great video! Good for beginners. I want to get the name of the objects into a string and print it when an object is detected. Can you give me any tips or help? Thank you so much.
Cheers mate! In the main script, underneath the line | result, objectInfo = getObjects(img,0.45,0.2) | is another line stating | #print(objectInfo) |. If you delete that | # |, then save and run it again, you will be printing the name of the identified object to the shell. Hope that helps 😊
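As a rough sketch of what that prints (the box values below are made up, not real camera output; objectInfo comes from the guide's getObjects() function as [box, className] pairs):

```python
# Illustrative only: objectInfo from the guide's getObjects() is a list of
# [box, className] pairs. These example values are invented for demonstration.
objectInfo = [[(120, 80, 200, 150), "person"], [(300, 60, 90, 90), "dog"]]

# What the un-commented print(objectInfo) line shows:
print(objectInfo)

# You could also unpack it for friendlier logging:
for box, name in objectInfo:
    print(f"Detected a {name} at box {box}")
```

From there it is a short step to build the detected names into a string for your own use.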
Hi, I'm having some problems with my Raspberry Pi 4 Model B: while running update & upgrade it shows some errors. I'm going to do an object detection project at my engineering college, and I have to complete the project before the 28th of April, so please kindly reply to my comment and help me 🙏
Heyya mate, if you are running into errors come check the comment section on the full write up as there is a lot of successful troubleshooting to learn from and you can post some screenshots of your errors there. I'll help get your system running proper 😊
Hey man great video. Any chance you can cover how to use this same concept to detect anomalies instead? Rather than looking for specific objects expected to be there in the camera, the program learns the objects expected to be there and detects when an unusual object is found. Thanks.
Great video, I just came up with an idea for a project using this. I have no experience with Pis, but basically it would use a camera to detect a squirrel on a bird feeder and then play a loud noise through a speaker. Would this be a difficult thing to do?
Sounds like an absolutely excellent idea that could definitely be implemented using this kind of object detection. We just had a new project posted on our website worth checking out, all about using a Raspberry Pi to track kangaroos and send photos of them to a website server - core-electronics.com.au/projects/rooberry-pi
Very good video, and the explanations are well detailed. Please, I have a project that consists of detecting paper; your technique works with other objects but does not work with paper. I don't know if it's possible to teach the system to recognise paper. Thank you.
Edge Impulse is your friend here - www.edgeimpulse.com/ This will let you customise already-created AI systems like the COCO library. Stepping through this system you will be able to modify the COCO library to recognise paper 😊
Will I be able to add an entire category to the list of objects to be displayed in real-time? So instead of saying ['horse'], could I possibly mention a broader category of ['animal'] in the objects parameter? If not, please do let me know the correct way to approach this.
The fastest way would be to just add a long list like ['horse', 'dog', 'elephant'] etc. If you check the full write-up, I do something very similar there.
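A minimal sketch of that filtering idea (the function and the detection pairs here are hypothetical stand-ins for the guide's objects parameter; the guide treats an empty list as "keep everything"):

```python
def filter_detections(detections, objects):
    """Keep (className, confidence) pairs whose class is in `objects`.

    An empty `objects` list means no filtering, mirroring the behaviour
    of the objects parameter in the guide's script.
    """
    if not objects:
        return list(detections)
    wanted = set(objects)
    return [d for d in detections if d[0] in wanted]

detections = [("horse", 0.81), ("person", 0.66), ("dog", 0.73)]
animals = ["horse", "dog", "elephant", "cat", "bird"]
print(filter_detections(detections, animals))  # the person is filtered out
```

A broader category like 'animal' would just be a longer allowed list built from the COCO class names.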
Hi. I wanted to ask: do you think the Raspberry Pi Zero cam could be used as a substitute? I'm currently working on a project that involves Raspberry Pis and cameras and have done a lot of research on what hardware to acquire. I haven't seen much benefit in using the V2 camera instead of the Zero cam; I actually think the Raspberry Pi Zero cam has better specs for its price when compared to the V2.
Should work perfectly fine 😊. If the video data is coming into the Raspberry Pi through the ribbon cable I don't think you would even need to change anything in the script.
Hi, great video! I know this may be unrelated, but how about recognition of objects on screen, without a camera? Are there any projects you know of that use AI detection to control the cursor of the computer when it detects an object on screen? Cheers
Cheers mate and excellent ideas. You can definitely feed this system data that has been pre-recorded or streamed in from another location, would require some adjustments to the script. Also in regards to AI detection to control a cursor on a Raspberry Pi come have a look at this video - ua-cam.com/video/hLMfcGKXhPM/v-deo.html
can you create a new dataset annotations for a new object and use it with this coco model? Example, I want to detect a soccer ball. Can I just create annotations with something like datatorch and use those annotations in conjunction with the provided model and weights?
Hi, this tutorial helped a lot with my project. I successfully set up and ran the code on a Raspberry Pi 4 Model B from the terminal; I just couldn't figure out how I can see the video output while the code is running in the terminal (not in Geany or Thonny). Maybe you could help me out :>>
Not quite sure why it wouldn't do that for you when you run the script in the terminal. Come write up a post here at our forum and post some pictures of the situation - forum.core-electronics.com.au/. Reference me in the post and I'll best be able to help 😊
Yours is the closest project to my idea; in fact, it's practically that. But I would like to run it 24/7 during a 10-day period (my holiday). I would like it to press a button 10 minutes after each time it identifies a cat (mine) and nothing else. Here is a cat: wait 10 minutes, press the smart button. (I'm looking for a way to flush the toilet each time after my cat has done its needs.) Is this possible/feasible with this?
Definitely possible, and an excellent project to eliminate a chore 😊 or make for an even more independent kitty. The COCO library used in this guide has | Cat | as one of the animals it can identify, and Raspberry Pis are excellent at running 24/7, so I reckon you're in for a project winner. If you follow through the full write-up you'll be able to have a system that can identify cats (and only cats). That's the hard bit done. Solenoids are a way to trigger the button; check this guide for the process of getting one to run with a Raspberry Pi - core-electronics.com.au/guides/solenoid-control-with-raspberry-pi-relay/
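The waiting logic could be sketched like this (detect_cat and press_button are hypothetical placeholders for the guide's detection loop and the solenoid code in the linked guide):

```python
import time

DELAY_SECONDS = 10 * 60  # wait 10 minutes after the cat is seen


def flush_cycle(detect_cat, press_button, sleep=time.sleep):
    """One pass of the 'see a cat, wait, press the button' logic.

    detect_cat() and press_button() are placeholders: in a real build,
    detect_cat would check whether 'cat' appears in objectInfo from the
    guide's script, and press_button would drive the solenoid.
    Returns True if the button was pressed this pass.
    """
    if detect_cat():
        sleep(DELAY_SECONDS)  # let the cat finish and wander off
        press_button()
        return True
    return False
```

The sleep function is passed in as a parameter so the logic can be tested without actually waiting 10 minutes.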
Absolutely! Here is a straightforward way to send an email through a Python script. If you merge those two together you'll be smooth sailing - raspberrypi-guide.github.io/programming/send-email-notifications#:~:text=Sending%20an%20email%20from%20Python,-Okay%20now%20we&text=import%20yagmail%20%23%20start%20a%20connection,(%22Email%20sent!%22)
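A hedged sketch of the email half, following the yagmail approach in that link (yagmail needs a pip3 install first, and the sender address, app password, and recipient below are placeholders, not real accounts; the import is deferred so the snippet loads even without yagmail present):

```python
def send_detection_email(sender, app_password, recipient, object_name):
    """Send a simple alert email via Gmail using yagmail.

    All credentials passed in are placeholders supplied by the caller.
    Requires: pip3 install yagmail
    """
    import yagmail  # deferred so this sketch loads without yagmail installed
    yag = yagmail.SMTP(sender, app_password)
    yag.send(to=recipient,
             subject=f"Raspberry Pi spotted a {object_name}",
             contents=f"The object detector just identified: {object_name}")
```

You would call this from the detection loop only when the detected class is "person", which covers the person/not-person email behaviour asked about above.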
You are a legend, bro. I have a question: what if, when it detects a particular image, in my case garbage, it has to generate a GPS location, or send the location of that point to another vehicle, like you did with your servo motor?
Thanks for this. I want to use my Pi to do custom recognition of trees from their bark in a portable field unit. I have already tried TensorFlow Lite and an off-the-shelf database to do common object recognition. If I need to recognise, say, 50 trees, how many labelled images do I need of each tree for the training data?
Hi Charles, some Australian scientists concluded in a 2020 paper, "How many images do I need?" (Saleh Shahinfar et al.), that the minimum number of data points for a class should be in the 150 - 500 range. So if you had 50 species of trees to identify, you'd need roughly between 7,500 and 25,000 images/data points.
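As a quick back-of-the-envelope check of those numbers:

```python
# Rough dataset sizing from the 150-500 images-per-class guideline above
species = 50
low, high = 150, 500
print(f"{species * low:,} to {species * high:,} labelled images")
```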
Very clever doing it through SSH 😊. It shouldn't be an issue doing it that way so long as you go through all the setup process. If you write me a message on the Core Electronics forum under this topic I'll best be able to help you; that way you can send through screen grabs of your terminal command errors.
Hi, thank you for the explanation and code. I tried the code with the V3 HD camera, but it didn't work. Additionally, can you tell me how to create an autostart for this design? The 5 ways to autostart don't work ("Output:957): Gtk-WARNING **: 19:31:41.632: cannot open display:"). I'm sending a relay with it to keep the chickens away from the terrace with a water jet. Beautiful design! Greetings, Luc.
Hey Luc, To start you will need to update a new driver for the V3 Camera so it can work with the older 'Buster' Raspberry Pi OS. Check out how to do it here - forum.arducam.com/t/16mp-autofocus-raspbian-buster-no-camera-available/2464 - And if you want to autostart your system come check out how here (I would use CronTab) - www.tomshardware.com/how-to/run-script-at-boot-raspberry-pi Come pop to our forum if you need any more help 😊 forum.core-electronics.com.au/latest Kind regards, Tim
@@Core-Electronics I ordered mine the day it was announced and have been running it nonstop a few days now just with a demo detection running just to see how it goes.
It will require some dedicated effort, but you can customise this object detection dataset using Edge Impulse - www.edgeimpulse.com/ That way you can add whatever object or creature you'd like 😊 I hope I understood correctly.
Hi Core Electronics, I am looking for a lens for my Raspberry Pi HQ camera module... I want a good quality image and a closer view for defect detection on my FFF 3D printed parts... can you suggest some lenses? Thanks
There is a microscope lens that might be suitable for looking at 3D print defects. Give that a look. core-electronics.com.au/microscope-lens-for-the-raspberry-pi-high-quality-camera-0-12-1-8x.html
You got heaps of different options. For example, you could run the script automatically every time the Raspberry Pi boots (using Cron Jobs, check here for a guide - ua-cam.com/video/rErAOjACT6w/v-deo.html) or you could run the code remotely using your phone (check here - core-electronics.com.au/tutorials/raspcontrol-raspberry-pi.html)
All the processing is done on the edge, thus you only need the hardware (no calculations happen over Wifi or via the Cloud). So if you had a big enough battery you could definitely run this system via a battery without Internet 😊.
If you want to keep those deer in frame the whole time perhaps an automatic Machine Learned tracking system would help 😊 something like this core-electronics.com.au/guides/Face-Tracking-Raspberry-Pi/
Give Edge Impulse a look. This library doesn't have that as an object, but you can use Edge Impulse to train/modify the standard COCO library to include new objects and things.
3 hours is definitely too long for installation! Come jump into the full written-up article; at the bottom is a whole bunch of successful troubleshooting that you can utilise.
@@Core-Electronics Thanks for the reply. I did that successfully, thanks for your help. One more thing: I want to connect multiple cameras to the Raspberry Pi via GPIO. Is it possible? Can you help me with that?
Hello, great video, but how do I get the coordinates of the tracked objects? I am trying to build a robot that can identify and pick up objects; how would I find the coordinates?
I believe the model we used for this tutorial is capable of identifying birds; I'm unsure if it is trained specifically to differentiate between pigeons and other birds, though. A custom CV model better suited to bird identification would be more reliable.
For anyone it might help out later: I followed the commands in the guide verbatim and was having issues with the "cmake -D CMAKE_BUILD_TYPE=RELEASE \" command and the 4 following commands that are all grouped together. I was using right click on the highlighted text - copy from the web page, then Ctrl+Shift+V to paste into the terminal. That worked great for most of the commands, but it doesn't appear to work for that last paragraph; I had to manually type it in myself for it to work correctly.

Tim, if you do read this, first of all thanks. But I am a tad lost on exactly when to change CONF_SWAPSIZE back to 100. I assume after the installation is fully complete, but to some of us noobs it's a bit unclear, I guess. Also, I don't know exactly why, but it says that "sudo pip3 install numpy" already had its bits installed on Buster, so it "might" be redundant - unless it's more of a foolproof guide for other versions of the OS.

Finally able to finish up my project! Once this finishes installing... :P
Cheers for this write-up mate 🙂 I'll legit jump into the guide and make it a little clearer when to swap back the CONF_SWAPSIZE; I'll make it more similar to what I have in my written-up Face Recognition guide. My intention is to 'noob proof' it as best as I can, so everyone can have open-source machine learned systems in the palm of their hands that they've created themselves. Very glad you now have it all up and running too!
@@Core-Electronics I didn't think to check your guide on the other OpenCV videos; I'll go do that now. I finally have the next 2 days off so I can fully jump into it. I got past this issue, but now when I run the script it's having issues no one else in the forums had. I assume this was due to some mistakes I may have made when trying to get the multi-line command working. Idk, so lost with all of this lol. I'm good with Arduino, but Raspberry Pi, Linux, console commands, and scripts vs coding... so much to learn at once. You are such a huge help while lost and overwhelmed in this new little world.
Hey Tim! Here's a question: is the COCO-trained model you used generated by the YOLO algorithm? This is related to the writing of my graduation thesis. I will be very grateful if you can provide more suggestions.
Sorry for getting to this so late. A lot can be learned here - cocodataset.org/ . Also there are a ton of research papers as people are unraveling this technology that are worth exploring (or adding to the bottom of a graduation thesis). Good luck mate!
Hey mate cheers 🙂 the line to alter in code is | cap = cv2.VideoCapture(0) | changing that 0 to another index number that will represent your esp32 camera stream. Come make a forum post if you need any extra hand.
@@Core-Electronics Hi, I would like some extra hands on this one. How can I implement the ESP32-CAM as my video stream for real-time object detection using the code? Thanks!
Ah, I see now; it depends on the pest. If you're interested in large pests like possums, rats, skunks, baboons or the like, then this could be useful. Smaller critters like gross bugs, likely not, unless you had some doorway to the outside where you could watch the bugs come in and you had a camera up really close.
Hello, I am trying to create a design that will recognise different trash types. Is this image recognition able to perceive things like cardboard, paper, tissue, or silver foil as trash items?
Hey Max, I'm currently working on a very similar project. My workshop can get a bit messy, so I am setting it up to scream at me when it gets untidy. I will report back to you on how it goes, or if you've had some luck I'd be more than interested. Cheers!
To use a USB cam, install fswebcam, then change cv2.VideoCapture(0) to cv2.VideoCapture(0, cv2.CAP_V4L2) in the script.
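A slightly more defensive sketch of that change (assuming OpenCV is installed as per the guide; cv2.CAP_V4L2 pins OpenCV to the Video4Linux2 backend on the Pi, and the import is deferred so the snippet loads even without OpenCV present):

```python
def open_usb_camera(index=0):
    """Open a USB webcam via OpenCV's V4L2 backend, as described above."""
    import cv2  # deferred: only needed when actually opening the camera
    cap = cv2.VideoCapture(index, cv2.CAP_V4L2)
    if not cap.isOpened():
        cap.release()
        raise RuntimeError(f"Could not open camera at index {index}")
    return cap
```

If index 0 fails, try index 1; the numbering depends on which camera devices are plugged in.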
Where should I install this?
Trust me, I just found everything I was looking for about my Raspberry Pi 🌹
Thank you, great preview on how to get started!
This was exactly the thing I was looking for. I will be buying things from their store as compensation!
And a really big thanks to you for explaining this so well😁😁
Lost two nights trying to run it on the latest OS! Use the previous one, it is mentioned in the article.
thank you, I was struggling with this and was utterly confused.
This is amazing ! this is soo very cool! Thank you for introducing me to coco!
Thank you VERY much!
Just download and extract this zip file. Trust me.
Does it work?
Does it work? And is it compatible with the Raspberry Pi 4?
Can I use a normal usb camera with this?
Hey this is great, thanks for putting this together. Really easy to follow along as a beginner. Is there a tutorial that builds on this and allows you to connect a speaker to the raspi so that whenever a specific object is detected, it makes a specific noise? Would love to see it!
Such a good idea. I'm yet to find a project that talks about it directly, but where I added the extra code for the servo control, if you instead replace that with code to set up a speaker and activate it, you would be off to the races.
Here is a related guide on speakers - core-electronics.com.au/tutorials/how-to-use-speakers-and-amplifiers-with-your-project.html
@@Core-Electronics Superstar, thanks!
Amazing, easy to follow, comprehensive video for object detection. Gonna use this to turn my RC car into an autonomous vehicle.
Thanks Tim, Keep up the great work :D
Oh man that sounds like an amazing project 😊! Definitely keep me posted on how it goes. The Forum is a great place for a worklog - forum.core-electronics.com.au/
Brother, I too am working on this project. Can you leave any leads? I am sending you an email; if you have time, please reply.
I got this error:
Traceback (most recent call last):
File "", line 35
cv2.putText(img,classNames[classId-1].upper(),(box[0] 10,box[1] 30),
SyntaxError: invalid syntax
What does this error mean? I have already installed cv2.
Thanks for sharing, this is really good and easy to follow
You're a life saviour. Thank you so much ❤
Hi, this was a really great project and helped me a lot, but can you help with how we can change the size of the box drawn around our object?
Size of the boxes tend to be based on the size of the detected object. But the Colour and Width of box can definitely be altered. Inside the code look for the section | if (draw): |
Then below that the line | cv2.rectangle(img,box,color=(0,255,0),thickness=2) |
By altering the (0,255,0) numbers you can change the colour of the box. By changing the thickness number you can have very thin lines or very bold lines. Font and other aesthetic changes can be done in the following lines.
@@Core-Electronics Thank you very much
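For reference, OpenCV colour tuples are in (Blue, Green, Red) order with each channel 0-255, so the guide's (0, 255, 0) is pure green. A few alternative values to try in that cv2.rectangle line (illustrative only):

```python
# OpenCV uses BGR channel order, not RGB
GREEN = (0, 255, 0)     # the guide's default box colour
RED = (0, 0, 255)       # red sits in the last slot because of BGR order
YELLOW = (0, 255, 255)  # full green + full red
THIN, BOLD = 1, 4       # example values for the thickness argument
```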
This video helped a lot! 👍
Sweet! 😊
Sir! Can you use a webcam instead of the original Raspberry Pi camera?
Yep :)
Thanks for the tutorial. Can you maybe show how to implement a new library? I want it to just detect if there is an animal; the kind doesn't matter.
I've been learning more about this recently. A great way to create custom libraries that a Raspberry Pi can then implement is through Edge Impulse. With this you will be able to train and expand the number of animals that the default COCO library comes with. Tutorials on this hopefully soon. www.edgeimpulse.com/
@@Core-Electronics Hi, do you have tutorials for custom object detection using your own model?
Great video, a big help for my thesis. Can it also be used for pests?
Glad to be of help 🙂 not quite sure what you mean though.
Thank you very much for your work!
Is it possible to use any USB camera instead of an official pi camera for this project?
100%, any USB webcam can work with this script; you will just need to adjust some code. Likely you will just need to change | cap = cv2.VideoCapture(0) | to | cap = cv2.VideoCapture(1) |. Hope that helps 😊.
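As a sketch of that change (the helper name and fallback order here are my own, not from the guide's script), something like this could try several device indices until one opens:

```python
def open_camera(preferred_indices=(0, 1, 2), opener=None):
    """Return the first capture that reports opened, else None.

    `opener` defaults to cv2.VideoCapture; it is injectable so the
    selection logic can be exercised without a camera attached.
    """
    if opener is None:
        import cv2  # imported lazily so the helper is testable headless
        opener = cv2.VideoCapture
    for idx in preferred_indices:
        cap = opener(idx)
        if cap.isOpened():
            return cap
        cap.release()  # free the device handle before trying the next index
    return None
```

A network stream works much the same way; `cv2.VideoCapture` also accepts a URL string (e.g. an MJPEG stream address) in place of an index.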
@@Core-Electronics thank you! I'll try this out tomorrow once I am able to and have set up my Pi again
Can I use Bullseye?
Hi Tim, I would like to ask how I can speed up the fps and the recognition rate. Or do I need to use the Lite version to get more speed?
Just perfect, thanks a lot man!
Hey Tim! I seem to encounter a problem while following your instructions: the | make -j $(nproc) | step stops every time at 40%. I re-typed and entered the same line several times but it didn't work. Is there any solution? Thanks for answering
Check the description for the article page. Scroll down to the questions section and you'll find the answer
Will I be able to add an entire category to the list of objects to be displayed in real-time? So instead of saying ['horse'], could I possibly mention a broader category of ['animal'] in the objects parameter? If not, please do let me know the correct way to approach this.
The fastest way would be to just add a long list like ['horse', 'dog', 'elephant'] etc. If you check the full write-up, I do something very similar there.
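A hedged sketch of that whitelist idea (the tuple layout and function name are assumptions, not the guide's actual code): keep only the detections whose label is on your list.

```python
# The ten animal classes present in the standard COCO label set.
COCO_ANIMALS = ["bird", "cat", "dog", "horse", "sheep", "cow",
                "elephant", "bear", "zebra", "giraffe"]

def filter_detections(detections, wanted_labels):
    """Keep only detections whose label is in `wanted_labels`.

    `detections` is assumed to be a list of (label, confidence, box)
    tuples, the shape many OpenCV detection loops produce.
    """
    wanted = {label.lower() for label in wanted_labels}
    return [det for det in detections if det[0].lower() in wanted]
```

Passing `COCO_ANIMALS` as the whitelist gives you a rough "any animal" filter without retraining anything.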
Can this be simulated?
Hello
I'm very happy to see this tutorial.
Thanks for the help
Is it possible to detect drugs or pills ?
For sure, but you will need to create a custom machine-learnt edge system. Come check out Edge Impulse; personally I think they are the best in the game for this kind of stuff (and totally free for a maker) - www.edgeimpulse.com/
Hi.
I wanted to ask, do you think the Raspberry Pi Zero cam could be used as a substitute? I'm currently working on a project that involves Raspberry Pis and cameras, and I have done a lot of research on what hardware to acquire. I haven't seen much benefit in using the V2 camera instead of the Zero cam; I actually think the Raspberry Pi Zero cam has better specs for its price when compared to the V2.
Should work perfectly fine 😊. If the video data is coming into the Raspberry Pi through the ribbon cable I don't think you would even need to change anything in the script.
Hi, great video! I know this may be unrelated, but how about recognition of objects on screen without a camera? Are there any projects you know of that use AI detection to control the cursor of the computer when it detects an object on screen? Cheers
Cheers mate, and an excellent idea. You can definitely feed this system data that has been pre-recorded or streamed in from another location; it would require some adjustments to the script. Also, in regards to AI detection controlling a cursor on a Raspberry Pi, come have a look at this video - ua-cam.com/video/hLMfcGKXhPM/v-deo.html
Can you create new dataset annotations for a new object and use them with this COCO model? For example, I want to detect a soccer ball. Can I just create annotations with something like DataTorch and use those annotations in conjunction with the provided model and weights?
How can I fuse this code with the face recognition one?
Hi,
Can I execute this project with a Raspberry Pi 3 A+ ?
You definitely can, it will just run a little bit slower.
Hi, this tutorial helped a lot with my project. I successfully set up and ran the code from the Raspberry Pi 4 Model B terminal; I just couldn't figure out how I can see the video output while the code is running in the terminal (not in Geany or Thonny). Maybe you could help me out :>>
Not quite sure why it wouldn't do that for you when you run the script in the terminal. Come write up a post here at our forum and post some pictures of the situation - forum.core-electronics.com.au/. Reference me in the post and I'll best be able to help 😊
Your project is the closest to my idea; in fact, it's practically it.
But I would like to run it 24/7 during a 10-day period (my holiday).
I would like it to press a button 10 minutes after each time it identifies a cat (mine) and nothing else:
Here is a cat:
wait 10 minutes
press the smart button (I'm looking for a way to flush the toilet each time after my cats have done their business)
Is this possible/feasible with this?
Definitely possible, and an excellent project to eliminate a chore 😊 or make for an even more independent kitty. The COCO library used in this guide has | cat | as one of the animals it can identify, and Raspberry Pis are excellent at running 24/7. So I reckon you're in for a project winner.
If you follow through the full write-up you'll be able to have a system that can identify cats (and only cats). That's the hard bit done. Solenoids are a way to trigger the button; check this guide for the process of getting one running with a Raspberry Pi - core-electronics.com.au/guides/solenoid-control-with-raspberry-pi-relay/
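As a rough sketch of the "wait 10 minutes, then press" timing (the class and method names are mine, and the solenoid call is shown only as a comment, not the guide's code), a small scheduler could be polled from the detection loop:

```python
import time

class DelayedTrigger:
    """Fire `action` once, a fixed delay after the first detection."""

    def __init__(self, delay_s, action, clock=time.monotonic):
        self.delay_s = delay_s
        self.action = action   # e.g. a function that pulses a solenoid
        self.clock = clock     # injectable clock, handy for testing
        self._due = None

    def seen(self):
        """Call whenever the target label (e.g. 'cat') is detected."""
        if self._due is None:
            self._due = self.clock() + self.delay_s

    def poll(self):
        """Call every loop iteration; returns True when the action fires."""
        if self._due is not None and self.clock() >= self._due:
            self._due = None
            self.action()
            return True
        return False

# In the detection loop this might look like (hypothetical names):
# trigger = DelayedTrigger(600, flush_solenoid)   # 600 s = 10 minutes
# if "cat" in detected_labels:
#     trigger.seen()
# trigger.poll()
```

Keeping the timer out of the detection code means repeated cat sightings during the 10-minute window don't restart the countdown.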
Thank you man! This was really helpful.
Would this program be able to email somebody about what object it is seeing, i.e. instead of turning the servo, email somebody?
Absolutely! Here is a straightforward guide to sending an email through a Python script. If you merge the two together you'll be smooth sailing - raspberrypi-guide.github.io/programming/send-email-notifications#:~:text=Sending%20an%20email%20from%20Python,-Okay%20now%20we&text=import%20yagmail%20%23%20start%20a%20connection,(%22Email%20sent!%22)
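As a minimal sketch of merging the two (the function name and addresses are hypothetical placeholders), you could compose the alert text in one place and hand it to yagmail:

```python
def compose_alert(label, confidence):
    """Build the subject and body for a detection alert email."""
    subject = f"Raspberry Pi alert: {label} detected"
    body = (f"The object detection script spotted a '{label}' "
            f"with {confidence:.0%} confidence.")
    return subject, body

# Sending it (requires `pip3 install yagmail` and an app password;
# the addresses below are placeholders, not real accounts):
# import yagmail
# yag = yagmail.SMTP("you@gmail.com")
# subject, body = compose_alert("person", 0.87)
# yag.send("alerts@example.com", subject, body)
```

Keeping the message-building separate from the sending makes it easy to rate-limit the emails so a lingering object doesn't flood your inbox.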
Excuse me, I need help please: is Tiny YOLO better for the Raspberry Pi, or can normal YOLO be used?
Great video, thank you for sharing.
Amazing , thank you !
Our pleasure!😊😊
You are a legend bro
I have a question: what if, when it detects a particular object (in my case, garbage), it has to generate a GPS location, or send the location of that point to another vehicle, like you did with your servo motor?
Thanks
Thanks for this. I want to use my Pi to do custom recognition of trees from their bark in a portable field unit. I have already tried TensorFlow Lite and an off-the-shelf database to do common object recognition.
If I had a small need to recognise, say, 50 trees, how many labelled images do I need of each tree for the training data?
Hi Charles, some Australian scientists concluded in a 2020 paper, “How many images do I need?” (Saleh Shahinfar et al.), that the minimum number of data points for a class should be in the 150-500 range. So if you had 50 species of trees to identify, you'd need roughly between 7,500 and 25,000 images/data points.
@@Core-Electronics thanks so much for this info. I have to get to work! I’m checking out the paper.
Ohhh man, where were you? I spent a week trying to install libraries. Thank you sooooo much
Please upload a video on motion tracking and tracing a moving object 👍
I am getting a cv2.imshow error while running object-ident.py in the Pi terminal; I connected to the Pi via SSH. What should I do?
Very clever doing it through SSH 😊. It shouldn't be an issue doing it that way so long as you go through the full setup process. If you come write me a message on the Core Electronics forum under this topic, I'll best be able to help you. That way you can send through screen grabs of your terminal command errors.
Can the camera understand where the object is, and tell the Raspberry Pi there is a cup on the left or right side?
It absolutely can. Just requires a little bit of coding 😊
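A tiny sketch of that "little bit of coding" (the function name and the thirds-based split are my own choices): compare the bounding-box centre against the frame width.

```python
def horizontal_position(box, frame_width):
    """Classify a bounding box as 'left', 'centre' or 'right'.

    `box` is assumed to be (x, y, w, h) in pixels, the layout
    OpenCV's drawing helpers commonly use.
    """
    x, _, w, _ = box
    centre_x = x + w / 2          # horizontal midpoint of the box
    if centre_x < frame_width / 3:
        return "left"
    if centre_x > 2 * frame_width / 3:
        return "right"
    return "centre"
```

For example, on a 640-pixel-wide frame, a cup whose box centre lands below x = 213 would be reported as "left".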
Hi, thank you for the explanation and code. I tried the code with the V3 camera, but it didn't work. Additionally, can you tell me how to create an autostart for this design? The 5 ways to autostart don't work ("Output:957): Gtk-WARNING **: 19:31:41.632: cannot open display:"). I'm driving a relay with it to keep the chickens away from the terrace with a water jet. Beautiful design! Greetings, Luc.
Hey Luc,
To start, you will need to install a new driver for the V3 camera so it can work with the older 'Buster' Raspberry Pi OS. Check out how to do it here - forum.arducam.com/t/16mp-autofocus-raspbian-buster-no-camera-available/2464 -
And if you want to autostart your system come check out how here (I would use CronTab) - www.tomshardware.com/how-to/run-script-at-boot-raspberry-pi
Come pop to our forum if you need any more help 😊 forum.core-electronics.com.au/latest
Kind regards,
Tim
Instead of a Raspberry Pi 4, can we use a Raspberry Pi Zero 2 W if speed doesn't matter to me?
The Raspberry Pi 5 with the AI Kit is pretty slick; I just need to get better identification.
We are very excited over here for the AI kit as well! Not the most powerful chip, but performance per dollar and Watt is quite respectable.
@@Core-Electronics I ordered mine the day it was announced and have been running it nonstop for a few days now, just with a demo detection running to see how it goes.
Nicely explained. Please, how do I apply this to a new dataset different from this one?
It will require some dedicated effort but you can customise this object detection dataset using edge impulse. www.edgeimpulse.com/
That way you can add whatever object or creature you'd like 😊 I hope I understood correctly.
How do you transfer a dataset to the Pi? Do you store it in a file, or does it need adding to the code?
Hi, is there any option for tracking a QR code with the pan and tilt module?
thank youu veryy muchh🙇
❤😍
Hi Core Electronics, I am looking for a lens for my Raspberry Pi HQ camera module... I want a good quality image and a closer view for defect detection on my FFF 3D printed parts... can you suggest some lenses? Thanks
There is a microscope lens that might be suitable for looking at 3D print defects. Give that a look. core-electronics.com.au/microscope-lens-for-the-raspberry-pi-high-quality-camera-0-12-1-8x.html
What if I want to send a string to a receiver when it detects a certain class?
Amazing, sir! How can I add a speech module, so that when it detects any object it speaks the object's name aloud? That would make it easy to understand what has been detected.
Can I run your project on a MacBook? If possible, what kind of modifications to the hardware would I need? Thanks
Hi Tim, the video was great! BTW, do you know another dataset that I could use with this code, and can you explain how to train it to detect a new object?
Hi, how can I run the object detection without a connection to a laptop or manually running the code?
You've got heaps of different options. For example, you could run the script automatically every time the Raspberry Pi boots (using cron jobs; check here for a guide - ua-cam.com/video/rErAOjACT6w/v-deo.html), or you could run the code remotely using your phone (check here - core-electronics.com.au/tutorials/raspcontrol-raspberry-pi.html)
Cool, but what would it take to make this work at 60 fps (doing the image recognition on every frame and not lagging behind when things move fast)?
Great video! Can you run this portably on a battery, not connected to the internet?
All the processing is done on the edge, thus you only need the hardware (no calculations happen over WiFi or via the cloud). So if you had a big enough battery, you could definitely run this system without the Internet 😊.
Thanks for the quick reply!
After watching this, I have an urge to train one of these to identify the difference between male and female whitetail deer for a game camera....
That would be ultra rad!
If you want to keep those deer in frame the whole time perhaps an automatic Machine Learned tracking system would help 😊 something like this core-electronics.com.au/guides/Face-Tracking-Raspberry-Pi/
Hey Core Electronics! Can I make it detect pistols only?
Give Edge Impulse a look. This library doesn't have that as an object, but you can use Edge Impulse to train/modify the standard COCO library to include new objects and things.
Can I connect the detected images to Google lens url ..could you please help me with the code
Im sure you can. Come pop us a forum post here and we'll get the best people to help you - forum.core-electronics.com.au/
How much time will it take after | make -j $(nproc) |? On my side, after 3 hours my system reboots automatically. Help me out in this situation
3 hours is definitely too long for installation! Come jump into the full written-up article; at the bottom is a whole bunch of successful troubleshooting that you can utilise.
@@Core-Electronics Thanks for Reply. I did that successfully. Thanks for your help.
One more thing: I want to connect multiple cameras to the Raspberry Pi via GPIO. Is it possible? Can you help me with that?
Can you change the SD card to cloud storage?
Hello, great video, but how do I get the coordinates of the tracked objects? I am trying to build a robot that can identify and pick up objects; how would I find the coordinates?
I want to get an output when the algorithm recognises an animal, and I want to send this output to Firebase. What am I supposed to do?
Is there any way to speed up the OpenCV video capture?
Hi, is there a way to create a log of all recognised animals/humans so the data can be consumed?
Would this work to detect pigeons out of the box or would it need training?
I believe the model we used for this tutorial is capable of identifying birds; I'm unsure if it is trained specifically to differentiate between pigeons and other birds, though. A custom CV model better suited to bird identification would be more reliable.
I have an IMX219; apparently it will not work with OpenCV. Is there a way to use GStreamer to make it work in OpenCV?
For anyone it might help out later: I followed the commands in the guide verbatim and was having issues with the | cmake -D CMAKE_BUILD_TYPE=RELEASE \ | command and the 4 following commands that are all grouped together. I was highlighting the text on the web page, right-clicking to copy, and using Ctrl+Shift+V to paste into the terminal. That worked great for most of the commands, but it doesn't appear to work for that last paragraph. I had to manually type it in myself for it to work correctly.
Tim, if you do read this, first of all thanks. But I am a tad lost on exactly when to change CONF_SWAPSIZE back to 100. I assume after the installation is fully complete, but to some of us noobs it's a bit unclear, I guess. Also, I don't know exactly why, but it says that | sudo pip3 install numpy | already had its bits installed on Buster, so it might be redundant. Unless it's more of a foolproof guide for other versions of the OS.
Finally able to finish up my project! Once this finishes installing...
:P
Cheers for this write-up mate 🙂 I'll legit jump into the guide and make it a little clearer when to swap back the CONF_SWAPSIZE. I'll make it more similar to what I have in my written-up Face Recognition guide. My intention is to 'noob-proof' it as best as I can, so everyone can have open-source machine-learned systems in the palm of their hands that they've created themselves.
Very glad you now have it all up and running too!
@@Core-Electronics I didn't think to check your guide on the other OpenCV videos. I'll go do that now. I finally have the next 2 days off, so I can fully jump into it.
I got past this issue, but now when I run the script it's having issues no one else in the forums had. I assume this was due to some mistakes I may have made when trying to get the multi-line command working. Idk, so lost with all of this lol.
I'm good with Arduino, but Raspberry Pi, Linux, console commands and scripts vs coding... so much to learn at once. You are such a huge help while I'm lost and overwhelmed in this new little world.
Hi, great video! Can I use a USB webcam instead of the Pi cam? Is it just a case of changing the code?
Hey Tim! Here's a question: is the COCO-trained model you use generated by the YOLO algorithm? This is related to the writing of my graduation thesis. I would be grateful if you could provide more suggestions.
Sorry for getting to this so late. A lot can be learned here - cocodataset.org/ . Also there are a ton of research papers as people are unraveling this technology that are worth exploring (or adding to the bottom of a graduation thesis). Good luck mate!
@@Core-Electronics Thanks so much! I believe that with your help I can get a high score. Best wishes!
Hi Tim, can the coral accelerator be integrated in this project?
Absolutely
do you have any guides for using an ultra low light camera module such as Arducam B0333 camera module (Sony Starvis IMX462 sensor)
Great video !!
The code zip file is not available at the bottom of your posted article. Will you please upload it?
The code should be available at the bottom of the article or in the comment section. If you can't see it, pop me a reply and we'll figure out what's happening.
Hey, great video! May I know where to tinker if I will be using an ESP32 camera to stream the video? Thank you in advance!
Hey mate, cheers 🙂 the line to alter in the code is | cap = cv2.VideoCapture(0) |, changing that 0 to another index number that represents your ESP32 camera stream. Come make a forum post if you need an extra hand.
@@Core-Electronics Hi, I would like some extra hands on this one. How can I implement the ESP32 cam as my video stream for real-time object detection using the code? Thanks!
Definitely a great question for our Core Electronics Forum 😊
Can this be performed on a Raspberry Pi 3B+?
Yes but there will be a slight delay in the video feed.
Awesome vid, clear fast and accurate 🌟
hi is this usable for pest detection?
Ah I see now; it depends on the pest. If you're interested in large pests like possums, rats, skunks, baboons or the like, then this could be useful. Smaller critters like gross bugs, likely not; unless you had some doorway to the outside where you could watch the bugs come in, and you had a camera up really close.
Can the OpenCV for RPi be used on an RPi 4 with 2 GB of RAM?
Absolutely sorted mate; if you're using a Raspberry Pi 4, you'll be good to go 😊
Hello, I am trying to create a design that will recognise different trash types. Is this image recognition able to perceive things like cardboard, paper, tissue, or silver foil as trash items?
Hey Max, I'm currently working on a very similar project. My workshop can get a bit messy, so I am setting it up to scream at me when it gets untidy. I will report back to you on how it goes, or if you've had some luck, I'd be more than interested.
Cheers!