Object Identification & Animal Recognition With Raspberry Pi + OpenCV + Python

  • Published 4 Oct 2024

COMMENTS • 329

  • @Core-Electronics
    @Core-Electronics  5 days ago

    Hey everyone! We have a new updated version of this guide that uses a more advanced model and runs a bit smoother. You can check it out here: ua-cam.com/video/XKIm_R_rIeQ/v-deo.html
    Please note that we are keeping this older guide up for legacy reasons; it requires the older Buster OS (the new guide runs on the newer Bookworm OS).

  • @mike0rr
    @mike0rr 2 years ago +31

    This was the fastest, cleanest, most comprehensive guide I have found on OpenCV for Pi.
    The only thing that would make this better would be an install script, but even then I think it's good for some manual work to be left anyway. Get people's hands dirty and force them to explore and learn more.
    So cool to have the power of machine learning and Computer Vision in our hands to explore and experiment with. What a time to be alive!

    • @Core-Electronics
      @Core-Electronics  2 years ago +5

      Very glad you have your system all up and running 🙂 and I absolutely agree. Something about a machine learned system that runs on a palm-sized computer that you have put together yourself really feels like magic ✨✨

  • @maxxgraphix
    @maxxgraphix 1 year ago +5

    To use a USB cam, install fswebcam, then change cv2.VideoCapture(0) to cv2.VideoCapture(0, cv2.CAP_V4L2) in the script.
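
    A minimal sketch of that change, assuming the USB webcam shows up as /dev/video0 (index 0); the second argument forces OpenCV's V4L2 backend:

      import cv2

      # open the USB webcam through the V4L2 backend instead of the default one
      cap = cv2.VideoCapture(0, cv2.CAP_V4L2)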

  • @soulo6661
    @soulo6661 2 years ago +5

    Trust me, I just found everything I was looking for about my Raspberry Pi 🌹

  • @stevenhillman6376
    @stevenhillman6376 1 year ago +2

    Excellent. I came to this after seeing the facial recognition video, as it would help with a project I have in mind. However, after seeing this and how easy it is to set up and use, my project will be more ambitious. Thanks again and keep up the good work.

  • @yukiyavalentine5867
    @yukiyavalentine5867 8 days ago

    thank you, great preview on how to get started !

  • @jacksonpark5001
    @jacksonpark5001 2 years ago +2

    This was exactly the thing I was looking for. I will be buying things from their store as compensation!

  • @biancaar8032
    @biancaar8032 1 year ago +2

    And a really big thanks to you for explaining this so well😁😁

  • @daniiltimin5396
    @daniiltimin5396 1 year ago +1

    Lost two nights trying to run it on the latest OS! Use the previous one, it is mentioned in the article.

    • @specterstrider186
      @specterstrider186 11 months ago +1

      thank you, I was struggling with this and was utterly confused.

  • @nishyu9101
    @nishyu9101 1 year ago +1

    This is amazing! This is so very cool! Thank you for introducing me to COCO!

  • @FlyWithSergio
    @FlyWithSergio 2 months ago +1

    Thank you VERY much!

  • @sku1196
    @sku1196 2 years ago +3

    Hey Tim! I successfully managed to run this project in about an hour. I didn't compile OpenCV from source, though; I installed it through pip and still got it working, and it's running pretty smoothly. Hope you could change the OpenCV compiling part, as it takes too long (took me 3 days and was still unsuccessful) and is unnecessary. Thank you.
    I used a Raspberry Pi 3B+.
    If you use a Raspberry Pi 4, it could be much faster and smoother.

    • @Core-Electronics
      @Core-Electronics  2 years ago +1

      If you can provide some more information I'd happily update the guides 😊 (Perhaps jump onto our core electronics forum and do a quick write up on your process)

  • @FUKTxProductions
    @FUKTxProductions 1 year ago +23

    Just download and extract this zip file. Trust me.

  • @mattclagett778
    @mattclagett778 7 months ago +1

    Can I use a normal USB camera with this?

  • @marnierogers3931
    @marnierogers3931 3 years ago +3

    Hey this is great, thanks for putting this together. Really easy to follow along as a beginner. Is there a tutorial that builds on this and allows you to connect a speaker to the raspi so that whenever a specific object is detected, it makes a specific noise? Would love to see it!

    • @Core-Electronics
      @Core-Electronics  3 years ago +2

      Such a good idea. I'm yet to find a project that talks about it directly, but where I added the extra code in for the servo control, if you instead replace that with code to set up a speaker and activate it, you would be off to the races.
      Here is a related guide on speakers - core-electronics.com.au/tutorials/how-to-use-speakers-and-amplifiers-with-your-project.html
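
      A rough sketch of that swap (the sound file path and the "cat" label are placeholders, and objectInfo is assumed to be the list of (box, className) pairs returned by the guide's getObjects() helper):

        import os

        ALERT_SOUND = "/home/pi/alert.wav"   # hypothetical path to a .wav file on the Pi

        def play_alert(objectInfo, target="cat"):
            # play a sound through the default audio output whenever the target object is seen
            for box, className in objectInfo:
                if className == target:
                    os.system("aplay " + ALERT_SOUND)   # aplay ships with Raspberry Pi OS
                    break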

    • @marnierogers3931
      @marnierogers3931 3 years ago +1

      @@Core-Electronics Superstar, thanks!

  • @thezmanner7478
    @thezmanner7478 2 years ago +1

    Amazing, easy-to-follow, comprehensive video for object detection. Gonna use this to turn my RC car into an autonomous vehicle.
    Thanks Tim, Keep up the great work :D

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Oh man that sounds like an amazing project 😊! Definitely keep me posted on how it goes. The Forum is a great place for a worklog - forum.core-electronics.com.au/

    • @uzairsiyal-b9p
      @uzairsiyal-b9p 10 months ago

      Brother, I too am working on this project. Can you leave any leads? I am sending you an email; if you have time, please reply.

  • @rizkylevy8154
    @rizkylevy8154 1 year ago +2

    I got an error:
    Traceback (most recent call last):
    File "", line 35
    cv2.putText(img,classNames[classId-1].upper(),(box[0] 10,box[1] 30),
    SyntaxError: invalid syntax
    What does this error mean? I already installed cv2.
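
    That SyntaxError points at the missing '+' signs inside the coordinate tuple, which often get dropped when copying. The corresponding line in the guide reads roughly like this (exact font, colour and thickness values may differ):

      # '+' restored inside the coordinate tuple
      cv2.putText(img, classNames[classId - 1].upper(), (box[0] + 10, box[1] + 30),
                  cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)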

  • @olafmarzocchi6194
    @olafmarzocchi6194 2 months ago

    This is a cool, clear, straightforward video. Well done.
    Question: does selecting specific objects make the identification faster? For example, I only want birds, cats, and people, to reduce load. Would it work?

    • @Core-Electronics
      @Core-Electronics  2 months ago

      That's a really great question that I am not 100% sure on. My first guess is that you might see a bit of improvement, but I don't think it would be incredibly significant. If you do some of these tests, let us know, we are very curious as well!

  • @stefanosbek
    @stefanosbek 2 years ago +1

    Thanks for sharing, this is really good and easy to follow

  • @joelbay1468
    @joelbay1468 6 months ago

    You're a life saviour. Thank you so much ❤

  • @aadigupta4252
    @aadigupta4252 2 years ago +2

    Hi, this was a really great project and helped me a lot, but can you help with how we can change the size of the box drawn around our object?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      The size of the boxes tends to be based on the size of the detected object, but the colour and width of the box can definitely be altered. Inside the code look for the section | if (draw): |
      Then below that the line | cv2.rectangle(img,box,color=(0,255,0),thickness=2) |
      By altering the (0,255,0) numbers you can change the colour of the box. By changing the thickness number you can have very thin or very bold lines. Font and other aesthetic changes can be made in the following lines.
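
      A small sketch of those tweaks (img, box and className are whatever the surrounding detection loop already provides; the colour values below are just examples):

        # colours are BGR tuples: (0, 255, 0) is green, (0, 0, 255) is red
        cv2.rectangle(img, box, color=(0, 0, 255), thickness=4)            # thicker red box
        cv2.putText(img, className.upper(), (box[0] + 10, box[1] + 30),
                    cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2)           # matching label colour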

    • @aadigupta4252
      @aadigupta4252 2 years ago +1

      @@Core-Electronics Thank you very much

  • @diannevila8837
    @diannevila8837 1 year ago +2

    Hello, is it possible to combine the animal, object, and person or facial recognition at the same time? I'm working on that kind of project, could you help me, sir? Please...

    • @Core-Electronics
      @Core-Electronics  1 year ago

      Aww what an excellent idea! You will start wanting more powerful hardware very quickly going down this path. Come check out the Oak-D Lite (which is an excellent way to start stacking multiple AI systems whilst still using a Raspberry Pi) - ua-cam.com/video/7BkHcJu57Cg/v-deo.html

    • @diannevila8837
      @diannevila8837 1 year ago

      @@Core-Electronics How about just identifying whether it is an animal, a thing, a person, or some kind of moving object, and at the same time capturing a preview picture of it? How can you make this? And also, how can you make it so that if the Raspberry Pi detects a person it emails you, but if it is not a person it does not email you? Hoping you can help me with my research.

  • @lordergame6147
    @lordergame6147 2 years ago +2

    This video helped a lot! 👍

  • @mark-il8oo
    @mark-il8oo 1 year ago +2

    Your website, products and educational resources are amazing. I was wondering if you had any advice as to how to further train the machine to identify less common objects? I was hoping to use it for a drone video feed and train it to identify people, for basic search and rescue functions. I am a volunteer in my local community, hence my specific question :-)

  • @Desenrad
    @Desenrad 5 days ago

    How can I add more detection objects, like a light bulb on a wall that turns a certain color? And can I add code to play a sound on a speaker when detection happens?

  • @SKWDiesel1
    @SKWDiesel1 2 months ago

    I have the perfect application for this but the objects I need to identify are very similar and incredibly difficult for experienced humans to see accurately. Would this just mean supplying more training data to the system?

  • @suryanarayansanthakumar3528
    @suryanarayansanthakumar3528 2 years ago +1

    Hi Tim,
    Thank you so much on this video for demonstrating how to use OpenCV with the Raspberry Pi.
    I am willing to follow along with your process to install OpenCV and test it out.
    I am just wondering if OpenCV will run on the new Raspberry Pi OS.

    • @Core-Electronics
      @Core-Electronics  2 years ago

      At this current stage I would recommend using the older 'Buster' OS with this guide. If you want to use Bullseye with machine scripts come check this guide on the OAK-D Lite - core-electronics.com.au/guides/raspberry-pi/oak-d-lite-raspberry-pi/

  • @gary_0617
    @gary_0617 1 year ago +1

    Great video! Good for beginners.
    I want to get the names of the objects into a string and print them when an object is detected.
    Can you give me any tips or help? Thank you so much.

    • @Core-Electronics
      @Core-Electronics  1 year ago

      Cheers mate! In the main script underneath the line | result, objectInfo = getObjects(img,0.45,0.2) | is another line stating | #print(objectInfo) |. If you delete that | # | then save and run it again, you will be printing the name of the identified object to the shell.
      Hope that helps 😊
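
      And a small follow-on sketch for getting the names into a single string (assuming objectInfo is the list of (box, className) pairs returned by the guide's getObjects()):

        result, objectInfo = getObjects(img, 0.45, 0.2)
        # join every detected class name into one comma-separated string
        detected = ", ".join(className for box, className in objectInfo)
        if detected:
            print("Detected:", detected)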

  • @Jianned-arc
    @Jianned-arc 11 months ago +1

    Sir! Can you use a webcam instead of the original Raspberry Pi camera?

  • @guruvasan
    @guruvasan 2 years ago +2

    Hi, I'm having some problems with my Raspberry Pi 4 Model B: while updating & upgrading it shows some errors. I'm going to do an object detection project on the Raspberry Pi at my engineering college, and I have to complete the project before the 28th of April, so please kindly reply to my comment and please help me 🙏

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Heyya mate, if you are running into errors come check the comment section on the full write up as there is a lot of successful troubleshooting to learn from and you can post some screenshots of your errors there. I'll help get your system running proper 😊

  • @elvarzz
    @elvarzz 11 months ago +1

    Hey man great video. Any chance you can cover how to use this same concept to detect anomalies instead? Rather than looking for specific objects expected to be there in the camera, the program learns the objects expected to be there and detects when an unusual object is found. Thanks.

  • @michaelauth8936
    @michaelauth8936 2 years ago +2

    Great video, I just came up with an idea for a project using this. I have no experience with Pi's but basically it would be using a camera to detect a squirrel on a bird feeder and then playing some loud noise through a speaker. Would this be a difficult thing to do?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Sounds like an absolutely excellent idea that could definitely be implemented using this kind of object detection. We just had a new project posted on our website worth checking out, all about using a Raspberry Pi to track kangaroos; when it spots one it sends photos to a website server - core-electronics.com.au/projects/rooberry-pi

  • @roblaicekameni8273
    @roblaicekameni8273 2 years ago +1

    Very good video and the explanations are well detailed. Please, I have a project that consists of detecting paper; your technique works with other objects but does not work with paper. I don't know if it's possible to teach the system to recognize paper. Thank you.

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Edge Impulse is your friend here - www.edgeimpulse.com/
      This will let you customise already-created AI systems like the COCO library. Stepping through this system you will be able to modify the COCO library to recognise paper 😊

  • @lukasscheunemann4059
    @lukasscheunemann4059 2 years ago +1

    Thanks for the tutorial. Can you maybe show how to implement a new library? I want it to just detect if there is an animal; the kind doesn't matter.

    • @Core-Electronics
      @Core-Electronics  2 years ago

      I've been learning more about this recently. A great way to create custom libraries that a Raspberry Pi can then implement is through Edge Impulse. With this you will be able to train and expand the number of animals that the default COCO library comes with. Tutorials on this hopefully soon. www.edgeimpulse.com/

    • @xyliusdominicibayan6215
      @xyliusdominicibayan6215 2 years ago

      @@Core-Electronics Hi, do you have tutorials for custom object detection using your own model?

  • @maritesdespares4112
    @maritesdespares4112 2 years ago +1

    Great video, big help for my thesis. Can it also be used for pests?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Glad to be of help 🙂 not quite sure what you mean though.

  • @muhammadumarsotvoldiev8768
    @muhammadumarsotvoldiev8768 1 year ago +1

    Thank you very much for your work!

  • @Old_SDC
    @Old_SDC 2 years ago +2

    Is it possible to use any USB camera instead of an official pi camera for this project?

    • @Core-Electronics
      @Core-Electronics  1 year ago +2

      100% any USB webcam can work with this script. You will just need to adjust some code. Likely you will just need to change | cap = cv2.VideoCapture(0) | to | cap = cv2.VideoCapture(1) |. Hope that helps 😊.
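
      If the right index isn't obvious, a quick probe like this can find it (a rough sketch, not from the guide):

        import cv2

        # try the first few device indices and report the first one that returns a frame
        for index in range(3):
            cap = cv2.VideoCapture(index)
            ok, _ = cap.read()
            cap.release()
            if ok:
                print("Working camera found at index", index)
                break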

    • @Old_SDC
      @Old_SDC 1 year ago +2

      @@Core-Electronics thank you! I'll try this out tomorrow once I am able to and have set up my Pi again

    • @David-pp9th
      @David-pp9th 1 year ago

      Can I use Bullseye?

  • @王文瑄-k3b
    @王文瑄-k3b 1 year ago +2

    Hi Tim, I would like to ask how I can speed up the FPS and the recognition rate? Or do I need to use the lite version to speed it up?

  • @p.b.9515
    @p.b.9515 1 year ago +1

    Just perfect, thanks a lot man!

  • @ChadhaBenyoussef
    @ChadhaBenyoussef 1 year ago +1

    Hey Tim! I seem to encounter a problem while following your instructions: the make -j $(nproc) step stops every time at 40%, and I re-typed and entered the same line several times but it didn't work. Is there any solution? Thanks for answering.

    • @timgivney
      @timgivney 11 months ago

      Check the description for the article page. Scroll down to the questions section and you'll find the answer

  • @sanjaysuresh743
    @sanjaysuresh743 2 years ago +1

    Will I be able to add an entire category to the list of objects to be displayed in real-time? So instead of saying ['horse'], could I possibly mention a broader category of ['animal'] in the objects parameter? If not, please do let me know the correct way to approach this.

    • @Core-Electronics
      @Core-Electronics  2 years ago +1

      The fastest way would be to just add a long list like ['horse', 'dog', 'elephant'] etc. If you check the full write-up, I do something very similar there.
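
      For example, something along these lines (assuming the getObjects() helper from the full write-up, which takes an optional objects list):

        # only report detections whose class name is in this list
        animals = ['horse', 'dog', 'elephant', 'cat', 'bird', 'sheep', 'cow']
        result, objectInfo = getObjects(img, 0.45, 0.2, objects=animals)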

  • @wali7862
    @wali7862 5 months ago +1

    Can this be simulated?

  • @oumargbadamassi7864
    @oumargbadamassi7864 1 year ago +1

    Hello,
    I'm very happy to see this tutorial.
    Thanks for the help.
    Is it possible to detect drugs or pills?

    • @Core-Electronics
      @Core-Electronics  1 year ago

      For sure, but you will need to create a custom machine-learned edge system. Come check out Edge Impulse; personally I think they are the best in the game for this kind of stuff (and totally free for a maker) - www.edgeimpulse.com/

  • @zakashii
    @zakashii 2 years ago +2

    Hi.
    I wanted to ask, do you think the Raspberry Pi Zero cam could be used as a substitute? I'm currently working on a project that involves Raspberry Pis and cameras and have done a lot of research on what hardware to acquire. I haven't seen much benefit in using the V2 camera instead of the Zerocam; I actually think the Raspberry Pi Zero cam has better specs for its price when compared to the V2.

    • @Core-Electronics
      @Core-Electronics  2 years ago +1

      Should work perfectly fine 😊. If the video data is coming into the Raspberry Pi through the ribbon cable I don't think you would even need to change anything in the script.

  • @Catge
    @Catge 2 years ago +3

    Hi, great video! I know this may be unrelated, but how about recognition of objects on screen without a camera? Are there any projects you know of that use AI detection to control the cursor of the computer when it detects an object on screen? Cheers

    • @Core-Electronics
      @Core-Electronics  2 years ago +1

      Cheers mate and excellent ideas. You can definitely feed this system data that has been pre-recorded or streamed in from another location; it would require some adjustments to the script. Also in regards to AI detection to control a cursor on a Raspberry Pi, come have a look at this video - ua-cam.com/video/hLMfcGKXhPM/v-deo.html

  • @nacholibre1465
    @nacholibre1465 1 month ago

    Can you create new dataset annotations for a new object and use them with this COCO model? For example, I want to detect a soccer ball. Can I just create annotations with something like DataTorch and use those annotations in conjunction with the provided model and weights?

  • @igval2982
    @igval2982 1 year ago +1

    How can I fuse this code with the face recognition one?

  • @nithins9640
    @nithins9640 1 year ago +2

    Hi,
    Can I execute this project with a Raspberry Pi 3 A+ ?

    • @Core-Electronics
      @Core-Electronics  1 year ago

      You definitely can, it will just run a little bit slower.

  • @missmickey
    @missmickey 1 year ago +1

    Hi, this tutorial helped a lot with my project. I successfully set up and ran the code in the Raspberry Pi 4 Model B terminal; I just couldn't figure out how I can see the video output while the code is running in the terminal (not in Geany or Thonny). Maybe you could help me out :>>

    • @Core-Electronics
      @Core-Electronics  1 year ago +1

      Not quite sure why it wouldn't do that for you when you run the script in the terminal. Come write up a post here at our forum and post some pictures of the situation - forum.core-electronics.com.au/. Reference me in the post and I'll best be able to help 😊

  • @TakeElite
    @TakeElite 2 years ago +1

    Yours is the closest project to my idea, in fact it's practically that.
    But I would like to run it 24/7 during a 10-day period (my holiday).
    I would like it to press a button 10 minutes after each time it identifies a cat (mine) and nothing else:
    Here is a cat:
    wait 10 minutes
    press the smart button (I'm looking for a way to flush the toilet each time after my cats have done their needs)
    Is this possible/feasible with this?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Definitely possible and an excellent project to eliminate a chore 😊 or make for an even more independent kitty. The COCO library used in this guide has | Cat | as one of the animals it can identify. And Raspberry Pis are excellent at running 24/7. So I reckon you're in for a project winner.
      If you follow through the full write-up you'll be able to have a system that can identify cats (and only cats). That's the hard bit done. Solenoids are a way to trigger the button; check this guide for the process on getting one to run with a Raspberry Pi - core-electronics.com.au/guides/solenoid-control-with-raspberry-pi-relay/
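
      A very rough sketch of the cat-then-wait-then-trigger logic (GPIO pin 17 and the 10-minute delay are placeholders; objectInfo is assumed to be the (box, className) list from getObjects(), and note that time.sleep() blocks the video loop while it waits):

        import time
        from gpiozero import OutputDevice

        flusher = OutputDevice(17)        # hypothetical pin driving the relay/solenoid

        def handle_detections(objectInfo):
            # trigger the flush 10 minutes after a cat is spotted
            if any(className == "cat" for box, className in objectInfo):
                time.sleep(600)           # wait 10 minutes (blocks the loop)
                flusher.on()              # "press the button"
                time.sleep(1)
                flusher.off()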

  • @ashanperera5169
    @ashanperera5169 6 months ago

    Thank you man! This was really helpful.

  • @xavierdawkins920
    @xavierdawkins920 1 year ago +1

    Would this program be able to email somebody about what object it is seeing, like instead of turning the servo, email somebody?

    • @Core-Electronics
      @Core-Electronics  1 year ago

      Absolutely! Here is a straightforward way to send an email through a Python script. If you merge those two together you'll be smooth sailing - raspberrypi-guide.github.io/programming/send-email-notifications#:~:text=Sending%20an%20email%20from%20Python,-Okay%20now%20we&text=import%20yagmail%20%23%20start%20a%20connection,(%22Email%20sent!%22)
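
      A minimal sketch of the email side using yagmail, as in the linked guide (the addresses and app password are placeholders):

        import yagmail

        yag = yagmail.SMTP("your.address@gmail.com", "your-app-password")

        def email_detection(className):
            # send a short notification naming the detected object
            yag.send(to="recipient@example.com",
                     subject="Object detected",
                     contents="The Raspberry Pi just detected a " + className + ".")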

  • @dominicroman5038
    @dominicroman5038 5 months ago

    Excuse me, I need help please: is tiny YOLO better for the Raspberry Pi, or can normal YOLO be used?

  • @suheladesilva2933
    @suheladesilva2933 7 months ago

    Great video, thank you for sharing.

  • @fradioumayma7919
    @fradioumayma7919 1 year ago +1

    Amazing , thank you !

  • @uzairsiyal-b9p
    @uzairsiyal-b9p 10 months ago

    You are a legend, bro.
    I have a question: what if, when it detects a particular image (in my case, garbage), it has to generate a GPS location or send the location of that point to another vehicle, like you did with your servo motor?

  • @jeevanlalchauhan8222
    @jeevanlalchauhan8222 1 day ago

    Thanks

  • @charlesblithfield6182
    @charlesblithfield6182 1 year ago +1

    Thanks for this. I want to use my Pi to do custom recognition of trees from their bark in a portable field unit. I already tried TensorFlow Lite and an off-the-shelf database to do common object recognition.
    If I needed to recognize, say, 50 trees, how many labelled images do I need of each tree for the training data?

    • @Core-Electronics
      @Core-Electronics  1 year ago

      Hi Charles, some Australian scientists concluded in a 2020 paper “How many images do I need?” (Saleh Shahinfar, et al) that the minimum number of data points for a class should be in the 150 - 500 range. So if you had 50 species of trees to identify from you'd need roughly between 7,500 - 25,000 images/data points.

    • @charlesblithfield6182
      @charlesblithfield6182 1 year ago +1

      @@Core-Electronics thanks so much for this info. I have to get to work! I’m checking out the paper.

  • @Yazeed__Almutairi2
    @Yazeed__Almutairi2 4 months ago

    Ohhh man, where have you been? I spent a week trying to install libraries. Thank you sooooo much.

  • @Tetrax-lt8is
    @Tetrax-lt8is 4 months ago

    Please upload a video on just motion tracking and tracing a moving object 👍

  • @meghap5221
    @meghap5221 2 years ago +1

    I am getting a cv2.imshow error while running object-ident.py in the Pi terminal; I connected to the Pi via SSH. What should I do?

    • @Core-Electronics
      @Core-Electronics  2 years ago +1

      Very clever doing it through SSH 😊. It shouldn't be an issue doing it that way so long as you go through all the set-up process. If you come write me a message on the Core Electronics forum under this topic I'll best be able to help you. That way you can send through screen grabs of your terminal command errors.

  • @abdulrhamanalkinani1955
    @abdulrhamanalkinani1955 2 years ago +1

    Can the camera understand where the object is, and tell the Raspberry Pi there is a cup on the left side or the right?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      It absolutely can. Just requires a little bit of coding 😊

  • @jonathanboot
    @jonathanboot 1 year ago +1

    Hi, thank you for the explanation and code. I tried the code with the V3 HD camera, but it didn't work. Additionally, can you tell me how to create an autostart for this design? The 5 ways to autostart don't work ("Output:957): Gtk-WARNING **: 19:31:41.632: cannot open display:"). I'm sending a relay with it to keep the chickens away from the terrace with a water jet. Beautiful design! Greetings, Luc.

    • @Core-Electronics
      @Core-Electronics  1 year ago

      Hey Luc,
      To start you will need to update a new driver for the V3 Camera so it can work with the older 'Buster' Raspberry Pi OS. Check out how to do it here - forum.arducam.com/t/16mp-autofocus-raspbian-buster-no-camera-available/2464 -
      And if you want to autostart your system come check out how here (I would use CronTab) - www.tomshardware.com/how-to/run-script-at-boot-raspberry-pi
      Come pop to our forum if you need any more help 😊 forum.core-electronics.com.au/latest
      Kind regards,
      Tim

  • @Dhanu-bc8pn
    @Dhanu-bc8pn 11 months ago

    Instead of a Raspberry Pi 4, can we use a Raspberry Pi Zero 2 W if speed doesn't matter to me?

  • @armisis
    @armisis 3 months ago

    The Raspberry Pi 5 with the AI Kit is pretty slick; I just need to get better identification.

    • @Core-Electronics
      @Core-Electronics  3 months ago +1

      We are very excited over here for the AI kit as well! Not the most powerful chip, but performance per dollar and Watt is quite respectable.

    • @armisis
      @armisis 3 months ago

      @@Core-Electronics I ordered mine the day it was announced and have been running it nonstop for a few days now with a demo detection running, just to see how it goes.

  • @bellooluwaseyi4193
    @bellooluwaseyi4193 1 year ago +1

    Nicely explained. Please, can I apply this to a new dataset different from this one?

    • @Core-Electronics
      @Core-Electronics  1 year ago

      It will require some dedicated effort but you can customise this object detection dataset using edge impulse. www.edgeimpulse.com/
      That way you can add whatever object or creature you'd like 😊 I hope I understood correctly.

    • @andyturner1502
      @andyturner1502 9 months ago

      How do you transfer a dataset to the Pi? Do you store it in a file, or does it need adding to the code?

  • @justinvarghese2010
    @justinvarghese2010 1 month ago

    Hi, is there any option for tracking a QR code with the pan and tilt module?

  • @Seii__
    @Seii__ 1 year ago +1

    thank youu veryy muchh🙇

  • @beyond_desi7719
    @beyond_desi7719 4 months ago

    Hi Core Electronics, I am looking for a lens for my Raspberry Pi HQ camera module... I want a good quality image and a closer view for defect detection on my FFF 3D printed parts... Can you suggest some lenses? Thanks

    • @Core-Electronics
      @Core-Electronics  4 months ago

      There is a microscope lens that might be suitable for looking at 3D print defects. Give that a look. core-electronics.com.au/microscope-lens-for-the-raspberry-pi-high-quality-camera-0-12-1-8x.html

  • @acenuisa
    @acenuisa 4 months ago

    What if I want to send a string to a receiver when it detects a certain class?

  • @UsamaRiaz-yf5jk
    @UsamaRiaz-yf5jk 1 year ago

    Amazing, sir. How can I add a speech module, so that when it detects any object it speaks the object's name aloud and is easy to understand?

  • @farisk9119
    @farisk9119 10 months ago

    Can I run your project on a MacBook, and if so what kind of hardware modifications would be needed? Thanks.

  • @bosss6053
    @bosss6053 1 year ago +1

    Hi Tim, the video was great. BTW, do you know another dataset that I could use with this code, and can you explain how to train it to detect a new object?

  • @xyliusdominicibayan6215
    @xyliusdominicibayan6215 2 years ago +1

    Hi, how can I run the object detection without a connection to a laptop or without manually running the code?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      You got heaps of different options. For example, you could run the script automatically every time the Raspberry Pi boots (using Cron Jobs, check here for a guide - ua-cam.com/video/rErAOjACT6w/v-deo.html) or you could run the code remotely using your phone (check here - core-electronics.com.au/tutorials/raspcontrol-raspberry-pi.html)

  • @_zsebtelep8502
    @_zsebtelep8502 1 year ago

    Cool, but what would it take to make this work at 60 fps (doing the image recognition on every frame and not lagging behind when things move fast)?

  • @JohnnyJiuJitsu
    @JohnnyJiuJitsu 2 years ago +1

    Great video! Can you run this portably on a battery, not connected to the internet?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      All the processing is done on the edge, thus you only need the hardware (no calculations happen over Wifi or via the Cloud). So if you had a big enough battery you could definitely run this system via a battery without Internet 😊.

    • @JohnnyJiuJitsu
      @JohnnyJiuJitsu 2 years ago +1

      Thanks for the quick reply!

  • @Isaacmantx
    @Isaacmantx 2 years ago +1

    After watching this, I have an urge to train one of these to identify the difference between male and female whitetail deer for a game camera....

    • @Core-Electronics
      @Core-Electronics  2 years ago

      That would be ultra rad!

    • @Core-Electronics
      @Core-Electronics  2 years ago

      If you want to keep those deer in frame the whole time perhaps an automatic Machine Learned tracking system would help 😊 something like this core-electronics.com.au/guides/Face-Tracking-Raspberry-Pi/

  • @DanielRisbjerg
    @DanielRisbjerg 2 years ago +1

    Hey Core Electronics! Can I make it detect pistols only?

    • @Core-Electronics
      @Core-Electronics  2 years ago +1

      Give Edge Impulse a look. This library doesn't have that as an object, but you can use Edge Impulse to train/modify the standard COCO library to include new objects and things.

  • @vrindas120
    @vrindas120 1 year ago +1

    Can I connect the detected images to a Google Lens URL? Could you please help me with the code?

    • @Core-Electronics
      @Core-Electronics  1 year ago

      I'm sure you can. Come pop us a forum post here and we'll get the best people to help you - forum.core-electronics.com.au/

  • @suryanashirahulgupta8579
    @suryanashirahulgupta8579 2 years ago +1

    How much time will it take after make -j $(nproc)? Because on my side, after 3 hours my system reboots automatically. Help me out in this situation.

    • @Core-Electronics
      @Core-Electronics  2 years ago

      3 hours is definitely too long for installation! Come jump into the full written-up article. At the bottom is a whole bunch of successful troubleshooting that you can utilise.

    • @suryanashirahulgupta8579
      @suryanashirahulgupta8579 2 years ago

      @@Core-Electronics Thanks for the reply. I did that successfully. Thanks for your help.
      One more thing, I want to connect multiple cameras to the Raspberry Pi via GPIO. Is it possible? Can you help me with that?

  • @Xyezed
    @Xyezed 3 months ago

    Can you change the SD card to cloud storage?

  • @riddusarav5666
    @riddusarav5666 1 year ago

    Hello, great video, but how do I get the coordinates of the tracked objects? I am trying to build a robot that can identify and pick up objects; how would I find the coordinates?

  • @corleone6272
    @corleone6272 9 months ago

    I want to get an output when the algorithm recognizes an animal, and I want to send this output to Firebase. What am I supposed to do?

  • @trancongminh2628
    @trancongminh2628 7 days ago

    Is there any way to make the OpenCV video capture speed up more?

  • @enzocienfuegos4733
    @enzocienfuegos4733 1 year ago

    Hi, is there a way to create a log of all recognized animals/humans so the data can be consumed?

  • @cmodyssey
    @cmodyssey 2 months ago

    Would this work to detect pigeons out of the box or would it need training?

    • @Core-Electronics
      @Core-Electronics  2 months ago

      I believe the model we used for this tutorial is capable of identifying birds; I'm unsure if it is trained specifically to differentiate between pigeons and other birds, though. A custom CV model better suited to bird identification would be more reliable.

  • @sonofsid1
    @sonofsid1 1 year ago

    I have an IMX219; apparently it will not work with OpenCV. Is there a way to use GStreamer to make it work in OpenCV?

  • @mike0rr
    @mike0rr 2 years ago

    For anyone it might help out later: I followed the commands in the guide verbatim and was having issues on the "cmake -D CMAKE_BUILD_TYPE=RELEASE \" command and the 4 following commands that are all grouped together. I was using "right-click highlighted text - copy" from the web page and Ctrl+Shift+V paste into the terminal to input commands. That worked great for most of the commands, but it doesn't appear to work for that last paragraph. I had to manually type it in myself in order for it to work correctly.
    Tim, if you do read this, first of all thanks. But I am a tad lost on exactly when to change the CONF_SWAPSIZE back to 100. I assume after the installation is fully complete, but to some of us noobs it's a bit unclear, I guess. Also, I don't know exactly why, but it says that "sudo pip3 install numpy" already had its bits installed on Buster, so it "might" be redundant. Unless it's more of a foolproof guide for other versions of the OS.
    Finally able to finish up my project! Once this finishes installing...
    :P

    • @Core-Electronics
      @Core-Electronics  2 years ago +1

      Cheers for this write-up mate 🙂 I'll legit jump into the guide and make it a little clearer when to swap back the CONF_SWAPSIZE. I'll make it more similar to what I have in my Face Recognition write-up guide. My intention is to 'noob-proof' it as best as I can so everyone can have open-source machine-learned systems in the palm of their hands that they've created themselves.
      Very glad you now have it all up and running too!

    • @mike0rr
      @mike0rr 2 years ago +1

      @@Core-Electronics I didn't think to check your guide on the other OpenCV videos. I'll go do that now. Finally have the next 2 days off so I can fully jump into it.
      I got past this issue, but now when I run the script it's having issues no one else in the forums had. I assume this was due to some mistakes I may have made when trying to get the multi-line command working. Idk, so lost with all of this lol.
      I'm good with Arduino, but Raspberry Pi, Linux, console commands and scripts vs coding; so much to learn at once. You are such a huge help while lost and overwhelmed in this new little world.

  • @turnersheatingandplumbing
    @turnersheatingandplumbing 10 months ago

    Hi, great video! Can I use a USB webcam instead of the Pi cam? Is it just a case of changing the code?

  • @archieyoung3192
    @archieyoung3192 2 years ago +1

    Hey Tim! Here's a question: is the model trained on COCO here generated by the YOLO algorithm? This is related to the writing of my graduation thesis. I would be grateful if you could provide more suggestions.

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Sorry for getting to this so late. A lot can be learned here - cocodataset.org/ . Also there are a ton of research papers as people are unraveling this technology that are worth exploring (or adding to the bottom of a graduation thesis). Good luck mate!

    • @archieyoung3192
      @archieyoung3192 2 years ago

      @@Core-Electronics Thanks so much! I believe that with your help I can get a high score. Best wishes!

  • @JoseMoreno-hp2le
    @JoseMoreno-hp2le 1 year ago +1

    Hi Tim, can the Coral accelerator be integrated into this project?

  • @michaeltaylor-r2y
    @michaeltaylor-r2y 1 year ago

    Do you have any guides for using an ultra-low-light camera module such as the Arducam B0333 camera module (Sony Starvis IMX462 sensor)?

  • @Alex2Hsrw
    @Alex2Hsrw 2 months ago

    Great video !!

  • @shashankmetkar2820
    @shashankmetkar2820 2 years ago +1

    The code zip file is not available at the bottom of your posted article. Will you please upload it?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      The code should be available at the bottom of the article or in the comment section. If you can't see it, pop me a reply and we'll figure out what's happening.

  • @xyliusdominicibayan6215
    @xyliusdominicibayan6215 2 years ago +1

    Hey, great video. May I know where to tinker if I will be using an ESP32 camera to stream the video? Thank you in advance!

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Hey mate, cheers 🙂 the line to alter in the code is | cap = cv2.VideoCapture(0) |, changing that 0 to another index number that will represent your ESP32 camera stream. Come make a forum post if you need an extra hand.
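
      If the ESP32-CAM streams MJPEG over the network rather than appearing as a local device, OpenCV can also open the stream by URL; a small sketch (the address below is a placeholder for your camera's IP and stream path):

        import cv2

        STREAM_URL = "http://192.168.1.50:81/stream"   # hypothetical ESP32-CAM stream address
        cap = cv2.VideoCapture(STREAM_URL)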

    • @xyliusdominicibayan6215
      @xyliusdominicibayan6215 2 years ago

      @@Core-Electronics Hi, I would like some extra hands on this one. How can I implement the ESP32-CAM as my video stream for real-time object detection using the code? Thanks!

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Definitely a great question for our Core Electronics Forum 😊

  • @chefseg
    @chefseg 2 years ago +1

    Can this be performed on a Raspberry Pi 3B+?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Yes but there will be a slight delay in the video feed.

  • @timjx3675
    @timjx3675 2 months ago +1

    Awesome vid; clear, fast and accurate 🌟

  • @maritesdespares4112
    @maritesdespares4112 2 years ago +1

    Hi, is this usable for pest detection?

    • @Core-Electronics
      @Core-Electronics  2 years ago

      Ah I see now, it depends on the pest. If you're interested in large pests like possums, rats, skunks, baboons or the like, then this could be useful. Smaller critters like gross bugs, likely not, unless you had some doorway to the outside where you could watch the bugs come in and you had a camera up really close.

  • @gameonly6489
    @gameonly6489 1 year ago +1

    Can OpenCV for the RPi be used on an RPi 4 with 2 GB of RAM?

    • @Core-Electronics
      @Core-Electronics  1 year ago

      Absolutely sorted mate, if you're using a Raspberry Pi 4 then you'll be good to go 😊

  • @Max-cu6bw
    @Max-cu6bw 11 months ago

    Hello, I am trying to create a design that will recognize different trash types. Is this image recognition able to perceive things like cardboard, paper, tissue, or silver foil as trash items?

    • @Core-Electronics
      @Core-Electronics  11 months ago

      Hey Max, I'm currently working on a very similar project. My workshop can get a bit messy, so I am setting it up to scream at me when it gets untidy. I will report back to you how it goes, or if you've had some luck I'd be more than interested.
      Cheers!