YOLO Object and Animal Recognition on the Raspberry Pi 5 | Beginner Python Guide

  • Published Dec 25, 2024

COMMENTS • 73

  • @Core-Electronics  2 months ago +4

    Hey everyone! Two things.
    First of all, the written guide has instructions on how to BOTH decrease the resolution AND convert to NCNN to get greatly improved FPS (thank you very much Philipcodes from the forums).
    And talk about rough timing: YOLO11 launched the day after this video, but it will work perfectly fine with this guide. In our guide we have the line:
    model = YOLO("yolov8n.pt")
    You will just need to change it to:
    model = YOLO("yolo11n.pt")
    to start using YOLO11.
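[Editor's note] The swap above can be sketched as a minimal helper. This is a hedged illustration, not the guide's script: it assumes `ultralytics` >= 8.3 (the first release shipping YOLO11 weights), and the helper name is ours.

```python
# Minimal sketch of the one-line model swap described above. Assumption:
# ultralytics >= 8.3; the helper function name is ours, not from the guide.

def make_detector(model_name="yolo11n.pt"):
    """Load a YOLO model by weights filename; "yolov8n.pt" works unchanged."""
    from ultralytics import YOLO  # deferred so the sketch imports cleanly
    return YOLO(model_name)

# Usage on the Pi (the weights download automatically on first run):
# model = make_detector()
# results = model(frame)  # frame: a BGR numpy array from the camera
```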

  • @MichaelSchultzSF  2 months ago +3

    Love this! Just picked up a pi5 and a camera, going to start here for sure. Your vids are always so easy to follow and super helpful. Keep it up!

  • @prodcalls  2 months ago +1

    Amazing video. Thank you so much sir, you deserve more views!

  • @billycartdemons  2 months ago +1

    great info & video - will definitely use some of this

  • @kyuya5738  2 months ago +5

    Thank you! This is the best beginner tutorial I've come across. Can you please do a video on implementing the AI Kit to boost FPS as well?

    • @Core-Electronics  2 months ago

      The AI Kit is still quite fresh software-wise. Right now they have a fantastic set of instructions on getting it going, but it doesn't run from a Thonny script like this does:
      www.raspberrypi.com/documentation/accessories/ai-kit.html#getting-started

  • @tunglee4349  2 months ago

    This is a very helpful tutorial!!!! Nice work ❤

  • @hehehehagrrrr1319  2 months ago +4

    How can I train with my own dataset?

    • @Core-Electronics  2 months ago

      Training with your own data is a little more involved. Ultralytics has some great documentation on it, but be warned: you will need some decent hardware. On a 4080 it usually takes 2 or so hours, with no GPU it may take days or a week, and on a Raspberry Pi it could take months.
      docs.ultralytics.com/yolov5/tutorials/train_custom_data/#23-organize-directories
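[Editor's note] For anyone following the linked docs, the Ultralytics training entry point looks roughly like this — a hedged sketch assuming a YOLO-format `data.yaml` and the `ultralytics` package installed; run it on a GPU machine, not the Pi, and treat the argument values as illustrative.

```python
# Hedged sketch of the Ultralytics training entry point referenced above.
# Assumptions: a dataset described by a YOLO-format data.yaml, and the
# ultralytics package installed; argument values are illustrative.

def train_custom(data_yaml="data.yaml", epochs=100, imgsz=640):
    from ultralytics import YOLO
    model = YOLO("yolov8n.pt")  # start from pretrained weights, not from scratch
    return model.train(data=data_yaml, epochs=epochs, imgsz=imgsz)

# Usage (on a machine with a GPU):
# train_custom("my_dataset/data.yaml", epochs=50)
```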

  • @weihong8337  1 month ago +1

    Thank you!!! I made it work. I use VNC, not HDMI. FPS: 1.7

    • @weihong8337  26 days ago +1

      I tried the NCNN conversion, and the new FPS is 6

    • @puneethff4927  25 days ago

      @@weihong8337 Brother, what do you mean? What is NCNN, and how do I use it?
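[Editor's note] To answer the question above in code: NCNN is a lightweight, ARM-optimised inference engine, and Ultralytics can export a `.pt` model to it — that conversion is where the FPS jump comes from. A hedged sketch, assuming `pip install ultralytics[export]` has been run; the output folder name follows Ultralytics' convention.

```python
# Hedged sketch of the NCNN conversion discussed above. Assumption:
# ultralytics installed with export extras (pip install ultralytics[export]).

def export_to_ncnn(weights="yolov8n.pt"):
    from ultralytics import YOLO
    model = YOLO(weights)
    return model.export(format="ncnn")  # writes a folder like yolov8n_ncnn_model/

# Afterwards the exported folder loads exactly like a .pt file:
# model = YOLO("yolov8n_ncnn_model")
```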

  • @viniifsc  2 months ago +2

    Nice vid! Can you make a tutorial working with the AI Kit or the Coral Edge TPU? I'm interested to see the performance gain on those

    • @Core-Electronics  2 months ago +1

      It's not a simple task to run this code on a dedicated AI chip; for the AI Kit you need to jump through a few hoops to convert the model into the specific format it needs. The AI Kit library does come with YOLOv8n ready to go, and we have seen reports of people getting FPS in the 50-60 range, which is incredible! Right now it is a little difficult to actually use the AI Kit in a project (it feels a little more like a tech demo), but software support for it is developing rapidly, so that shouldn't be a problem for too long. When the software support is mature enough, you will definitely find a video here!

  • @germancruzram  22 days ago +1

    Do you know of any alternative to connect the Raspberry Pi camera via USB instead of the (very short) flex cable?

  • @mauchmaxamadeus  2 months ago +2

    Can it also recognize small flying animals such as wasps, flies or even mosquitoes?

    • @Core-Electronics  2 months ago

      I think you may have a hard time with that: they may be too small to be seen by the camera, and they may be too fast and blurry! On top of this, I don't think the model will be able to identify them, sorry.

  • @honchinleng9283  1 month ago

    I saw your videos using OpenCV and now with YOLO. Which one should I start with as a beginner? Appreciate your advice.

    • @Core-Electronics  1 month ago

      These projects have progressed a lot since we made the old video; this one is easier, quicker to get going, and runs more than 10x faster! It actually uses OpenCV as well!

  • @armanddewet9700  24 days ago

    Are you able to use any USB camera for this type of integration?

    • @Core-Electronics  22 days ago

      We have some code in the written guide that uses a webcam instead. There can be some issues with the colour profile used by the camera, and we talk a little about it in there.
      core-electronics.com.au/guides/raspberry-pi/getting-started-with-yolo-object-and-animal-recognition-on-the-raspberry-pi/#appendix-using-a-webcam

  • @rbbala3589  2 months ago

    Nice. From India

  • @odko1137  1 month ago

    Hello, thanks for the video. I have a question: is it possible, if an animal is detected, to spin a motor? I don't know how to do it

    • @Core-Electronics  29 days ago

      First you would need to get YOLO to detect the animal. Here is a list of all the things in the COCO dataset that can be detected:
      tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
      Then you would need to connect up a motor driver and motor. We have a guide on how to do that here to get you started!
      ua-cam.com/video/ea6tSppgZlY/v-deo.htmlfeature=shared
      And if you need a hand with it, we have a maker community forum where lots of makers can help out with your project!
      forum.core-electronics.com.au/
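[Editor's note] The two steps above — check for an animal class, then drive a motor — can be sketched like this. The COCO label subset and the gpiozero pin numbers are illustrative assumptions, not from the video.

```python
# Sketch of the animal-triggers-motor idea above: compare detected class
# names against COCO's animal labels, then drive the motor. The gpiozero
# pins and the subset of labels are illustrative assumptions.

COCO_ANIMALS = {"bird", "cat", "dog", "horse", "sheep", "cow",
                "elephant", "bear", "zebra", "giraffe"}

def animal_detected(class_names):
    """True if any detected class name is one of the COCO animal labels."""
    return any(name in COCO_ANIMALS for name in class_names)

# Inside the detection loop (hedged; gpiozero assumed available on the Pi):
# from gpiozero import Motor
# motor = Motor(forward=17, backward=27)
# names = [model.names[int(box.cls)] for box in results[0].boxes]
# if animal_detected(names):
#     motor.forward()
# else:
#     motor.stop()
```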

  • @mehulkini9384  1 month ago

    @Core-Electronics I need your help. I am using this for my quadcopter, and I want to use YOLOv5 on my Pi 5. Can you tell me which camera would be good? The objects I have to find are plastic and styrofoam. How do I train YOLOv5 to do that?

    • @Core-Electronics  1 month ago

      Really any camera will do; the Pi Camera Module v2 or v3 might be a good pick (you can also use a webcam, and we have some code for that in the written guide linked below the video). Your issue will be getting the model to detect styrofoam and plastic. Training a model is quite involved and, without a GPU, can take several days.
      There are some pre-trained models here that might fit your needs, but if not, you may need to deep-dive into training your own model, which we unfortunately don't cover 😭.
      huggingface.co/models?other=yolo

  • @karzokalori89  1 month ago

    Mate, really great, educational and interesting video. Could you show this with the new 26 TOPS AI HAT+ from Raspberry Pi? It would be very interesting to learn how to extract the output as a CSV file or something similar, to find out how many people pass by the camera and at what time, or how many cyclists, etc. Maybe even make graphs from this?

    • @Core-Electronics  1 month ago +1

      We definitely have some AI HAT videos in the pipeline (but the setup and usage are very different). I don't know about data logging, though. Large language models like ChatGPT and Claude would be more than capable of helping you write the code you're looking for!
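[Editor's note] The CSV-logging idea asked about above is straightforward with the standard library — a hedged sketch, with the filename and columns as illustrative choices; the resulting file can be graphed later with any plotting tool.

```python
# Hedged sketch of the CSV-logging idea above: append one row per detected
# object with a timestamp and class name. Pure stdlib; the filename and
# column layout are illustrative.

import csv
from datetime import datetime

def log_detections(names, path="detections.csv", now=None):
    """Append (timestamp, class_name) rows for each detected class."""
    stamp = (now or datetime.now()).isoformat(timespec="seconds")
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for name in names:
            writer.writerow([stamp, name])

# In the detection loop, once per frame:
# log_detections(names)
```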

  • @Username-dr6ru  2 months ago

    Can you use yolo world to control hardware as well, or does that only work with the base models?

    • @Core-Electronics  2 months ago

      The hardware control script can definitely be modified to use YOLO World. You should only need to change the line where we choose the model, and add a line prompting it what to look for!
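[Editor's note] The two-line change described above can be sketched as follows — a hedged illustration assuming an `ultralytics` version with YOLO-World support and the published `yolov8s-world.pt` checkpoint; `set_classes()` is the "prompt" line the reply mentions.

```python
# Hedged sketch of the two-line YOLO-World change described above.
# Assumptions: ultralytics with YOLO-World support; the default classes
# here are illustrative.

def make_world_detector(classes=("cat", "dog")):
    from ultralytics import YOLO
    model = YOLO("yolov8s-world.pt")   # changed: the model selection line
    model.set_classes(list(classes))   # added: prompt it what to look for
    return model
```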

  • @GenreFluid  2 months ago

    Can you use this for wildlife live streaming?

    • @Core-Electronics  2 months ago

      You most definitely could! The trouble may be in supplying power to it and getting it an internet connection to send data back. You would also need to experiment to see which types of wildlife it will pick up; it may recognise everything four-legged as a dog!

  • @sergeivoronov5161  2 months ago

    Thanks for the video. What's the approximate maximum distance at which detection will work? Or how large should an object be on screen for us to detect it, and are these parameters affected by video resolution and model size?

    • @Core-Electronics  2 months ago +1

      A lower resolution will lower the distance at which it can detect, and a smaller model will also lower it. We found that the medium model, when converted to NCNN (so at the standard 640x640), could recognise a cup from about 8-10 metres away.

  • @ryandx5973  7 days ago

    Good morning, I'm doing a technician degree and have to do a big project now; it is similar to a bachelor's degree. I chose the topic of AI object detection with a Raspberry Pi. Am I allowed to use your script? I will cite the guide as the source.
    Also, is there a way I can modify the script to optimise it a bit? What topics would I need to research?
    Thank you in advance!

    • @ryandx5973  7 days ago

      I have a cat :D! My idea is to put the camera in front of our garden door. When the object "cat" is detected for 5-10 seconds, it sends an email to me, so if I'm in the living room I know I have to open the door.
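[Editor's note] The "seen continuously for 5-10 seconds" part of the idea above is plain debounce logic that sits on top of any detection loop — a hedged sketch with illustrative timings; the email step (e.g. via `smtplib`) is left out.

```python
# Sketch of the cat-at-the-door idea above: only fire the alert once the
# class has been seen continuously for a hold time. Pure logic; the hold
# time is illustrative and the email step is omitted.

class HoldDetector:
    def __init__(self, hold_seconds=5.0):
        self.hold_seconds = hold_seconds
        self.first_seen = None  # timestamp when the run of detections began

    def update(self, seen, now):
        """Return True once `seen` has been continuously True for hold_seconds."""
        if not seen:
            self.first_seen = None  # detection run broken; reset the timer
            return False
        if self.first_seen is None:
            self.first_seen = now
        return (now - self.first_seen) >= self.hold_seconds

# In the detection loop:
# if detector.update("cat" in names, time.monotonic()):
#     send_email("Cat at the door!")  # hypothetical helper, e.g. smtplib
```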

    • @Core-Electronics  6 days ago

      Very nice project! We actually derived a lot of this code from the Ultralytics website itself; there is a lot of information over there that might help: www.ultralytics.com/
      But you are more than welcome to use our code, or go back and use theirs directly. What are you looking to do optimisation-wise? In the written guide we have some good tips on improving FPS, but if you want to optimise anything else about the code, large language models like ChatGPT can help greatly. They can also be a good way to work out what you need to learn.
      We also have a maker forum with lots of people who help out with this sort of stuff, so if you need a hand feel free to post over there: forum.core-electronics.com.au/
      Good luck with your project!

  • @joshuamiguelroa2962  2 months ago

    Can I use the Raspberry Pi 4B and a Raspberry Pi camera?

    • @joshuamiguelroa2962  2 months ago

      I'm working on an IoT project connected to an ESP32

    • @Core-Electronics  2 months ago

      We haven't tested it, but it will most likely work on a Pi 4; Ultralytics says it has support. Just be prepared for it to be very slow, as the Pi 5 is about 2-3x faster than the Pi 4.

  • @feather_jp8  1 month ago

    I'm trying to automate something based on object recognition, and I was wondering if you might be able to help me out. Specifically, I want it to play a noise whenever it detects certain objects; for example, when it sees a person, it would play a corresponding .wav file

    • @Core-Electronics  1 month ago +1

      You can easily achieve this with the Pygame library. We don't have a specific tutorial on this, but you can find a million others online demonstrating how to use it. The important lines should be something along the lines of:
      import pygame
      pygame.mixer.init()
      pygame.mixer.music.load("myFile.wav")
      pygame.mixer.music.play()
      You'll just need to whack the .wav file in the same folder as the object detection script.
      If you get stuck or need a hand though, feel free to chuck a post on our community forums!
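[Editor's note] Connecting those pygame lines to the detections could look like this — a hedged sketch where the class-to-sound mapping and the `.wav` file names are illustrative.

```python
# Sketch connecting the pygame lines above to detections: pick a .wav file
# per class name and only play when something matches. File names and the
# class-to-sound mapping are illustrative.

SOUNDS = {"person": "person.wav", "dog": "dog.wav"}

def sound_for(class_names, sounds=SOUNDS):
    """Return the first matching .wav file for the detected classes, or None."""
    for name in class_names:
        if name in sounds:
            return sounds[name]
    return None

# In the detection loop (pygame assumed installed, mixer already init()-ed):
# wav = sound_for(names)
# if wav and not pygame.mixer.music.get_busy():
#     pygame.mixer.music.load(wav)
#     pygame.mixer.music.play()
```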

  • @mohammedshoaib2752  2 months ago

    Can we implement this on an RPi 4B with 4GB of RAM (using an external camera)?

    • @Core-Electronics  2 months ago

      We haven't tested it, but it will most likely work on a Pi 4. Just be prepared for it to be very slow, as the Pi 5 is about 2-3x faster than the Pi 4.

  • @pamus6242  2 months ago

    I can't believe how simple, uncomplicated and pragmatic this video is... However, is there any way to have a pass-through to outsource that compute to an x86 system or a couple of RPi 5 clusters?
    Also, this thing could run at full frame rate on that Odroid with that Rockchip monster and 16GB of RAM.
    Will give it a try, but I need to get an RPi 5 first; I already have an RPi 4.

    • @Core-Electronics  2 months ago +1

      The Ultralytics implementation of YOLO is very cross-platform, so if you can get it set up on an x86 system, you should be able to use nearly the same Python code we cover here! As for the Odroid, it may come down to optimisation; even when we convert to NCNN, it still doesn't fully utilise all of the Pi's hardware, so we would need to test. And RAM isn't a big factor here; the biggest model barely uses 2GB of RAM, which is incredible!
      Best of luck when you can give this a go!

    • @pamus6242  2 months ago

      @@Core-Electronics Wow, OK!
      Chris from ExplainingComputers did a video yesterday on a new Radxa board with an Intel N100 chip and a build similar to the RPi 5. That x86 thing could do it, just guessing.
      I have a tiny ThinkCentre lying around with an i7-6700... Now all I need is to connect the camera to the PCIe interface, or search for some USB module that can connect to the cam... may need to research more...

  • @RohanKumar-lm8ko  29 days ago

    Can you help me with this:
    pip install ultralytics[export]
    These packages do not match the hashes from the requirements file.

    • @Core-Electronics  27 days ago

      I previously had this issue, and it was caused by not running the first set of commands properly:
      sudo apt update
      sudo apt install python3-pip -y
      pip install -U pip
      If that doesn't work, a fresh installation of Bookworm OS might help. If all of that fails, feel free to post on our community forum topic for this video; we have lots of makers over there who can help!
      forum.core-electronics.com.au/t/getting-started-with-yolo-object-and-animal-recognition-on-the-raspberry-pi/20923

  • @murraystaff568  2 months ago

    Nice video! I just bought an AI Kit from you guys (today!), hoping this will boost FPS significantly?

    • @Core-Electronics  2 months ago +1

      There are a few steps between running the models that come with the AI Kit and getting YOLO itself to run on it.
      (But we may be working on an AI HAT guide as we speak 😏)

  • @Akashplays-v2i  11 days ago

    Sir, can you show the Raspberry Pi 4B setup from the basics, please? 😢

  • @Lp-ze1tg  1 month ago

    How slow will it be on a Pi 4?

  • @Thebackbencher17  2 months ago

    Can we do it on an RPi 4?

    • @Core-Electronics  2 months ago

      We didn't test it on an RPi 4, but it should work pretty much the same; Ultralytics says it is supported. Just be ready for it to run about 2x slower :(

    • @Arctics04  11 days ago

      Yes, but you can't get above 2 FPS. It's more like 1 FPS.

  • @sams9089  1 month ago

    The NCNN portion of the code doesn't work for me! I get the error "ModuleNotFoundError: No module named 'ncnn'". I am running the exact lines of code, and the main code works as well, so I'm unsure how to fix this.

    • @Core-Electronics  1 month ago +1

      Is this when running the conversion script, or when trying to run the object detection code after converting? Make sure that your script is saved and is in the same folder as all your other code and models. If that still doesn't work, feel free to chuck a post on our community forum topic for this video; we have lots of makers over there who can help.
      forum.core-electronics.com.au/t/getting-started-with-yolo-object-and-animal-recognition-on-the-raspberry-pi/20923
      We are also in the process of updating the NCNN conversion section, as we have found a better way, so that should be up sometime today if you want to give it a try!

    • @sams9089  1 month ago

      @@Core-Electronics This is when running the conversion script. It tries to run an update but spits out: AutoUpdate skipped (offline)
      I'll post on the forum, but thanks!

  • @phafoubest8268  1 month ago

    I keep getting a dependency error when installing ultralytics[export]. Has anyone encountered this before, and how can it be fixed?

    • @Core-Electronics  1 month ago

      Have you tried running the line multiple times? It installs quite a lot, and you may need to run it a few times to let it do its thing. If that doesn't fix it, feel free to post your issue on our dedicated community forum topic for this video, and try to include some information about the specific dependency error. We have a lot of makers over there who are happy to help!
      forum.core-electronics.com.au/t/getting-started-with-yolo-object-and-animal-recognition-on-the-raspberry-pi/20923/6

  • @rbbala3589  2 months ago

    Can you create something that picks things up using object detection pls 😁

  • @HarshitGautam-bj3lc  2 months ago

    Hey, loved your content! I am an intern at ISRO (Indian Space Research Organisation) and I am working on deploying a YOLOv8 model on a Raspberry Pi. Can you help me deploy it with the Raspberry Pi AI Kit and improve the model for real-time inference?
    What format would be best to deploy? I have seen a few videos that say to convert the model into ONNX, then convert that into Hailo's HEF format using the Hailo Dataflow Compiler or Model Zoo, then copy it over and run the code. Am I going about this right? Your help is highly appreciated.

    • @Core-Electronics  2 months ago +1

      That sounds exactly right! The AI Kit only works with the Hailo .HEF model format, and the easiest way is to first convert to ONNX, then to HEF. Just be aware that when you convert to ONNX you will often "bake in" a lot of configuration. While it's in PyTorch format we can change the resolution, and for things like YOLO World we can change the prompts for it to look for, but converting to ONNX locks these in and we can't change them. So get the settings right, convert to ONNX, then to HEF, and run it on the HAT.
      The usage is different from our script here, though; we are using a nice library which lets us run it with high-level Python code, and it's not as easy yet to do this with the kit.
      Best of luck mate!
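[Editor's note] The "bake in the settings before converting" point above can be sketched as follows — a hedged illustration of the ONNX step only; the weights name and image size are illustrative, and the ONNX-to-HEF step happens afterwards in Hailo's own tooling.

```python
# Hedged sketch of the "bake in the settings first" point above: resolution
# (and YOLO-World prompts) must be set on the PyTorch model before export,
# because the resulting ONNX graph fixes them. Values are illustrative.

def export_for_hailo(weights="yolov8n.pt", imgsz=640):
    from ultralytics import YOLO
    model = YOLO(weights)
    # Settings chosen here are frozen into the exported graph:
    return model.export(format="onnx", imgsz=imgsz)
    # The resulting .onnx then goes through Hailo's Dataflow Compiler -> .hef
```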

    • @Core-Electronics  2 months ago +1

      Another thing! If you run into issues with the AI Kit, check out the AI Camera that just launched; it uses the Sony IMX500. We have had a lot more ease using it and writing custom scripts for it. It may not be as powerful, but it still runs well.

    • @HarshitGautam-bj3lc  2 months ago

      @@Core-Electronics Thanks a lot.

  • @kavingnanamurali4097  6 days ago

    Does anyone know why my colour saturation is off? People are appearing blue.

    • @Core-Electronics  3 days ago

      That sounds like a colour space issue. We had the same thing when using a USB webcam, as the red and blue channels were being swapped (and your mostly red-ish face becomes mostly blue-ish ahahaha). At the end of the written guide we have a script for USB webcams that fixes the colour space. Have a dig around with it, as it's likely to fix your issue!
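[Editor's note] The red/blue swap described above boils down to reversing the channel axis of the frame — a hedged sketch, with numpy assumed available (it ships alongside typical OpenCV installs).

```python
# Sketch of the red/blue swap described above: OpenCV works in BGR, so if a
# webcam hands you RGB frames, faces come out blue; reversing the channel
# axis fixes it. Assumption: numpy is installed.

import numpy as np

def swap_rb(frame):
    """Swap the first and last colour channels of an HxWx3 image array."""
    return frame[:, :, ::-1].copy()

# With OpenCV this is usually written as:
# frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
```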

  • @tumultuouscornucopia  29 days ago

    Half of this is missing. (1) You don't say you need a sudo apt upgrade after the sudo apt update. (2) As far as I can tell, the ultralytics install does not install PyTorch, so that is another step. (3) There seem to be a load of settings needed to make the camera work, although these may be out of date; I can't tell, because I cannot make the install work. Given that you show setup from a new set of components, all that stuff is necessary. All I get running your tutorial is a load of errors about torch>=1.7.0 (no, re-running does not magically fix the issue).

    • @Core-Electronics  29 days ago +1

      Sorry to hear you are having issues. This installation process was taken directly from Ultralytics, who make most of the modern YOLO models. Running apt upgrade won't hurt, but it's not strictly needed here, as we are mainly ensuring that Python and pip are up to date.
      You may have hit an issue during installation, as it will most definitely install PyTorch. That, or you may have an issue with your virtual environments.
      The camera settings can vary depending on the Pi and could be many things. Feel free to post your issue on our community forum post for this guide with a little information about your setup and where the issue is; we have lots of makers over there who can help!

  • @Liam-xz7xu  3 months ago +1

    First

  • @nikpatel2605  2 months ago

    If anyone has any ideas on how to use this for a night-vision camera that turns lights on when a fox is detected, please let me know