Want to learn how to train your own TFLite model to run on the Raspberry Pi? I released a video giving step-by-step instructions for training TFLite object detection models inside your web browser using Google Colab and deploying them on the Pi. Check it out here!
ua-cam.com/video/XZ7FYAMCc4M/v-deo.html
FIRST!!!
Edje Electronics, is it better to use the 8GB or the 4GB Raspberry Pi?
Can someone help me? I have a problem with the command step "sudo pip3 install virtualenv": when I execute this command, the error "externally-managed-environment" appears. I performed all the previous steps, but I was unable to resolve it.
BroHam !!!!! this is what I was looking for, something simple to catapult my curiosity to see if I like it !!! Excellent work my friend.
I find it absurd, but also a complete testament to what you have done here, that I was able to get this working in about 15 minutes on the first try. Thank you!!!!
9:49 nice acoustic person/backpack you've got there xP
Hey all! If you're using the Raspberry Pi OS Bullseye release (which is the latest version), there are a couple of things you have to do to get it working with the Raspberry Pi Camera:
1. Make sure the OS is up-to-date by issuing "sudo apt update" and "sudo apt upgrade" and then rebooting the Pi
2. Open a terminal, enter "sudo raspi-config", go to the "Interface Options" menu, then go to the "Legacy Camera" option and enable it. Then, reboot the Pi (again).
3. Run the TFLite_detection_webcam.py script as described in this video.
Note: You only need to do these steps if you're using a Raspberry Pi Camera (HQ, v1, or v2). You don't need to do them if you're using a USB webcam. Also, you don't need to do them if you're using the Stretch or Buster OS releases.
I want to light up an LED when a car is detected. What changes do I need to make?
Hey, so I wanted to detect only a certain object instead of all kinds. How can I do that?
Thank you so much for creating, uploading, and updating this program. It’s brilliant!
Can you show how to set it up and run it in VS Code or PyCharm?
Great video. For those looking to do this and get a higher FPS rate, try using the Pi camera connection instead of USB. The connection on the board itself uses less power and has lower latency, plus it goes directly to the GPU, which is what you want for object detection. I haven't tested this with TF Lite, but the results are dramatic when running OpenCV.
Dude! It worked!!! Thanks so much. I tried one of your older videos but had no luck so I'm pumped to have something that finally runs!
This is super. very methodical and complete video. worked perfectly.
No joke, I actually love you, I've been looking everywhere for a video like this!
Amazing thing done on the Raspberry Pi, Sir. All this while I thought Tensorflow would never work properly on the Pi. But this video helped a lot, Sir. Please keep geeking Sir. :)
Thank you so much, I used your older guide for Tensorflow with SSDLite before, and now you release this. Thank you!
Oh Man, that's a really great video!
I definitively have to try this !
Thanks for the great work.
So excited. I've been looking for a lightweight model to put onto a Pi in an RC car. This guide was straightforward, you've put a lot of hard work into getting everything done, and to see it in action is amazing. Looking forward to that next video about what will speed up the FPS! Thanks man!
Can you please tell me why my camera window is not showing (for a webcam)?
Thank you so much for this guide, I was struggling a lot with the object detection application until I found your guide :)
Great video! Definitely subscribing for more. I already have the Coral devices, so I can't wait to see what you do with them.
Your tutorials are good for beginners, please keep doing them :)
Dude! This is cool! I didnt even know that they had this type of technology.
You are the great man. I'm computer science teacher from Thailand.
Thank you!! I hope this video can help your students 😃
My team and I tried using different software and a Pi 3 for object detection, and it was hell. We only got results every 8 seconds, and this was on a moving drone ship, so by the time it detected what it had to, it was already miles away lol. The detection speed in this is amazing.
How big was the drone ship?
@@BinkiklouGaminglol Well, we had its 6 motors and sensors (mainly a bunch of MZ80s) running on an Arduino Mega, and we had a Pi 3 with a Pi camera on top. As for the physical dimensions, if I remember correctly (it was some time ago, so these might be off), it was around 50-ish cm long, 30-40 cm in height, and again 30-40 cm in width. Why did you ask? :D
@@barsgecgil3437 wait what's a drone ship
@@BinkiklouGaminglol An autonomous ship. In this case, we built it for a competition. The goal was that our "bigger" ship would be placed in a pool in which there were other "smaller" ships; the smaller ships were red and green, and you had to somehow capture the green ones and take them to a different part of the pool. I don't know if they have any English resources, but you can search "Fetih1453 TeknoFest"; that's the name of the competition. It would make more sense if you just looked at that :D
@@barsgecgil3437 Oh nice, this is kinda like FRC robots but on water, and the participants are a little bit older.
Incredibly simple and very well explained! This is exactly what I was looking for. Congratulations!
This is an outstanding tutorial.
Fantastic guide - clear, well-sized steps, I love that install script, well documented, use cases! Thx!
Btw, I like how the model at the end of the video is sure (more or less) that your guitar is a person or a backpack! :D
Let's get that next video! The people need the next videoooooooo
Absolutely great guide. Worked perfectly on Raspberry Pi4 8GB with Stretch installed!
Thank you very much.
Thank you 🙏 very useful tutorial
I am forever grateful for these video tutorials. Thank you
Hi @EdjeElectronics! I have followed your tutorials for a project of mine. I have encountered some errors. Can you help me? I have followed you on Twitter.
Is this something that would benefit being on a cluster? One Pi for the camera, one Pi for the processing?
I don't know anything about TensorFlow or Pi clusters, just curious.
Reading a frame from a USB camera vs. reading it in from another Pi doesn't really make a difference in performance.
But other processing steps after the detection might be heavy enough to benefit from multiple Raspberries.
Good question! No, I don't think a cluster would help for this. The main chunk of processing occurs when passing the image through the neural network to find the detected objects, and there isn't any (easy) way to split that between multiple Pis. And couka is correct that using a separate Pi to handle the camera wouldn't really help. I already have the camera running in a separate thread to speed things up (see www.pyimagesearch.com/2015/12/28/increasing-raspberry-pi-fps-with-python-and-opencv/ )
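For anyone curious what that threaded capture looks like, here's a minimal sketch of the idea, roughly what the VideoStream class in TFLite_detection_webcam.py does (details here may differ from the actual script):
# Minimal sketch of a threaded camera reader (assumes OpenCV is installed).
# A background thread grabs frames continuously so the main detection loop never waits on the camera.
from threading import Thread
import cv2

class VideoStream:
    def __init__(self, src=0, width=640, height=480):
        self.stream = cv2.VideoCapture(src)
        self.stream.set(cv2.CAP_PROP_FRAME_WIDTH, width)
        self.stream.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
        self.grabbed, self.frame = self.stream.read()
        self.stopped = False

    def start(self):
        Thread(target=self.update, daemon=True).start()
        return self

    def update(self):
        # Keep pulling frames until stop() is called
        while not self.stopped:
            self.grabbed, self.frame = self.stream.read()
        self.stream.release()

    def read(self):
        return self.frame

    def stop(self):
        self.stopped = True

# Usage: vs = VideoStream().start(); frame = vs.read(); ...; vs.stop()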
This looks like just what I need for a project. Thank you for this. Very good video.
58% chance his guitar is a person.. lmao 😅😅😅
Near the end...
@@jmart6438 can not compute
@@jmart6438 pretty sure I was cracking a joke... 🤔
✌️
@@sheepleslayer586 they deleted their comments lol
This is the best tutorial I've seen on YouTube, thank you so much!
I followed the recommendation, below in the comments, to install tensorflow 1.14 after running the requirements script. Everything works and my Pi4 4GB is giving about 5fps with the google sample.
Nice to watch this video on UA-cam! Thank you!
This was my first click researching a project and I live on one of the cross streets shown in the beginning of the video. So random! Helpful video too.
Nice! Feel free to say hi if you ever see me in Bozeman :)
Thank you Jessie Pinkman
@Edje Electronics I just want to say a big thank you for your work in putting this tutorial out there.
I have designed and constructed an Autonomous Mobile Robot, 95% 3D printed, that uses TFLite to identify and exterminate weeds. I couldn't have done it without your help! If I'm ever in your neck of the woods, I would like to thank you in person. Hello from a final-year mechatronics student in Port Elizabeth, South Africa!
That's awesome! Thank you for letting me know, I'm glad this video was helpful. Keep up the good work!
Hello Mr. Radnartjie,
Trust you are well. Hey, I was wondering how you ran the object detection headless. Did you run this program in an IDE like Thonny or Geany? I'm also trying to build an Autonomous Mobile Robot that uses object detection, but I can't seem to find how to run this program other than in the terminal... Mr. Radnartjie, I would be really grateful for some advice.
Can I download your bird, squirrel, and raccoon model anywhere?
I really love your channel. I will also credit your Github repo in my project submission.
Keep up the awesome work
It would be really useful to know how to toggle GPIO when a certain object is detected. Thanks.
You the real MVP keep making content!
Is there a way to do text detection/capture? For example, reading street signs?
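Not something this video covers, but for text like street signs the usual approach is to crop the region of interest and hand it to an OCR engine. Here's a rough sketch with Tesseract, assuming "sudo apt install tesseract-ocr" plus "pip install pytesseract opencv-python" (the image path is just an example):
# Rough sketch: read text from an image with Tesseract OCR.
import cv2
import pytesseract

img = cv2.imread('street_sign.jpg')            # example image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # OCR tends to work better on a clean grayscale image
gray = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
print(pytesseract.image_to_string(gray))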
Wow! The best guide for TensorFlow Object Detection! Thank you sir!
How was your setup right at the beginning of the video, in the car? How did you record the screen? What type of connection did you use to connect to the Pi?
Thanks for the cool tutorial!
I had my Pi plugged into a monitor and recorded the screen using this HDMI recorder: www.amazon.com/gp/product/B00KMTYPXC . Looks like it's no longer available on Amazon, but you should be able to find something similar!
@@EdjeElectronics Thanks!
Thank you for this video. This appears to be the material I needed to run a tflite object detection model from a pi cam.
Hi, nice video! Is it possible, when a bird is detected, to turn on an LED light or send a pulse?
I have a similar project: the Pi will automatically track down the object, e.g. a raccoon or human for my project (you can train your own model using OpenCV), "fire" a laser at the target, and sound the alarm.
My project is based on this: www.pyimagesearch.com/2019/04/01/pan-tilt-face-tracking-with-a-raspberry-pi-and-opencv/
This worked brilliantly. My Pi 4 is set up to work with the SunFounder PiCar-X, and I was a little doubtful whether your project would play along with their setup. Luckily, it worked seamlessly on the first attempt using your setup scripts and the default models. My Picam is doing 20-24 FPS and I'm just amazed.
My end goal is to have this Picar-x to roam around the house without colliding into anything and to annoy my cat to do some exercise (she is on the bulkier side)
Thanks, I'm glad to hear it works well! Do you know what version of Raspberry Pi OS you were using? I'm working on updating some of the scripts to work without errors on the latest Raspberry Pi OS.
Can we use this to make smart traffic light differentiating between a normal vehicle and an emergency vehicle such as an ambulance? Can you make a video to demonstrate or help me out through any link. I will be obliged.
Yes you can, that would be a cool project! I don't have time to help, but check out my Pet Detector video, that might give you some ideas for how to control a program based on what is detected. ua-cam.com/video/gGqVNuYol6o/v-deo.html
Thank you! I really appreciate your efforts in clearing up how to get this working. So far things are working great after your set up instructions. I will be trying to set up some custom objects to detect and passing the locations via I2C to an Arduino. I'm looking forward to trying it with the USB Coral unit soon.
Gregory Mazza hey Gregory, curious to know what kind of objects you are trying to detect. I’m working on my own algorithms and was wondering if you’d like to share information, thanks. My email is jatinderm19@gmail.com.
I did this 2 years ago and it was a nightmare. It was still fairly new, and you had to find patches for the patches. You made this ridiculously simple.
Thanks! It's a pain staying on top of all the version changes. I did my best to make this one easy to follow and future-proof to new versions!
Can I ask whether we can train our own model for TensorFlow Lite?
I followed your previous tutorial for training my own model on a Pi 3. It worked, but it was slow.
Here's my GitHub guide showing how to train your own TensorFlow Lite detection model! github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi
Nice job! Had an issue, reviewed the comments, reinstalled Raspbian, followed the video, and it's all working. Thanks for sharing.
Hi Edje, I have a problem at line 122. Traceback (most recent call last):
File "TFLite_detection_webcam.py", line 122, in
with open(PATH_TO_LABELS, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/pi/tflitel/Sample_TFLite_model/labelmap.txt'
Had the same problem, I just created the /home/pi/tflite1/Sample_TFLite_model/ folder and moved the labelmap.txt and detect.tflite from the tflite1 folder into it!
Thanks man I was looking for something exactly like this
Hi! I'm running TFLite on a Raspberry Pi 3 B+. Why do I only get 0.6-0.9 FPS? Can you help me get more FPS?
0:02 Hotel Baxter?! HOLY SHIT! It's my home town of Bozeman!
Haha yep!! I'm from Great Falls originally but living in Bozeman now. It's a great place to live! Check out my Raspberry Pi 3 vs Raspberry Pi 4 video, it's mostly footage of me driving around Bozeman :) ua-cam.com/video/TiOKvOrYNII/v-deo.html
Once I formatted my NOOBS card and started fresh, your tutorial worked perfectly. Honestly, I started here; I'm going to go back and do step 1 now. The documentation is excellent. You've given a lot to learn, and it's walked through for a non-pro like myself. Excellent work.
Thank you! I tried to make the instructions as straightforward as possible. Glad to hear they are working!
Yeah it works, pretty cool
This was perfect and works fabulously! Far better than the official Google coral documentation which I haven't been able to get working yet.
When you have time... a video on how to access GPIO pins and activate them, or to activate another program based on a detected class, would be super helpful. I'm having trouble figuring out how to turn the results of a detection into concrete effects (if a bird is detected, take a photo; if a squirrel is detected, set a GPIO pin high and take a video to record the fun). Thanks for all the hard work you put into these videos!
Thanks, I'm glad the videos are helpful! I'm hoping to put out a video soon that will give an example of toggling GPIO when certain objects are detected. Really hoping to get started on it this weekend! I also want to do a video showing how to trigger video/audio recording using ffmpeg.
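Until that video is out, here's a rough sketch of the idea for anyone who wants to experiment. It assumes a model with 'bird' and 'squirrel' labels and a detection loop like the one in TFLite_detection_webcam.py; the helper below and its inputs are illustrative, not code from the repo:
# Sketch: react to detections inside the per-frame loop.
# Assumes RPi.GPIO is installed and an LED/relay is wired to BCM pin 17.
import time
import cv2
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.OUT)

def handle_detections(frame, detections, min_score=0.5):
    # detections: list of (label, score) pairs pulled from the interpreter output
    labels_seen = [label for label, score in detections if score > min_score]
    if 'bird' in labels_seen:
        # Save a snapshot whenever a bird shows up
        cv2.imwrite('bird_%d.jpg' % int(time.time()), frame)
    # Hold the GPIO pin high while a squirrel is in frame, low otherwise
    GPIO.output(17, GPIO.HIGH if 'squirrel' in labels_seen else GPIO.LOW)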
@@EdjeElectronics Yayyy, looking forward to the former !! Great content
@@EdjeElectronics In case you haven't seen it, Pyimagesearch has a nifty KeyClipWriter that looks like it might be a good way to record the video, not just of the action frames but storing the frames in a buffer and saving the entire event to video including the frames prior to and immediately after the event is detected. That blog post is "Saving key event video clips with OpenCV."
@@jasondegani Thanks for the heads up, I will check it out! I love PyImageSearch 👍
Can you do this on an old PC or laptop as well? And can you accelerate this process with a graphics card? @Edje Electronics
Really the best guide I found. Thank you!
Great instructions! I use the Pi 4 in 64-bit mode, I don't know if that is related or not, but I did have an issue with the version of OpenCV not being installed. This was resolved by:
pip install --upgrade pip
pip install opencv-python
Just posting this in case anyone else gets the "no matching distribution" error; this should do the trick.
What an amazing tutorial, thanks man👌🏻👍🏻
I'm more interested if it can read and log license plates.
this whole video is blowing my mind.
Hello Evan! Thank you very much for your tutorial, it was a great pleasure to learn from you. Hope you will do more projects like that!
I successfully repeated your project with my custom model about a month ago (I got my model from Google Cloud). Yesterday I built another model with a different dataset and ran into some trouble with the implementation. The error says:
ValueError: Op builtin_code out of range: 130. Are you using old TFLite binary with newer model?
I found out they updated their conversion to the TensorFlow 2.5 runtime. I guess this is the problem; maybe you know how to fix it?
I tried manually updating the tflite-runtime package, but it did not help.
@@GenadiJai Thanks, I'm glad the tutorial has been helpful! Hmm, if you updated tflite-runtime and you're still getting that error, then I'm not sure what the problem is. Can you check the version of tflite-runtime you're using on the Pi and the version of TensorFlow that you used for building your model? You should be able to use this to check the tflite-runtime version:
import tflite_runtime
tflite_runtime.__version__
@@EdjeElectronics thank you very much for your response.
The version of tflite_runtime on raspberry pi is 2.5.0
and Google cloud uses TensorFlow 2.5.x (latest patch)
cloud.google.com/ai-platform/training/docs/runtime-version-list (package list)
Thanks so much for this! Far better than the google documentation which I found to be as clear as mud
For those having the following error:
(tflite1-env) pi@raspberrypi:~/tflite1 $ python3 TFLite_detection_webcam.py --modeldir=Sample_TfLite_model
Traceback (most recent call last):
File "TFLite_detection_webcam.py", line 122, in
with open(PATH_TO_LABELS, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/pi/tflite1/Sample_TfLite_model/labelmap.txt'
Remember that the model files have been unzipped in Sample_TFLite_model and not Sample_TfLite_model or Sample_Tflite_model for that matter. Just make sure that you type *TFLite* correctly, and you're good to go.
Thanks, this is exactly what i needed to get started with TensorFlow
Hey, thanks for the video, it really helped me a lot.
But I have a question: how can I detect from a website, for example from a YouTube URL?
Please help me, I have to complete my project and I am confused.
And again, thanks for the video.
Use web scraping...I guess that'll help.
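If "web scraping" sounds vague: one common approach is to resolve the video to a direct stream URL and open that with OpenCV. This is just a sketch of that idea, not something from the video; it assumes the yt-dlp package is installed and the video URL is only a placeholder:
# Sketch: open a YouTube video as an OpenCV capture source.
import cv2
import yt_dlp

video_url = 'https://www.youtube.com/watch?v=XXXXXXXXXXX'   # placeholder URL

with yt_dlp.YoutubeDL({'format': 'best[ext=mp4]'}) as ydl:
    info = ydl.extract_info(video_url, download=False)
    stream_url = info['url']    # direct URL of the selected format

cap = cv2.VideoCapture(stream_url)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ...run the TFLite detection on `frame` here, the same way as with a webcam...
cap.release()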
This is an awesome tip, bro.
Thank you!
I need to deep dive a little bit to make it work :)
Can someone help? I'm trying to control a servo motor once TF detects a specific object. Thank you.
Sorry, I don't know that.
Has anybody figured out how to toggle GPIO in real time when object XYZ is detected?
Are you planning to use MQTT to start/stop the motor? That will work.
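For reference, publishing a start/stop message over MQTT from the detection loop can be this small. A sketch assuming the paho-mqtt package and a reachable broker; the broker address and topic name below are made up:
# Sketch: publish a motor command over MQTT when a detection fires.
import paho.mqtt.publish as publish

def send_motor_command(start):
    publish.single(
        topic='robot/motor',              # placeholder topic
        payload='start' if start else 'stop',
        hostname='192.168.1.50',          # placeholder broker address
    )

# e.g. inside the detection loop:
# if 'person' in labels_seen:
#     send_motor_command(True)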
Clear. Concise. To the point. Great video! Looking forward to more.
Liked. Subbed. Smashed the bell (HARD!)
Does it work on the Pi Zero?
Thanks for the video; however, I'm having a lot of trouble installing with get_pi_requirements.sh. I'm getting "unable to locate" and "[Errno -3] Temporary failure in name resolution" errors.
Thanks for doing this man! Really great stuff.
9:50 I'll be waiting
Still waiting :))
I am recreating your tutorial this week!
I recently updated some of the setup scripts to work with newer versions of Raspberry Pi OS. (With Raspberry Pi and TensorFlow always releasing new versions of software, it's hard to stay on top of it all.) Everything should still work when following the instructions in this video. Please let me know if you run into any errors!
Hi, I have a Raspberry Pi 4 B with the 64-bit OS, and I'm getting this error at the very end when trying to run it. I am using a High Quality Pi camera.
[ WARN:0] VIDEOIO(V4L2:/dev/video0): can't open camera by index
Traceback (most recent call last):
File "/home/pi/tflite1/TFLite_detection_webcam.py", line 171, in
frame = frame1.copy()
AttributeError: 'NoneType' object has no attribute 'copy'
I run it in VirtualBox with Raspberry Pi OS Desktop 32-bit. TensorFlow cannot be installed; it says "Could not find a version that satisfies the requirement tensorflow (from versions: )".
I have a question... How do I change the rotation of the camera? Mine is rotated too much ://
@@georgoschalkiadakis2402 Did you get it resolved?
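One simple fix is to rotate or flip each frame right after it's read. A sketch assuming you're editing TFLite_detection_webcam.py yourself (the helper name is mine, not from the repo):
# Sketch: fix camera orientation before running detection on the frame.
import cv2

def fix_orientation(frame):
    # 180 degrees for an upside-down camera; cv2.ROTATE_90_CLOCKWISE etc. also work
    return cv2.rotate(frame, cv2.ROTATE_180)
    # for a mirrored image instead: return cv2.flip(frame, 1)

# In the detection loop, right after the frame is grabbed:
# frame = fix_orientation(frame)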
Amazing vid! I feel like this is the start of an amazing channel.
Couple of questions : I have a rpi 4 as well with rpi cam.
I wanted to set up the RPi as a basic IP cam for streaming only, no recording, but the FPS is extremely low (15 FPS max). The idea was to see how high it could go. So I guess I'm asking how high it could be, and also, in the last seconds of this video, did you achieve 20 FPS with the Coral connected?
Finally could it be trained to identify people?
Thanks. I'm now wondering about setting up tensor flow 24/7 on the house server to monitor the babies 🤣 maybe make a video on that ❤️
Could it be done in Ubuntu MATE? I have a Rock64 and I'm curious if it can be done on a Raspberry-like board.
Yeah, it should work there. Raspbian and Ubuntu are both based on Debian after all. And, I'd be surprised if your PC doesn't hold up to a Raspberry Pi. All the steps should be the same.
Many thanks for this! I could use some of this in my diy smart home!
Subscribed!
Hey, I'm running the Bullseye OS on a Raspberry Pi 4 B. I can't seem to get past the problem with running the .sh script.
Same here
I think part of the problem is that the .sh script downloads programs that now have newer versions, and the script hasn't been updated, so they aren't working/downloading correctly. But I can't figure out which ones they are to get the updated ones.
Thank you SO much! Your videos and guides are the best out there. I can’t wait to see your Coral vid!
Thanks man, I appreciate it! The Coral video will be out in a few weeks 😃
Hi, can I know how to write it so that if the label is "person" it will rotate the motor, and if not it will continue running?
Hello, please watch my Pet Detector video. It explains how the variables work and gives an example of how to trigger actions if certain objects are detected. Good luck! ua-cam.com/video/gGqVNuYol6o/v-deo.html
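And for the servo part specifically, here's a bare-bones sketch with RPi.GPIO software PWM. The pin number and duty-cycle values are assumptions; tune them for your servo:
# Sketch: nudge a hobby servo when a 'person' detection comes in.
# Assumes the servo signal wire is on BCM pin 18; 50 Hz is the usual servo frame rate.
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)
servo = GPIO.PWM(18, 50)    # 50 Hz
servo.start(7.5)            # roughly centered (duty cycle is servo-dependent)

def on_detection(label, score):
    if label == 'person' and score > 0.6:
        servo.ChangeDutyCycle(10.0)   # swing one way
        time.sleep(0.5)
        servo.ChangeDutyCycle(7.5)    # back toward center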
Just got this up and running!!! Just fantastic!! Had to uncomment some lines in the config.txt for my VGA monitor.
Could you help me with some bugs I'm having?
@@nectaligironperdomo7219 What step did it bomb out on? Do you have any error messages?
I used a Raspberry Pi 4 with 4GB RAM.
Hi, the tutorial is really great, but is there an option to access the Raspberry Pi GPIOs?
Can somebody help me please? I am under a little time pressure.
OK, I found a solution.
Activate the virtual environment:
cd tflite1/
source tflite1-env/bin/activate
pip list #shows all installed packages
pip install rpi.gpio
@@stefanm2059 Thanks for sharing your solution! 😃
Thank you so much! I have all the components for Rpi 4 + Coral, so very much looking forward to your next installment.
Man! It's awesome! Can I use a model I trained on the Teachable Machine site? Thanks in advance.
Thanks! Teachable Machine creates an "image classification" model rather than an "object detection" model. This video only works for object detection models. You can look at this GitHub page to see how to set up an image classification model on the Pi! github.com/tensorflow/examples/tree/master/lite/examples/image_classification/raspberry_pi
@@EdjeElectronics Actually, I'm trying to run a model for object detection that I trained there, on Teachable Machine. I took model.unquanted.tflite, model.tflite, and label.txt. And now I can't get my model to run on my Android device. I put the three files in the assets folder, but when I run the app, nothing happens. Once the Android app works fine, I want to run it on my Raspberry Pi.
Great tutorial video for me! thank you very much for making this video.
GitHub keeps asking me to log in when I try to download the packages, and it keeps rejecting it. What should I do?
I am having the same issue.
Check the link you’re using. A git:// url requires a login, an url doesn’t.
Nice video! Is there a way to let this detect numberplates from a video or pictures and pixelate them?
yeah ofc
@@weslyvanbaarsen666 Do you know how? I'm not programming a lot and I don't know how rn
@@DashcamDriversGermany Well, you would use the TF API and act on the detections by applying a pixelation effect to the detected object region.
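Roughly, that looks like this: take the bounding box the detector gives you and pixelate just that slice of the frame. A sketch; the box coordinates are whatever your detection loop produces:
# Sketch: pixelate a detected region (e.g. a number plate) inside a frame.
import cv2

def pixelate_region(frame, xmin, ymin, xmax, ymax, blocks=12):
    roi = frame[ymin:ymax, xmin:xmax]
    h, w = roi.shape[:2]
    if h == 0 or w == 0:
        return frame
    # Shrink the region, then blow it back up with nearest-neighbour to get chunky blocks
    small = cv2.resize(roi, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
    frame[ymin:ymax, xmin:xmax] = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
    return frame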
Thanks a lot for this video, but I just faced a problem with this:
python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model
Traceback (most recent call last):
File "TFLite_detection_webcam.py", line 19, in
import cv2
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/cv2/__init__.py", line 3, in
from .cv2 import *
ImportError: libjasper.so.1: cannot open shared object file: No such file or directory
I am having the same issue
I solved the problem by downloading this version of the model instead :
wget storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
and unzip:
unzip coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip -d Sample_TFLite_model
Great video, got it working on my RPi3 + Pi Camera. Just getting 1 FPS but hey, it works! :)
Hi Edje, thanks for the tutorial. The object detection works, or certainly looks perfectly fine to me, but when I run it, it first says:
' HadoopFileSystem load error: libhdfs.so: cannot open shared object file: No such file or directory '
Could you please help me solve this issue :)
A few people have gotten this error! I haven't had time to look in to it yet. Can you tell me which Raspbian OS you are using? Buster or Stretch?
Edje Electronics Buster, 4.19
@@EdjeElectronics I am also getting this same error on Raspbian GNU/Linux 10 (buster)
I'm also getting this error on Buster. Any straightforward solution yet?
wow... i love this one... Thank you so much!
I created a Google Colab notebook for making your own TensorFlow Lite model with custom data! You can train, convert, and export a TFLite SSD-MobileNet model (or EfficientDet), and then download it to your Raspberry Pi and use as shown in this video. I'm still working on the video that walks through the Colab notebook, but please try it out if you're interested!
colab.research.google.com/github/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/Train_TFLite2_Object_Detction_Model.ipynb
You are a lifesaver, thank you!
You're very welcome! Were you successfully able to train a model with the Colab notebook? It hasn't been tested by many other users yet, so I'm curious to hear if you ran in to any errors or issues.
@@EdjeElectronics Well I wanted to train a clothes classifier using FASHION-MNIST, so I'm still in the process of figuring out how to change that dataset to fit the colab notebook.
In short, not succeeded yet, but haven't had the time to properly test it, so fingers crossed!
@@casualjay7428 Oh! Actually, my guide won't work for that 🙁. My guide is for "object detection" models, while the FASHION-MNIST dataset is used to train "image classification" models. Here's a good guide from TensorFlow on training a basic classifier on the FASHION-MNIST dataset. www.tensorflow.org/tutorials/keras/classification
@@EdjeElectronics Oh I see! Thank you! I'm learning a lot so I still see this as a win!
Great Tutorial.. also works well on the Jetson Nano
Can I adjust the code to detect only one specific class, like a person?
yes, use google
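A slightly more useful answer: filter the interpreter's outputs down to the one class you care about before drawing boxes or triggering actions. A sketch, assuming the labels/classes/scores arrays that the detection script already builds (names may differ in your copy):
# Sketch: keep only detections of a single class, e.g. 'person'.
def filter_detections(labels, classes, scores, target='person', min_score=0.5):
    # Returns the indices of detections that match the target class with enough confidence
    keep = []
    for i in range(len(scores)):
        if labels[int(classes[i])] == target and scores[i] >= min_score:
            keep.append(i)
    return keep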
Wonderful tutorial, thank you!
I am new to this, and perhaps this is a silly question:
I am running a headless RPi connected via SSH. I've done everything in this tutorial except the last part where I have to execute the Python code. But when I run "python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model"
I got this message:
": cannot connect to X server"
Has anyone faced the same issue? Is it correct to run the Python code over SSH? If not, do I need the Raspberry Pi OS desktop version instead?
Thanks in advance!
Unfortunately, it doesn't work with a headless RPi connected over SSH. The "X server" error message occurs because it's trying to display an image to the screen, but there is no screen. You'll have to either use a desktop version, or modify the code so it just saves image files instead of trying to display them.
Nice cat picture btw 😺
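If anyone wants the "save image files instead" route, the change is basically swapping the display call for a write. A sketch of the idea (not the exact lines from the script):
# Sketch: write frames to disk instead of displaying them, so it works over SSH.
import cv2

frame_count = 0

def save_frame(frame, out_dir='.'):
    global frame_count
    cv2.imwrite('%s/detection_%05d.jpg' % (out_dir, frame_count), frame)
    frame_count += 1

# In the main loop, replace the cv2.imshow(...) call with:
# save_frame(frame)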
@@EdjeElectronics Many many thanks mate, now I get it, I also did some research in blogs and they pointed out to the same.
About my profile pic, long live cat lovers 🐈 haha 👍🏻
Cheers!
Wow bro, so many tutorials on UA-cam, but this one is unique and fits my next project. If you have something similar to this but using PyTorch, that would be highly appreciated.