Clicked the like button before playing the video. Your videos are top quality and very helpful!
Feel fortunate! I just so happen to be setting up my Raspberry Pi and Coral USB Accelerator for the first time TONIGHT, of all times :P Thanks for the content!
Haha that's good timing!
Edje Baby!!! You're back!!! I love it! Hope you're doing well, man!!!
Have been waiting for this video for a super long time! Thank you!
Thanks for your support! I'm glad to finally have it finished 😁
Much awaited video. Thanks for posting. Super cool. Stay Safe!
Excellent video. Follows the written guide very well. I had already followed this guide and moved on to how to train models on/for the RasPi. Really looking forward to that video; I tried the written one and can't get it to work correctly for some reason.
I have a fixed-wing drone with a few hours of flight time soaring over the mountains every day, and I want to train models based on the sky camera perspective. With proper programming, I'll have it automatically enter holding patterns if certain animals or people are seen. I'll tag you in the video for credit once it's up!
Looking forward to the last video to finish this series!! Thanks bro
Thanks! That sounds like a VERY cool project, I look forward to seeing it. Feel free to comment a link to it on this video once you've got it done! My next video will just show how to convert a TensorFlow Lite model into an Edge TPU model using edgetpu-compiler (it will only be a 3 minute video). I am going to make a series of videos stepping through how to train a custom TensorFlow Lite model, but it won't be until later this summer when I start working on them.
Dude, that's dope, good luck with that! You should look into Google Cloud Vision.
Great tutorial :) It's not often that everything works exactly like in the tutorial! There were no errors, great job.
Thank you so much. You did such a great job, and explained all the related topics clearly and in a way that's easy to follow!
A Raspberry Pi 4 with the Coral TensorFlow accelerator is a great alternative to the NVIDIA Jetson Nano (completely sold out!).
I'm going to give this a try because I can't afford to wait 18 weeks for them to be made in China and distributed to North America.
Thank you for putting these great TensorFlow Lite demonstrations together. Really great production quality.
Can't wait for your new videos!
Thanks! I've been crazy busy lately and haven't had much time to make videos... But I will get them done eventually!
@@EdjeElectronics I'm really looking forward to it! Your guides have been INCREDIBLY helpful. I've been able to get a working prototype, but I'm stuck at compiling a custom model. Thank you for your guides!!
I appreciate your videos very much.
Do you have any plans to make one with Raspberry Pi 5 + TensorFlow Lite + Coral Accelerator USB or mini PCIe?
I know there is something around already, but I think yours are the most professional ones.
Yes this please!
Excellent tutorial again!!!! Thank you so much for sharing!!!!
Can't wait for your new tutorial
Really cool and helpful video! Thanks a lot for making it! Your are great!!
You and your videos are excellent and amazing!!!
And just like that, I have everything working! Running the Coral on std I am getting a max of about 16 FPS, but I think this might be because I am connecting to the Pi via AnyDesk, which might be taking up memory. Sincerely, thank you for sharing your projects with the community; I find it pretty remarkable that it was so easy to get set up.
Quick question: does your previous tutorial on how to train your own TensorFlow Lite model NOT apply when you are using the Coral USB Accelerator? Is that the next video you are releasing?
Awesome! Glad you were able to get it all working. This tutorial (see link) does work for training your own TensorFlow Lite model which can then be compiled to run on the USB Accelerator. My next video will just show how to compile the TensorFlow Lite model for the USB Accelerator. github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi
I had no idea Google was making something like this. I've been using NVIDIA GPUs for some time with Tensor... but this really piques my interest in devices specifically optimized for tensor models. Great video. I think I'll throw some parts together this weekend and try your non-Edge tensor setup first. I doubt, with all the shipping delays happening, I'll be able to get an Edge TPU very quickly to eval.
Thank you so much for helping us 😘🙏
Great content, thanks!
Very interested in tutorials on how to train your model/improve the accuracy of the existing one and on how to forward the output of the model to the app for notifications.
*edit: found the link to your colab file, thank you
honestly you are my hero!!
Thanks man! One of these days I'll put out more TensorFlow videos... as soon as I get the time 🤞🤞
Hi Edje, Thanks for the great tutorial! It is really well explained. One question, is there a way to let this program run on boot up? if so, what is recommended?
Great video, adding to a great series! I'll be buying the USB Accelerator from your link. Like others here, I'm using Google Colab in conjunction with your videos, as it simplifies the process for the less experienced amongst us.
I noticed that on some of your clips the frame rate is about 19/20 FPS and for others it's over 30; I was just wondering if you knew what was making the difference? My particular project is to make an automated goalkeeper for Subbuteo football, so the quicker the FPS, the less blurry the captured image, and the more likely the Pi camera will identify the ball and save it! Thanks again for the great tutorials.
Good question! The video that has 30FPS is only 640x480 resolution, while the video at the end is 1280x720 resolution. The lower the webcam/video resolution, the faster it will run. Your project can probably use a 640x480 resolution, so try that! You can set the resolution to 640x480 by using the --resolution argument: "python TFLite_detection_webcam.py --modeldir=Sample_TFLite_model --resolution=640x480 --edgetpu"
loved your vid
Looks promising. Would this combination (Pi 4 with Google Coral) be able to detect which species a bird is, out of a list of about 50, in real time?
Yes, it definitely would be able to do it in real time. However, the accuracy might not be as good as you'd like. Lightweight models like MobileNet aren't very good at distinguishing between visually similar objects (like a finch vs a sparrow). You should check out this bird classification model on TF Hub (it even lets you upload your own images to test out): tfhub.dev/google/lite-model/aiy/vision/classifier/birds_V1/3
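For anyone wanting to try that classifier on the Pi, a minimal sketch of running it with tflite_runtime might look like this. The filenames 'birds_V1.tflite' and 'bird.jpg' are just placeholders, and the uint8 input assumes the quantized version of the model:
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path='birds_V1.tflite')  # placeholder filename
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Resize the test photo to whatever input size the model expects
height, width = inp['shape'][1], inp['shape'][2]
image = Image.open('bird.jpg').convert('RGB').resize((width, height))
interpreter.set_tensor(inp['index'], np.expand_dims(np.asarray(image, dtype=np.uint8), 0))
interpreter.invoke()

scores = interpreter.get_tensor(out['index'])[0]
print('Top class index:', int(np.argmax(scores)))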
excellent video !
Your documentation is still working. Ubuntu 18 + RPi 4 + Coral USB rocking. It would be great if you have any garden- or farm-related model for the Edge TPU. I'd like to buy a Super Thanks for you, but I couldn't find the option. Awesome work
Awesome, thanks for letting me know it's still working! It's been a while since I've tried it out. I don't have any garden or farm models, but you may be able to find one on Roboflow Universe or similar. universe.roboflow.com/
I am waiting for the next amazing tutorial.
Would running on an NVIDIA Jetson Nano improve the FPS? The USB Accelerator doesn't justify the price.
Thanks for the videos! Very helpful. I'm a newbie and will experiment with creating a burglar detector. Every year or two, we get a bunch of bored kids visiting our subdivision and trying to burglarize vehicles. I might build a system that monitors the street and looks at cars, deer and turkeys, and pedestrians. If it's between midnight and 5:00 AM and one or more pedestrians are detected, I'll get notified. I'll share any interesting findings. Cheers!
Liberal
Excellent! Thank you so much for putting work into these videos, mate, I truly appreciate it. And as a sign of appreciation, I'm buying now with your link. May I suggest another video? Since you mention it is necessary to compile on a Linux machine, could you make a video on how to make a bootable, persistent Linux USB? I'm struggling with that, as I have a laptop which I use for work and I don't want to risk an error repartitioning the Windows boot drive. It seems like the bootable USB option is the way to go, but then some methods are not persistent or the space allocations are limited.
Thanks again for your work!
Thanks for your support! My next video will show how to either create a Bootable Ubuntu Linux USB or install an Ubuntu virtual machine on your PC. I'm still figuring out which method is easiest and most robust (i.e. will result in the least amount of errors for users 😅). I love my bootable Ubuntu USB drive though! Here are some great instructions on how to set one up, straight from the Ubuntu website: ubuntu.com/tutorials/tutorial-create-a-usb-stick-on-windows#1-overview
Thank you for the video! Big thumbs up! Did you use a high-speed (5 Gbps) USB-C cable to get up to ~30-40 FPS? I only get around 20 FPS using the Google Coral.
You're welcome! Yes, I used a USB 3.0 cable to plug in the Coral USB Accelerator. The reduced framerate you're seeing might be because you're running at a higher resolution. I get 30-40 FPS when running the camera at 640x480, and about 20FPS when running at 1280x720.
@@EdjeElectronics The Google Coral came with a USB cable. Did you purchase a separate one with a higher data transmission speed? I see. I'm trying to design a mask detector using an RPi 4 + Coral and display it on a 50" TV. Would using a bigger screen reduce the FPS?
@@liamhan8145 Nope, I just used the cable that it came with. And no, using a bigger screen will not reduce the FPS. (However, the video might look kind of grainy or blurry.) I've been developing a mask detection camera at work, and we're going to open-source all the code for it in a couple weeks! I'll share that with you once it's ready.
@@EdjeElectronics That would be amazing (: Thank you! As of now, I am using MobileNet SSD v2 (Faces) from coral.ai/models/. And I used transfer learning to add a couple of layers at the end for mask detection. With these two models, I'm trying to first detect a face and then, once the ROI is found, pass that through the mask detector. There have been a lot of nice videos from others who did this, but none of them uses the Google Coral, so the speed is very slow. I'm guessing you might be planning something similar! In terms of implementation with the Google Coral, both models just have to be in Edge TPU format, right? And then just pass them through the Coral? Thank you!
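In case it helps sketch the idea: yes, both models, once compiled to Edge TPU format, can be loaded as two separate interpreters on the same Accelerator and chained per frame. A rough outline, with hypothetical model filenames and the pre/post-processing omitted:
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

def edgetpu_interpreter(path):
    # Each model gets its own interpreter; both use the Edge TPU delegate
    it = Interpreter(model_path=path,
                     experimental_delegates=[load_delegate('libedgetpu.so.1.0')])
    it.allocate_tensors()
    return it

face_det = edgetpu_interpreter('face_detect_edgetpu.tflite')    # hypothetical filename
mask_cls = edgetpu_interpreter('mask_classify_edgetpu.tflite')  # hypothetical filename

def run(it, tensor):
    # Feed one preprocessed input tensor and return all output tensors
    inp = it.get_input_details()[0]
    it.set_tensor(inp['index'], tensor)
    it.invoke()
    return [it.get_tensor(o['index']) for o in it.get_output_details()]

# Per frame: run face_det on the resized frame, take each box above your score
# threshold, crop that ROI from the original frame, resize it to mask_cls's
# input size, and run(mask_cls, roi) to get the mask/no-mask score.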
I love your videos......just brilliant
Any update on when you'll be releasing a video on how to install the compiler?
Here's a Google Colab session that will allow you to compile a TFLite model into EdgeTPU format. You just need to upload your .tflite file and run the commands. If I make a video on this, it will be a quick one! colab.research.google.com/drive/1o6cNNNgGhoT7_DR4jhpMKpq3mZZ6Of4N?usp=sharing
@@EdjeElectronics Perfect, thank you. I will try this, since they discontinued support for 32-bit systems. Hopefully this works on my 32-bit Raspbian.
@@EdjeElectronics Hi, I can't compile TF from source; is there any alternative? I've spent the last week lost, trying on different computers and versions with no success. I already have the .pb and .pbtxt... Can you give me some help?
@@luisFelix I couldn't compile TF from source last time I tried either! I guess I was lucky when I got to work one year ago :) . Here is a link to a Colab that will allow you to convert your .pb model to a .tflite model. colab.research.google.com/drive/1Px7I6PxeeLhCepyA9Dv22pwz66NuT1JR?usp=sharing
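For reference, the core of that conversion is roughly the TF 1.x converter below. The tensor names match SSD graphs exported by the TF1 Object Detection API's export_tflite_ssd_graph.py, so treat them (and the 300x300 input shape) as assumptions for anything else:
import tensorflow.compat.v1 as tf

# Convert a frozen SSD graph (.pb) to a .tflite model
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='tflite_graph.pb',
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=['TFLite_Detection_PostProcess',
                   'TFLite_Detection_PostProcess:1',
                   'TFLite_Detection_PostProcess:2',
                   'TFLite_Detection_PostProcess:3'],
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]})
converter.allow_custom_ops = True  # the detection postprocess op is a custom TFLite op
tflite_model = converter.convert()
open('detect.tflite', 'wb').write(tflite_model)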
@@EdjeElectronics Perfect!!! Big big thanks!!!
Great, bravo!
Thank you!
I have a Pi 4 / USB Accelerator setup and would like to retrain the model to recognize a new object, for which I have hundreds of pictures and annotated pics. Is there a tutorial that explains how best to do this? Thanks!
3:35 If I were an AI, I'd say that you could achieve 480 FPS at 480°C
Hi, if I just want to print the label in the terminal, what can I do?
Hey, sorry, I can't tell your name, but I was wondering the same thing (interested in a similar problem for an idea I am thinking about). I took a look at his code and found that it would be fairly easy to print the labels of detected objects to the terminal by just editing the Python script he is running that displays the camera feed with the labels and scores. If you look at this file:
github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/TFLite_detection_video.py#L138
you can see the detected "labels" of the objects seen in each frame, and a simple print statement for each item in that label array would do it for you; you could also print the corresponding item in the scores array to the terminal if that would help as well. Contact me on twitter @contractorwolf if you are still stuck
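Concretely, inside that script's per-frame loop, after the interpreter has produced the classes and scores, a couple of added lines like these would print each detection (variable names follow the linked script):
# Added inside the detection loop of TFLite_detection_video.py
for i in range(len(scores)):
    if (scores[i] > min_conf_threshold) and (scores[i] <= 1.0):
        object_name = labels[int(classes[i])]  # look up the class name
        print('%s: %d%%' % (object_name, int(scores[i] * 100)))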
Very good content, thank you.
Can you show us how to use the "stream" feature? I'm having problems setting it up.
Hey Edje, have you considered using Google Colab for your tutorials?
Yes! I still need to look at Colab more, but I'm thinking of using it for when I do my video series showing how to train custom TensorFlow Lite models. I'm a little hesitant because Google Colab doesn't give a consistent amount of processing power to each user, and it's liable to change at any time. But I'm definitely looking into it!
Edje Electronics Yeah, that is true, the GPUs you get can be quite inconsistent. Another thing worth looking into is Google Cloud Vision; it's a lot more automated. Good luck and stay safe in the meantime. Thanks for the reply!
Is it possible to export a normal TF model (for example, RCNN -> Edge TPU model) to run on an RPi + Coral?
Nope 😭 unfortunately, the Edge TPU only supports running SSD-MobileNet detection models for now. They might add RCNN support in the future. You should also keep an eye on EfficientDet, the newest state-of-the-art lightweight object detection model from Google.
Great tutorial. I get this error: "ValueError: Failed to load delegate from libedgetpu.so.1.0"
Awesome! Works like a charm. Somehow, on an RPi 4 with 4GB RAM, the video tends to freeze and go, freeze and go... while the FPS keeps saying ~20. A problem that doesn't happen on the RPi 4 with 8GB RAM...
That would be the, uh... lack of RAM talking to you.
Cool video. I plan on getting a Coral due to this video. Any chance you could do an NCS2 vs. Coral comparison? I don't see many videos comparing the two.
Hi, great video!
Would I be able to add a Nest outdoor camera as the camera, and use the Coral USB Accelerator to add item detection to a security system?
Hmm, I haven't used a Nest outdoor camera before so I'm not sure how they work. Do they stream the video feed over an IP? If so, it might work. You'd have to set up your Raspberry Pi to grab the stream from the Nest camera and then process it with TensorFlow. Here's the code that lets it work with a web stream: github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/TFLite_detection_stream.py
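If the Nest feed can be exposed as a network stream, the core idea in that script is just pointing OpenCV at a stream URL instead of a local camera index. A minimal sketch (the RTSP address is a made-up example):
import cv2

cap = cv2.VideoCapture('rtsp://192.168.1.50:554/live')  # example address, not a real Nest URL
while True:
    ret, frame = cap.read()
    if not ret:
        break  # stream dropped; reconnect logic would go here
    # ...run the TensorFlow Lite detection on 'frame' as in the scripts above...
    cv2.imshow('Stream', frame)
    if cv2.waitKey(1) == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()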
@@EdjeElectronics Thank you for the reply, I'll give it a try and let you know how it works out.
Can I use it for image classification with a custom dataset?
Hi sir, do you have a tutorial on how to add an object to an existing model?
Unfortunately, you can't just add an object to existing models. You have to retrain the whole model from scratch with data from the old model plus the new data!
Okay, thank you.
@@EdjeElectronics Would you happen to have a tutorial on how to make a custom object detection model from scratch, converting it to tflite so it can run on the Coral+Pi? Thanks, btw.. your videos are great and you're a good teacher.
@@EdjeElectronics If your model was trained on Windows 10, can it be used on the RPi?
Looks like this no longer works on Raspberry Pi OS 11 (Bullseye). Is there a way to fix that? Looks like it's an issue with OpenCV not finding the camera. I tried using the camera in legacy mode, but no luck.
Can you let me know what type of camera you're using?
For me it's working perfectly on Raspberry Pi OS 11. Only the Pi Camera didn't work; I had to use a webcam.
Hi Evan, I ran your card model generated on Windows 10 on a Pi 4. Although it was able to detect, the FPS was extremely slow. Have you implemented it with the Coral TPU? Although I am a 70-year-old man, I have learned a lot from your previous videos. Do you have any new videos coming soon, such as using Colab to train a model?
Should I get the RPi 4 with the Coral USB Accelerator, or the Coral Dev Board? (Use case: use the AI model to tell a camera servo to track a person)
Hey man, what about tracking, would it be high FPS? And are you open to making projects for unmanned vehicle applications?
21 FPS with the Coral vs. 4 FPS on the Pi alone!
Hello, is this the same with the Dev Board from Coral? I live in France and the USB Accelerator can't be sent to where I live.
I don't know much about the Dev Board, so I'm not sure. They have some good information on the website about what kind of projects you can do with it (see link). I didn't know they couldn't ship to some places in France 😞. Are there any EU websites you can purchase the USB Accelerator from? coral.ai/docs/dev-board/get-started/
@@EdjeElectronics Hum, I tried a lot of EU websites and they all said that it wasn't shippable, except one, which said they would send me an email if they can ship it to my country. If they don't, I guess I'll order the Dev Board and I'll use the Coral docs, thanks.
Can the RPi+Coral usb accelerator combination support regular tensorflow library rather than the lite version? I am using a python library that requires TensorFlow 2.1 and I don't know if using TF lite will work for the application.
Yes, if you are using a 2.X version of TensorFlow, it will have the compatible TensorFlow Lite libraries built in. My code automatically handles importing packages from the correct TensorFlow library regardless of which one you have installed.
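For the curious, the import handling referred to here is roughly this pattern (paraphrased from the repo): use the lightweight tflite_runtime package if it's installed, otherwise fall back to the TFLite modules bundled with full TensorFlow:
import importlib.util

# Prefer the standalone tflite_runtime package; fall back to full TensorFlow
if importlib.util.find_spec('tflite_runtime') is not None:
    from tflite_runtime.interpreter import Interpreter, load_delegate
else:
    from tensorflow.lite.python.interpreter import Interpreter, load_delegate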
@@EdjeElectronics Ah I see. Thank you very much!
Hey Edje, my Raspberry Pi is having a little bit of trouble when I try to test the sample Edge TPU model. Whenever I type in that last command, the message "VIDIOC_QBUF: Invalid argument" continuously pops up. Think you can help?
Does a window still appear with a live camera feed and detected objects drawn on each frame? Or does the program not run at all? Either way, it might be an issue with your webcam. Try borrowing a friend's webcam to see if the error still occurs!
Edje Electronics Hey, so I switched from the Pi Camera to a USB camera, and when I now try to test the sample Edge TPU model, it gets stuck on /home/pi/tflite1/Sample_TFLite_model/edgetpu.tflite.
Really nice video. I hope you can make a video on how to apply this image processing to make an autonomous car. Can we make conditions for the autonomous car based on the detected object? If so, I really want to know how. :D
Did you do the traffic counter?
Hi, sorry, I'm quite new to TensorFlow. Can I know if there is a way to output the results of the object detection and their percentages into a text file? Thanks
thanks for putting this out!
He sucks
When will we finally get to see how to make our own Edge TPU model?
Good question! That video has moved way to the backburner for me. I did just create a Google Colab guide that you can use to compile Edge TPU models. If you have a quantized TFLite model, it's as easy as uploading the .tflite file to Colab and running the compiler. Try it out! colab.research.google.com/drive/1o6cNNNgGhoT7_DR4jhpMKpq3mZZ6Of4N?usp=sharing
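For reference, that Colab essentially installs the compiler from Coral's apt repository and runs it on the uploaded model, roughly:
"curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -"
"echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list"
"sudo apt-get update && sudo apt-get install edgetpu-compiler"
"edgetpu_compiler detect.tflite"
The last command writes detect_edgetpu.tflite next to the input file.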
Ok
How do I get a quantized TFLite model? I followed your tutorials and have a detect.tflite file that runs. However, in the Google Colab it says it's not quantized. Is there a step I missed?
Edge TPU Compiler version 14.1.317412892
Invalid model: detect.tflite
Model not quantized
Please help!
Coral or Movidius? Which one is the best bang for the buck?
Hi Edje, I hope you're in good health when you see this.
I was following your tutorial, and when I connect my Coral USB, the LED on it does not light up????
And when I try to run the command with --edgetpu, "Failed to load delegate from libedgetpu.so.1.0" appears.
Does my Coral USB have a problem?
Probably just need to do something like this: sudo usermod -a -G plugdev $USER and reboot
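If the group change alone doesn't clear that delegate error, reinstalling the Edge TPU runtime and re-triggering udev (per the Coral docs), then unplugging and re-plugging the Accelerator, usually does:
"sudo apt-get install --reinstall libedgetpu1-std"
"sudo udevadm control --reload-rules && sudo udevadm trigger"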
Did you maybe try to test the TFLite model's performance using implementation with C++ API for TF instead of Python? It can be interesting if it would increase performance
I haven't tested that! You're right, it would be interesting to see how much the performance increases. Unfortunately, I just don't have time to try it out!
I haven't checked the performance increase yet, but here's my C++ repo: github.com/Namburger/edgetpu-detection-camera
Hi, have you tried the Intel Neural Compute Stick 2?
Sir, is there any video on using the Intel NCS2 instead of the Coral USB Accelerator? Because the Coral USB Accelerator has been out of stock for over 4 months now.
Thank you so much, you saved me!
Is it possible to use two Google Coral TPUs?
Can you add a second/third camera?
Multiple camera modules
Hi Edje, I guess you used the 32-bit version of the OS on the Pi in your videos?
It would be interesting to see how well faster_rcnn_resnet_101 runs on the Coral USB. The last time I tried to run that on a Pi 3B+, I wasn't able to convert the model to TFLite. I may have to try it again, since I haven't tried in over a year.
Unfortunately, TFLite does not support Faster-RCNN models. It only supports SSD-MobileNet models. Maybe Google will update it to support heavier models some day!
@@EdjeElectronics Bummer, I was hoping they would add support for it eventually. A couple of recent papers in 2019 were making gradual progress on neural net compression. I've been asking myself this question: "what if you remove the residual layers after the model is trained?" One of the primary benefits of ResNet is reducing vanishing gradients during training. If we remove the residual layers, you probably wouldn't be able to retrain the model, but it might make it easier to convert to TFLite.
Hello, could you help me with this error?
usage: TFLite_detection_video.py [-h] --modeldir MODELDIR [--graph GRAPH] [--labels LABELS] [--threshold THRESHOLD] [--video VIDEO]
TFLite_detection_video.py: error: unrecognized arguments: --edgetpu
It's weird because plain TensorFlow Lite is working. Thanks
Hmm, I think you have an older version of my code that doesn't support the Edge TPU. From inside the tflite1 folder, try issuing "git pull github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi.git". That should update your local files with the newer files from my repository.
Yes, I was using the older version, thanks!!
Hi, is there any video for the Coral on Windows?
Would this help using an all-sky camera?
Would it be possible to make the window size much bigger? Also, if I were to run this on a 50" TV, would the FPS slow down??
First off, great video and tutorial @edjeelectronics; your method of explanation and documentation is really terrific. I am currently up and running with the TPU, but only getting ~18 FPS with nothing else running. This is a fresh install of Raspbian Buster with nothing else installed, and I only went through the setup to get it running without the TPU first (which performed at ~5 FPS). Does anyone have any tips to get the FPS up to something closer to 30 FPS without going to Max? Thanks in advance.
I've just this minute fired up my brand new Accelerator on my Pi 4 4GB and I'm getting 35 FPS from a Creative USB webcam (using std, not Max).
@james wolf Thanks! Which model of Raspberry Pi are you using? I used the Pi 4 4GB model for this video, and I do think the extra RAM helps it run a bit faster.
@@EdjeElectronics I am also using the Pi 4 with 4GB; I tried to match what you did exactly.
@@contractorwolf Weird! Do you know if you're running at a higher resolution (1920x1080 instead of 1280x720) maybe? Sometimes a webcam will automatically force a high resolution. Also, do you have the TPU plugged in to a USB 3.0 port?
@@EdjeElectronics I am using the normal Pi Camera and running the Coral from the USB 3.0 port (blue). Would the Pi Camera have a slower refresh rate or something?
Very cute!!
No module named "edgetpu"! Is this some path error? I have installed edgetpu and I can see the folder where it's installed.
Hi, which operating system must be installed on the Raspberry Pi 4 8GB? My Raspberry is brand new.
For this project, please use the official 32-bit Raspberry Pi OS. www.raspberrypi.org/software/
Sigh, I need to find this updated for the Raspberry Pi 5; I can't get past the cv2 error.
What's the specific error?
Can you do this with dialogue writing?
Can we just add "--edgetpu" to any script after completely installing the USB Coral?
Hi, in your tutorial "How To Train an Object Detection Classifier Using TensorFlow (GPU) on Windows 10", after the cmd session ends, what are the exact commands that we use? I am getting an error in the first cell of the ipynb file, i.e.:
ImportError Traceback (most recent call last)
15 # This is needed since the notebook is stored in the object_detection folder.
16 sys.path.append("..")
---> 17 from object_detection.utils import ops as utils_ops
18
19 if StrictVersion(tf.__version__) < StrictVersion('1.9.0'):
C:\tensorflow1\models\research\object_detection\utils\ops.py in
26 from six.moves import zip
27 import tensorflow.compat.v1 as tf
---> 28 import tf_slim as slim
29 from object_detection.core import standard_fields as fields
30 from object_detection.utils import shape_utils
ImportError: No module named 'tf_slim'
How do I solve this problem? Help, plz
try the following:
pip install tf_slim
Tried this, but I keep getting a "Segmentation Fault" error.
Has the problem been solved?
Hello, I'm trying to use an LED for person detection. I installed RPi.GPIO on my Python 3, but I get a 'ModuleNotFoundError: No module named 'RPi''. Then I tried to install RPi.GPIO in a virtual env, but got the same error again. Please, can you help me with this problem?
When I run it with the Edge TPU, mine is not accurate. Please help.
Where is the whole directory?
Hi, I followed the instructions, but why did my FPS only increase to 10?
Hey Wentao, look at my question (I have the same issue). I found that my RPi was failing the SD Card Speed Test, and that may affect the possible FPS when running it. I am interested to see if your SD card fails the same test as mine; that might be the issue? Let me know. The test is here: Menu > Accessories > RPI Diagnostics > SD Card Speed Test
@@contractorwolf Could you find a performance difference after changing the SD card? Because mine is stuck at 10 FPS as well.
@@Spreme91 Did not seem to help me; it passes all diagnostic tests now with the faster card, but I still can only achieve ~18 FPS with the cable that comes with the Coral. I made changes to his original code for my app that drop my performance down to around 12 FPS (doing calculations to find the largest identified object and writing a bunch of data to a tiny TFT). Not the greatest performance, but good enough for my project. Let me know what you are doing or if you find any tweaks that help. @contractorwolf on twitter
Segmentation fault error
Can you do this with a Jetson Nano 2GB? :)
Everything worked perfectly, except the bounding box is not directly on the objects; the placement of the rectangle is away from the detected object. Does anyone know how to solve it?
👏
Anyone experimented with the OAK-D Lite camera with this?
I have a quick question: with the Coral USB Accelerator, why am I only getting 10 FPS? Note that before the Coral accelerator I was running at 0.9 FPS. I have a Raspberry Pi 4 with 4GB of RAM. I do not know why it is running so poorly.
I had the same issue, and I doubled my frame rate by using a high-quality USB 3 cable. The cable that came with my Coral USB accelerator may have been damaged or defective.
Is it possible to use it on a Windows machine, sir?
Yeah
Coral has a price drop right now FYI circa July 2020
Nice, thanks for the heads up!
I think you can use an RTX 2050 for this job; it has Tensor Cores alongside the CUDA cores, is WAY cheaper (it gives 2x more performance than a same-priced TPU), and is a WAY more effective option (once NVIDIA drivers are available for Linux on ARM, which is the hardest part).
It seems logical, but isn't power consumption one of the most important things in such embedded systems?
Would you like to sell your USB Accelerator to me? I've looked everywhere and it's always out of stock.
Hey, can you please tell me if it is possible to convert the detected object into speech? That would help, as I am trying to do this as a final-year project.
Why don’t you plug in the coral until after the libraries are installed? This isn’t Windows where drivers will be auto-installed.