Thanks for your help. You are saving my life from the deadline 😂😂.
Fantastic video! Straight forward and easy to implement. This saved our team after hours of frustration with Android Studio. Thanks for helping us have autonomous ready to go for league finals!
You are a rock star! Thanks for sharing your expertise!!
Very Cool!
This helped me so much , thank you!
Great tutorial. What if I'm using the phone camera? The example file uses a webcam, so what do I change?
Hello,
I would like to apologize since I'm not completely sure what to change in the code for this. I believe it would be a change made to the pipeline file, since that controls the camera, but again I'm not too sure. The EasyOpenCV GitHub may have some info on what to do for this. Since I don't know exactly, I would look on the FTC forums; someone over there should be able to give you a correct and accurate answer on what to change. Again, I'm sorry for not being able to help you very much in terms of what to change.
FTC Forums: ftcforum.firstinspires.org/
Hi, question:
Does this work with Roadrunner?
Are all the Gradle dependencies between this and RR the same?
Thanks!
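For reference, the two should be able to coexist: RoadRunner's quickstart adds its own Gradle dependencies, and EasyOpenCV and the AprilTag plugin are separate artifacts. A hedged sketch of what the TeamCode `build.gradle` dependencies block might look like (coordinates are from the EasyOpenCV and EOCV-AprilTag-Plugin readmes; version numbers are illustrative, so check those repos for the current ones):

```groovy
// TeamCode/build.gradle -- dependencies block (version numbers are illustrative)
dependencies {
    implementation 'org.openftc:easyopencv:1.5.1'            // EasyOpenCV core
    implementation 'org.openftc:apriltag:1.1.0'              // EOCV AprilTag plugin
    implementation 'com.acmerobotics.roadrunner:core:0.5.5'  // RoadRunner core (from its quickstart)
}
```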
Thanks for the video! It works well, but only in a specific location. Are there any configuration changes that need to be made for the camera to detect the object in most orientations?
Is there a fork or anything to use the april tag plugin with onbot java?
Hello, I am trying this for our team's autonomous program. We have also made the autonomous movement code. But when we initialise and start the program we get an exception: Attempt to read from field 'int org.openftc.apriltag.AprilTagDetection.id' on a null object reference. We also tried just initialising the code, but when a tag is seen the robot doesn't move. Is there any way to solve this?
I'm having a similar issue; I'd love to know how to solve it.
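That error usually means the code read `.id` from a detection that doesn't exist yet, i.e. the pipeline returned no detections on that frame. A minimal sketch of guarding against that, using a hypothetical stand-in `Detection` class in place of the SDK's `AprilTagDetection` so it runs anywhere (the `latestTagId` helper and the `-1` sentinel are illustrative, not from the video):

```java
import java.util.Collections;
import java.util.List;

public class TagGuardExample {
    // Stand-in for org.openftc.apriltag.AprilTagDetection; only the id field matters here.
    public static class Detection {
        public final int id;
        public Detection(int id) { this.id = id; }
    }

    // Returns the first detected tag's id, or -1 when no tag is in view.
    // Reading detections.get(0).id without this check throws on an empty list,
    // and dereferencing a null detection gives the "null object reference" error.
    public static int latestTagId(List<Detection> detections) {
        if (detections == null || detections.isEmpty()) {
            return -1; // no tag seen this frame; keep looping instead of crashing
        }
        return detections.get(0).id;
    }

    public static void main(String[] args) {
        System.out.println(latestTagId(Collections.emptyList()));      // -1
        System.out.println(latestTagId(List.of(new Detection(7))));    // 7
    }
}
```

In an OpMode loop, the same pattern would mean checking the pipeline's detection list every iteration and only acting once a non-sentinel id comes back, rather than reading the id once during init.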
It keeps saying that I don't have any disk space, though I have checked it several times. Is there anything I could do to solve this?
Seeing this video, is it possible to do something like this, where the camera is able to scan barcodes?
Hi sorry for the late reply! Check your email, I just replied
The github from the second link looks like it completely changed. I don't see the steps or the file anywhere at all in the github. How do I use OpenCV now?
Hi there! If I remember right, you shouldn't need the control hub file anymore for this to work. I made my own GitHub repository a while back that uses AprilTags and OpenCV and shouldn't need the file to work. I suggest cloning the project into Android Studio and then trying to deploy it to the robot. You will find the two files created in the video already in the project. Once deployed, try using the "Test Camera Stream" button on the Driver Station (21:44) and then point an AprilTag at your robot's camera to see if it detects it. If that works, then try modifying the Camera_Exa file in the project to fit your needs. Let me know if this helps!
Link to GitHub Project: github.com/CoveWolf/April_Tags_Camera_Detection_Example
@@CoveWolf Ok, thanks! I'll try it out
@@CoveWolf Thanks a lot! The code worked great!
6:16 I don't see that file in the GitHub, and I only see 5 steps, but there are 7 on your computer.
Edit: I found the file in the change log.
Where did you find it? The github looks like it completely changed and I can't find it anywhere.
@@FriendOfTheFire I barely remember how I did anything with the AprilTags. I don't know where the "change log" is, but I vaguely remember searching there for the version of the file that was shown in the video. You're right, when I go to the GitHub everything has been updated very recently. I hope this helps, good luck.
How are you able to connect an external camera to the REV Expansion Hub? What USB hub are you using?
Hello,
We use a REV Control Hub, which has 2 USB ports built into the device; this allows you to connect up to 2 cameras at once. If you are trying to connect a camera to a REV Expansion Hub, here is some documentation I found on how to go about it. As for which hub I would use, I don't know if there is a legal list or any rules, but I would use a USB hub by Anker, as that's the one used in the documentation.
Documentation: firsttechchallenge.blogspot.com/2018/12/external-cameras-for-bots-on-field.html?m=1
Anker USB Hub: www.amazon.com/Anker-Extended-MacBook-Surface-Notebook/dp/B07L32B9C2/ref=mp_s_a_1_1?crid=1RKBAJDHKJIQK&keywords=Anker+4-Port+USB+3.0+Ultra-Slim+Portable+Data+Hub+with+12W+Power+Adapter+for+MacBook%2C+Mac+Pro%2FMini%2C+iMac%2C+XPS%2C+Surface+Pro%2C+Notebook+PCs+and+More&qid=1669737007&sprefix=anker+4-port+usb+3.0+ultra-slim+portable+data+hub+with+12w+power+adapter+for+macbook%2C+mac+pro%2Fmini%2C+imac%2C+xps%2C+surface+pro%2C+notebook+pcs+and+more%2Caps%2C299&sr=8-1
Hope this helps!
will this work if I am coding this with onbot java instead of android studio
According to the OpenCV GitHub there is a version for OnBot Java, but I've never done it before. The video only shows you how to do it in Android Studio, so unfortunately you will have to find another video to show you how to do that.
How different would this be for teams using OnBot Java instead of Android Studio? I'm a rookie coach (with a rookie team) and we're looking into doing exactly this to read the images on the cones. Thanks.
OnBot Java is faster to upload to and makes it easier to manage the code (i.e. choosing which program runs at start). But Android Studio gives you access to more editing features than OnBot Java; it's also easier to read and has autocomplete, so it's easier to write the code.
Is there anyone that can show me how to do this on blocks?
I don't believe it is possible.