God bless you, king! I'm going to implement this for my visual servoing application with a Universal Robots UR3. I've implemented most of your tutorials. Any advice for implementing this one (how can I send the extracted position to the robot arm via Ethernet, using Python and YOLOv8 OBB + ROS)?
Hi Mohamed Assanhaji!
Thanks for watching my video!
If you would like to send information over Ethernet, you can use socket communication.
The code from this tutorial will help you with the Ethernet communication.
ua-cam.com/video/40p2avodFV0/v-deo.html
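As a minimal sketch of the socket approach mentioned above: the extracted position can be encoded as a short ASCII message and sent to the robot controller over TCP. The IP address and port below are assumptions (port 30002 is commonly used for the UR secondary interface); check your own controller's network settings before using them.

```python
import socket

ROBOT_IP = "192.168.0.10"   # hypothetical; replace with your UR3 controller's IP
ROBOT_PORT = 30002          # assumed UR secondary-interface port

def encode_position(x, y, z):
    """Pack an (x, y, z) position into a simple ASCII message."""
    return f"({x:.4f},{y:.4f},{z:.4f})\n".encode("ascii")

def send_position(x, y, z):
    """Open a TCP connection to the robot and send the encoded position."""
    with socket.create_connection((ROBOT_IP, ROBOT_PORT), timeout=2.0) as s:
        s.sendall(encode_position(x, y, z))
```

The message format here is arbitrary; the receiving side (a URScript program or a ROS node on the robot) must parse the same format.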
How do we know where the top part of the bolt is? If we don't know it, the robot will make a mistake when placing it.
Can you please make a tutorial about it: finding the top part, so the robot will pick the bolt by that part?
Hi, can you please create a video using computer vision (implementing different models, like YOLOv8, etc.) via Isaac ROS/ROS 2 inside the Isaac Sim environment?
Thanks
Thank you for the suggestion! I will consider it!
Awesome stuff! This is exactly what I need for my robot. Will it work on a Jetson Xavier using exactly the same steps?
Hi Peter Sobotta!
Thanks for watching my video!
Yes, it should work.
And may I ask why you did not use Roboflow for annotating your dataset? Is using LabelImg for annotation faster?
You may use whatever tool is convenient for you. I prefer tools that don't require a login or an Internet connection.
On a serious note: after detecting a certain object (e.g., a door handle) with an RGBD camera, can you pinpoint the location of the handle in X, Y, and Z? Maybe demonstrate it in a video.
Yes, it is possible. I actually have several videos describing this technique. Here are some of them.
ua-cam.com/video/--81OoXMvlw/v-deo.html
ua-cam.com/video/oKaLyow7hWU/v-deo.html
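The core of the technique the videos above cover can be sketched with the pinhole camera model: given a detected pixel (u, v), the depth value from the RGBD camera, and the camera intrinsics (focal lengths fx, fy and principal point cx, cy), you can back-project to a 3D point. This is a generic sketch, not code from the videos; the intrinsics must come from your own camera's calibration.

```python
def pixel_to_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with depth (meters) to camera-frame X, Y, Z
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# Example with made-up intrinsics (fx = fy = 600, principal point at image center):
# a pixel exactly at the principal point maps to X = 0, Y = 0, Z = depth.
print(pixel_to_xyz(320, 240, 1.0, 600.0, 600.0, 320.0, 240.0))  # → (0.0, 0.0, 1.0)
```

Note that the result is in the camera frame; to command a robot you still need the camera-to-robot transform (hand-eye calibration).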
Wow....tqtqtqttqtqtq
Is there a way to measure the speed of a tracked moving object using DeepStream?
Hi Yang Liu!
Thanks for watching my video!
If you know the real-world coordinates of the object you are tracking, it is easy to calculate its velocity. But I don't think there is a module in DeepStream to do that. Also, to do this, you would have to use an RGBD camera.
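As a minimal sketch of the calculation described above (assuming you already have real-world coordinates from two consecutive frames): velocity is just the finite difference of position over the frame interval. The function names here are illustrative, not part of DeepStream.

```python
def estimate_velocity(p_prev, p_curr, dt):
    """Estimate per-axis velocity (units/second) from two 3D positions
    taken dt seconds apart, via finite difference (curr - prev) / dt."""
    return tuple((c - p) / dt for p, c in zip(p_prev, p_curr))

# Object moved 0.1 m in X and 0.05 m in Y over one 1/30 s frame:
vx, vy, vz = estimate_velocity((0.0, 0.0, 1.0), (0.1, 0.05, 1.0), 1 / 30)
print(vx, vy, vz)  # → 3.0 1.5 0.0 (meters per second)
```

In practice you would smooth this over several frames (e.g., a moving average or Kalman filter), since per-frame depth noise makes the raw difference jumpy.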
@@robotmania8896 Thanks
Is there a way to generate synthetic data for YOLOv8 OBB?
Hi Francy Llamado!
Thanks for watching my video!
I have never done it, but you may use DALL·E to generate a dataset.
Are... are you... THE CHOSEN ONE!? I WILL SUBSCRIBE TO YOU, PLEASE SHOW MORE OF YOUR WISDOM
Hi Ng Kean!
Thanks for watching my video!
Yeah, I will try not to fall to the dark side...