How to Use YOLOv8 with ROS2

COMMENTS • 160

  • @user-bj8ni6dm8c
    @user-bj8ni6dm8c 6 months ago

    It is one of the best tutorials about ROS with YOLO that I have seen on the Internet. I am a student researcher doing a project on robotic vision detection, and your video really helped me a ton. Thank you for your contribution!

    • @robotmania8896
      @robotmania8896  6 months ago +1

      Hi Jiangjing!
      Thanks for watching my video!
      It is my pleasure if this video has helped you!

  • @ericavram5646
    @ericavram5646 1 year ago

    Very good tutorial and nicely explained! I am working on a robot that should recognize a ball using ML and depth vision and this has been of great help. Thank you!

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi Eric Avram!
      Thanks for watching my video!
      It is my pleasure if this video has helped you!

  • @checksumff1248
    @checksumff1248 1 year ago

    Thank you. I've found this very useful. I appreciate your effort!

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi checksumff1248!
      Thanks for watching my video!
      It is my pleasure if this video has helped you!

  • @vilsonwenisbelle9041
    @vilsonwenisbelle9041 3 months ago

    Thanks a lot for your video, actually for all your videos, they are really helping me with my projects

    • @robotmania8896
      @robotmania8896  3 months ago +1

      Hi Vilson Wenis Belle!
      Thanks for watching my videos!
      It is my pleasure if these videos have helped you!

  • @shailigupta4086
    @shailigupta4086 11 months ago

    Love you professor. This video will make my future. 🥰

    • @robotmania8896
      @robotmania8896  11 months ago

      Hi shailigupta4086!
      Thanks for watching my video!
      It is my pleasure if this video has helped you!

  • @vivienchambost4415
    @vivienchambost4415 5 months ago

    Hey, it helped me a lot on a project I am doing. Easy to implement on any project, thanks a lot!

    • @robotmania8896
      @robotmania8896  5 months ago

      Hi Vivien Chambost!
      Thanks for watching my video!
      It is my pleasure if this program has helped you!

  • @newtonkariuki3104
    @newtonkariuki3104 8 months ago

    This was very helpful, thank you !

    • @robotmania8896
      @robotmania8896  8 months ago

      Hi Newton Kariuki!
      Thanks for watching my video!
      It is my pleasure if this video has helped you.

  • @sharingmylittleinsight
    @sharingmylittleinsight 2 months ago

    Thank you very much brother, it really helped me to finish my study project.

    • @robotmania8896
      @robotmania8896  2 months ago

      Hi Sharing My Little Insight!
      It is my pleasure if this video has helped you!

  • @kevinkipkorir3132
    @kevinkipkorir3132 8 months ago

    Thank you, you saved our project😀😀

    • @robotmania8896
      @robotmania8896  8 months ago

      Hi Kevin Kipkorir!
      Thanks for watching my video!
      It is my pleasure if this video has helped you.

  • @user-vf9xs6ou1u
    @user-vf9xs6ou1u 6 months ago

    very nice, thanks!!

    • @robotmania8896
      @robotmania8896  6 months ago

      Hi Roger Weerd!
      It is my pleasure if this video has helped you!

  • @alperenkeser
    @alperenkeser 7 months ago

    Great tutorial. Got a question though: is it possible to do "object pose estimation" with just RGB data? If possible, do you think using a point cloud (just depth data) instead of RGB would make it better for pose estimation?
    My case is detecting a pallet and its pose with a Kinect v1 camera.

    • @robotmania8896
      @robotmania8896  7 months ago +1

      Hi Alperen Keser!
      Thanks for watching my video!
      To do pose estimation, first of all, you have to recognize the object. So, I think it is not possible to do pose estimation with just depth data, because you will not be able to recognize the object. Also, you should have depth data to calculate the coordinates of the object, but if your objects are all on the same plane and you know the distance from the camera to that plane, you will be able to calculate the coordinates.
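
      For reference, a minimal sketch of the "known plane" case described above, using the pinhole camera model. The intrinsics (fx, fy, cx, cy) and the plane distance are placeholder values, not parameters from the video.

        # Back-project a detected pixel (u, v) to 3D, assuming the object lies on a
        # plane at a known distance Z from the camera (pinhole model).
        fx, fy = 525.0, 525.0      # focal lengths in pixels (placeholder values)
        cx, cy = 319.5, 239.5      # principal point (placeholder values)
        Z = 1.2                    # known camera-to-plane distance in meters

        def pixel_to_plane_point(u, v):
            X = (u - cx) * Z / fx
            Y = (v - cy) * Z / fy
            return X, Y, Z          # coordinates in the camera frame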

  • @dafech_911
    @dafech_911 2 months ago

    Hi. Thank you for your video. It's exactly what I've been looking for. However, I have a question. I have an RGB camera that will be working with the YOLOv8 custom model that I'm currently training, but I also have a ToF depth camera which can publish a point cloud depth map. What reference do the bounding boxes have? I need this information to match at least the coordinates of the corners with the depth map. Do you think that's possible?

    • @robotmania8896
      @robotmania8896  2 months ago +1

      Yes, I think it is possible to align frames from the RGB camera and the depth camera, but it will involve some relatively complex mathematical operations. If you would like to use RGB and depth cameras simultaneously, I recommend using a RealSense or ZED camera. It will probably save you a lot of time.

    • @dafech_911
      @dafech_911 2 months ago

      @@robotmania8896 thank you so much. I will be indeed using a RealSense. I will look into it to see if there are already some algorithms to do that. Wish you success :)
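
      For reference, a minimal sketch of RGB-depth alignment with a RealSense camera using pyrealsense2, which performs the frame alignment internally. Stream resolutions and the (x, y) pixel are illustrative, not values from the video.

        import pyrealsense2 as rs

        pipeline = rs.pipeline()
        config = rs.config()
        config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
        config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
        pipeline.start(config)

        # Align depth frames to the color frame so pixel (x, y) matches in both images.
        align = rs.align(rs.stream.color)
        frames = align.process(pipeline.wait_for_frames())
        depth_frame = frames.get_depth_frame()

        # Distance (meters) at, e.g., the center of a YOLO bounding box.
        x, y = 320, 240
        distance = depth_frame.get_distance(x, y)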

  • @cmtro
    @cmtro 1 year ago

    Very good....

  • @nhattran4833
    @nhattran4833 11 months ago

    Great tutorial, thanks for sharing. Could you make more videos about using semantic segmentation in ROS2?

    • @robotmania8896
      @robotmania8896  11 months ago +1

      Hi nhattran4833!
      Thanks for watching my video!
      I have actually created a video about semantic segmentation and ROS2. Here is the link. I hope it will help you.
      ua-cam.com/video/Z5czzGeRJ4o/v-deo.html

    • @nhattran4833
      @nhattran4833 11 months ago

      @@robotmania8896 Thanks, I really want to apply semantic segmentation in a mobile robot. Could you recommend some applications which apply it to mobile robots?

    • @robotmania8896
      @robotmania8896  10 months ago

      I think that semantic segmentation is more often used in conjunction with other methods rather than by itself. For example, it is used for control of mobile robots, like described in this paper.
      www.sciencedirect.com/science/article/abs/pii/S0957417421015189

  • @dafech_911
    @dafech_911 1 month ago

    Hi, again. I commented on your code a while back, but now I have another question. If you were to subscribe to multiple cameras at the same time, let's say one in the front, one on the right, one on the left and one in the back, would you need to use the threading library in your first code too? Thank you :)

    • @robotmania8896
      @robotmania8896  1 month ago

      Hi Daniel Felipe Cruz Hernández!
      In that case you have to define subscribers for each camera and run each of the subscribers in a different thread. In this tutorial, I am implementing this method.
      ua-cam.com/video/Z5czzGeRJ4o/v-deo.html
      Please refer to the “robot_control_ss.py” script lines 203~205.
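
      For reference, a minimal sketch of one camera subscriber per thread using rclpy's MultiThreadedExecutor. This is not the script from the video; the node and topic names are illustrative.

        import rclpy
        from rclpy.node import Node
        from rclpy.executors import MultiThreadedExecutor
        from sensor_msgs.msg import Image

        class CameraSubscriber(Node):
            def __init__(self, name, topic):
                super().__init__(name)
                self.subscription = self.create_subscription(Image, topic, self.callback, 10)

            def callback(self, msg):
                self.get_logger().info(f'{self.get_name()}: {msg.width}x{msg.height} frame')

        def main():
            rclpy.init()
            topics = {'front_cam': '/front/image_raw', 'rear_cam': '/rear/image_raw',
                      'left_cam': '/left/image_raw', 'right_cam': '/right/image_raw'}
            nodes = [CameraSubscriber(name, topic) for name, topic in topics.items()]
            executor = MultiThreadedExecutor(num_threads=len(nodes))
            for node in nodes:
                executor.add_node(node)
            try:
                executor.spin()   # callbacks from all four cameras run concurrently
            finally:
                rclpy.shutdown()

        if __name__ == '__main__':
            main()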

  • @najibmurshed
    @najibmurshed 3 months ago

    Thanks a lot for the video. I had some questions. I have a ros1 melodic environment and have a custom yolov9 model that detects specific objects. Can I still use your code and just replace my model .pt file instead of yours? If you have any suggestions please let me know.

    • @robotmania8896
      @robotmania8896  3 months ago

      Hi Najib Murshed!
      Thanks for watching my video!
      No, since the yolov8 and yolov9 models are different, you cannot use a yolov9 .pt file with yolov8.
      Since this code is made for ROS2, you cannot use it directly with ROS1, but the inference part should be the same. So, you have to change the declaration of the subscribers and the publisher.
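
      For reference, a minimal sketch of what the ROS1 (rospy) declarations could look like while keeping the inference part, assuming a Python 3 environment with ultralytics installed. The topic names and 'yolov8n.pt' are placeholders, not the files from the video.

        #!/usr/bin/env python3
        import rospy
        from sensor_msgs.msg import Image
        from cv_bridge import CvBridge
        from ultralytics import YOLO

        class Yolov8Ros1Node:
            def __init__(self):
                self.bridge = CvBridge()
                self.model = YOLO('yolov8n.pt')   # replace with your own weights
                # rospy-style declarations instead of rclpy's create_publisher/create_subscription
                self.pub = rospy.Publisher('/inference_result', Image, queue_size=1)
                rospy.Subscriber('/rgb_cam/image_raw', Image, self.camera_callback, queue_size=1)

            def camera_callback(self, msg):
                frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
                results = self.model(frame)        # the inference call is unchanged
                annotated = results[0].plot()      # image with boxes drawn on it
                self.pub.publish(self.bridge.cv2_to_imgmsg(annotated, encoding='bgr8'))

        if __name__ == '__main__':
            rospy.init_node('yolov8_ros1')
            Yolov8Ros1Node()
            rospy.spin()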

  • @DennisJhonMorenoOrtega
    @DennisJhonMorenoOrtega 1 year ago

    Hey, great video and very straightforward to compile. However, are you planning on posting some videos using the yolobot_control pkg as well? I don't have a joystick to use the joy node, but I did use the teleop_twist_keyboard pkg to move the vehicle around the world. The commands are swapped though: if I press "I" to move forward, the vehicle will move backwards, and so on with the other commands. Any thoughts? Thanks!

    • @robotmania8896
      @robotmania8896  1 year ago +1

      Hi Dennis Jhon Moreno Ortega!
      Thanks for watching my video!
      If I understand correctly, you are publishing “/yolobot/cmd_vel” using the keyboard. I think you can fix your issue by reversing the joint axis direction. In the “yolobot.urdf” file, reverse the axis direction at lines 219 and 246.
      Do not forget to execute “colcon build” after you correct the file.
      I hope this will help you.
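
      For illustration only (the exact snippets from the original reply are not reproduced here), reversing a joint axis in a URDF usually means flipping the sign of the axis vector, for example:

        <axis xyz="0 1 0"/>    becomes    <axis xyz="0 -1 0"/>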

    • @seethasubramanyan213
      @seethasubramanyan213 7 months ago

      @@robotmania8896 Sir, I am using my own URDF and the error shows: [differential_drive_controller]: Joint [left_wheel_base_joint] not found, plugin will not work

    • @seethasubramanyan213
      @seethasubramanyan213 7 months ago

      Could you please explain the URDF used in this video?

    • @robotmania8896
      @robotmania8896  7 months ago

      @@seethasubramanyan213 This error means that there is no joint named “left_wheel_base_joint” in your URDF. Please rename the joint which is connecting the body and the left wheel of your robot.

  • @pablogomez9401
    @pablogomez9401 1 year ago +1

    Hey, excellent tutorial and very well explained. But I have one issue when I try to use my own pretrained model. I pasted my 'best.pt' file into the yolobot_recognition/scripts folder, then in the Python script 'yolov8_ros2_pt.py' I wrote the name of my pretrained model. When executed, it prints an error saying that there is no file or directory called 'best.pt'. Any idea where the error is?

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi Pablo Gomez!
      Thanks for watching my video!
      Please put your ‘best.pt’ file in the home directory (/home/”user name”) or specify the absolute path in the ‘yolov8_ros2_pt.py’ script.
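
      For reference, a minimal sketch of the two options above, assuming the ultralytics API; the absolute path shown is hypothetical.

        from pathlib import Path
        from ultralytics import YOLO

        # Option 1: weight file placed in the home directory (/home/<user>/best.pt)
        model = YOLO(str(Path.home() / 'best.pt'))

        # Option 2: an absolute path hard-coded in yolov8_ros2_pt.py (hypothetical path)
        # model = YOLO('/home/user/ros2_ws/src/yolobot_recognition/scripts/best.pt')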

    • @pablogomez9401
      @pablogomez9401 1 year ago

      @@robotmania8896 Thanks, worked like a charm!

  • @towerboi-zg3it
    @towerboi-zg3it 8 months ago

    I don't know why, but when I run the code you showed, I get the error "Publisher already registered for provided node name. If this is due to multiple nodes with the same name then all logs for that logger name will go out over the existing publisher. As soon as any node with that name is destructed it will unregister the publisher, preventing any further logs for that name from being published on the rosout topic", and the node is duplicated.

    • @robotmania8896
      @robotmania8896  8 months ago

      Hi Towerboi!
      Thanks for watching my video!
      Does this error have a negative effect on your simulation? If not, just leave it as it is, since it might be a ROS bug.

  • @howardkanginan
    @howardkanginan 8 months ago

    Hi!
    I have an error after I source the yolobot setup.bash. When I run the ros2 launch yolobot_gazebo yolobot_launch.py command, it says gazebo_ros not found. I've installed the ROS2 Iron Gazebo package as well. How can I fix this?

    • @robotmania8896
      @robotmania8896  8 months ago

      Hi Howard Kang!
      Thanks for watching my video!
      If the error says “gazebo_ros not found”, please install “ros-iron-gazebo-ros” package. Also note that this project was made with ROS Foxy, so it may not work with ROS Iron.

  • @mdmahedihassan2444
    @mdmahedihassan2444 1 year ago

    Hello, one problem: when I run ros2 topic list,
    the yolobot inference topic is not there.
    How can I solve it?

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi mdmahedihassan2444!
      Thanks for watching my video!
      As I have explained in the video from 11:15, please run the “source” command before executing the “ros2 topic list” command.

  • @alocatnaf
    @alocatnaf 10 months ago

    Hi,
    great tutorial! I'm wondering where I can specify the YOLO inference parameters, like imgsz, conf, max_det?
    Thanks in advance :)

    • @robotmania8896
      @robotmania8896  10 months ago

      Hi alocatnaf!
      Thanks for watching my video!
      In this code, you should do the post-processing yourself. For example, if you want to show only objects with confidence above some value, you should extract the confidence parameter from the results (yolov8_ros2_pt.py line 41) and apply an “if” statement when plotting inference results.
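
      For reference, a minimal sketch of that post-processing idea, assuming the ultralytics results API; the 0.5 threshold and variable names are illustrative.

        conf_threshold = 0.5                     # illustrative value
        results = self.model(frame)              # 'frame' is the image from the camera callback
        for box in results[0].boxes:
            conf = float(box.conf)
            if conf < conf_threshold:            # the "if" statement mentioned above
                continue
            cls_name = self.model.names[int(box.cls)]
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            # keep / draw / publish only the detections above the threshold

      Newer ultralytics releases also accept these as prediction arguments, e.g. model(frame, conf=0.5, imgsz=640, max_det=10).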

    • @alocatnaf
      @alocatnaf 10 months ago

      @@robotmania8896 thank you very much!

  • @sharke0062
    @sharke0062 3 months ago

    Hello! Thank you for this video about implementing YOLOv8 with Gazebo and ROS2. I have a question though. I have trained a YOLOv8 model on a custom dataset and have the best.pt file from the training. How do I then load this best.pt file? I tried replacing the path in the yolobot_recognition scripts with the path to the best.pt file, but I keep getting the error "No such file or directory". I'm not sure whether the path I wrote is wrong or it is some other issue. Any suggestions are appreciated and thank you again!

    • @robotmania8896
      @robotmania8896  3 months ago +1

      Hi Sharke00!
      Thanks for watching my video!
      I think this happens because ROS is searching for the weight file in the wrong directory. I will fix it later, but as a quick fix, in “yolov8_ros2_pt.py” modify line 19 as
      self.model = YOLO('best.pt')
      and place the “best.pt” file in the home directory. It should work.

    • @sharke0062
      @sharke0062 3 months ago

      @@robotmania8896 Yes, apparently the program made a new directory and once I placed the pt file there it started working. Another question I have is if I wanted to use the recognition package with other projects that use different robot models, what else do I need to do besides including the package in the main launch file? The console seems to just stop responding and no output (number, type of object detected) or error is given. Thank you for responding!

    • @robotmania8896
      @robotmania8896  3 months ago +1

      @@sharke0062 I don’t think that you have to do anything special except check whether the camera on your robot publishes the “rgb_cam/image_raw” topic. Sometimes Gazebo may take a long time to launch, especially if the Gazebo world contains a lot of objects, so maybe you just have to wait.

    • @sharke0062
      @sharke0062 3 months ago

      @@robotmania8896 I see. Thank you so much!

  • @anshbhatia4805
    @anshbhatia4805 1 year ago

    Great video! Can you please guide me on how I can integrate this YOLOv8 with ROS2 code with a camera for real-time object detection?

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi anshbhatia4805!
      Thanks for watching my video!
      What do you mean by “integrate”? In this tutorial I have already explained how to use yolov8 with a camera.

  • @carnivorah8837
    @carnivorah8837 1 year ago +2

    Hello! I tried installing the project on Ubuntu 22.04 and ROS-Humble and everything went okay until I got to simulation, where everything launches correctly but there are no messages being published on the topic and no camera feed appears in RVIZ. Any solutions? Thanks!

    • @DennisJhonMorenoOrtega
      @DennisJhonMorenoOrtega 1 year ago

      Did you type sudo apt update and sudo apt upgrade in the terminal after installing the pkgs? I'm using the same distribution as yours and I was able to see the messages after running those commands!

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi Carni Vorah!
      Thanks for watching my video!
      It is difficult to say only from the information you gave me. Are there any other errors in the terminal?
      Note that if you are, for example, using “ros2 topic echo” command to check topic content, you should execute “source” command before.

    • @user-og6un8qz8n
      @user-og6un8qz8n 1 year ago +1

      @@robotmania8896 I did everything several times, tried it on a virtual machine and on different Ubuntu versions, but the problem remained: the camera is empty.

    • @user-og6un8qz8n
      @user-og6un8qz8n 1 year ago

      I did it almost a month ago. It was something like:
      [W NNPACK.cpp:64] Could not initialize NNPACK! Reason: Unsupported hardware.
      YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients

    • @christopherbousquet-jette4301
      @christopherbousquet-jette4301 17 days ago

      same issues

  • @jungahkwak6343
    @jungahkwak6343 1 year ago

    Thank you for your REALLY NICE VIDEO!
    I'm following the video, but I have some issues during 'colcon build'.
    [error] ModuleNotFoundError: No module named 'catkin_pkg'
    I tried to solve the error
    1. pip install catkin_pkg
    2. source /opt/ros/foxy/setup.bash
    3. added “source /opt/ros/foxy/setup.bash” in “.bashrc” file
    but it's not working.
    Any idea about this error? Thank you!

    • @robotmania8896
      @robotmania8896  1 year ago

      You don’t need catkin with ros2. ROS2 packages are built using ament_cmake. Have you tried adding “source /opt/ros/foxy/setup.bash” in “.bashrc” file and rebooting?

  • @y8fj
    @y8fj 6 months ago

    Hi once again! In my work I am using your code to compare the performance of the raw YOLOv8n model with one accelerated with DeepStream. Do I need to cite you or someone else?

    • @robotmania8896
      @robotmania8896  6 months ago

      Hi Dgh!
      Since I am providing only the zip file, I think it is difficult to cite. So, I think citing is not necessary.

    • @y8fj
      @y8fj 6 months ago

      @@robotmania8896 ok, clear then. Thanks for your code!

  • @nhatnet479
    @nhatnet479 5 days ago

    Thanks for this project. Could you make a similar tutorial using YOLOv8 with TensorRT?

    • @robotmania8896
      @robotmania8896  4 days ago

      Hi Nhat Net!
      Thanks for watching my video!
      I am currently not planning to make a tutorial about yolov8 and tensorRT but I have several videos related to it. Where exactly are you experiencing a problem?
      ua-cam.com/video/xqroBkpf3lY/v-deo.html
      ua-cam.com/video/aWDFtBPN2HM/v-deo.html

    • @nhatnet479
      @nhatnet479 4 days ago

      @@robotmania8896 Does this tutorial run inference on the GPU or the CPU of the Jetson Nano?

    • @robotmania8896
      @robotmania8896  3 days ago

      In the video I am using CPU for inference, but GPU can also be used.
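
      For reference, a minimal sketch of switching between CPU and GPU inference, assuming the ultralytics device argument; 'yolov8n.pt' and 'frame' are placeholders.

        from ultralytics import YOLO

        model = YOLO('yolov8n.pt')

        results_cpu = model(frame, device='cpu')   # what the video uses
        results_gpu = model(frame, device=0)       # CUDA device 0 (needs a CUDA-enabled PyTorch build)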

  • @aishRobotics
    @aishRobotics 1 month ago

    Hey, can I use the same code on a Jetson Nano with ROS 2 and a USB camera in real time?

    • @robotmania8896
      @robotmania8896  1 month ago

      Hi AishRobotics!
      Thanks for watching my video!
      Yes, you can. Just make sure that your USB camera publishes “rgb_cam/image_raw” topic.
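
      For reference, a minimal sketch of one way to meet that requirement when a USB camera driver publishes under a different name; the '/image_raw' source topic is an assumption about your driver. A topic remapping on the driver's launch command (--ros-args -r) achieves the same without extra code.

        import rclpy
        from rclpy.node import Node
        from sensor_msgs.msg import Image

        class CameraRelay(Node):
            """Republish the USB camera topic under the name the recognition node expects."""
            def __init__(self):
                super().__init__('camera_relay')
                self.pub = self.create_publisher(Image, 'rgb_cam/image_raw', 10)
                # '/image_raw' is an assumption; check your driver with `ros2 topic list`
                self.sub = self.create_subscription(Image, '/image_raw', self.pub.publish, 10)

        def main():
            rclpy.init()
            rclpy.spin(CameraRelay())
            rclpy.shutdown()

        if __name__ == '__main__':
            main()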

  • @hammadsafeer4283
    @hammadsafeer4283 8 months ago

    Hi, thanks for the video!
    I trained a YOLOv8 model on Colab to detect traffic cones.
    I have a ZED 2i stereo camera, and I want to integrate YOLOv8 with ROS2.

    • @robotmania8896
      @robotmania8896  8 months ago

      Hi Hammad Safeer!
      Thanks for watching my video!
      Do I understand correctly that you want to do inference using yolov8 with ZED and ROS2?

    • @hammadsafeer4283
      @hammadsafeer4283 8 months ago

      @@robotmania8896 Exactly!

  • @architlahiri3110
    @architlahiri3110 1 year ago +1

    Hey, if you don't mind, can you help me out? I'm facing some issues.
    I downloaded all the code from the link in the video description and followed all the steps, but did not get any output, as in there was no message output, just blank, when I ran "ros2 topic echo /Yolov8_Inference". However, all the models and the robot are loaded into Gazebo just fine. I compared the topics published in my run versus the video, and I am missing
    /rgb_cam/camera_info
    /rgb_cam/image_raw/compressed
    /rgb_cam/image_raw/compressedDepth
    /yolobot/odom
    Using rviz2, the topic image_raw can be found under Yolov8_Inference, but when I try to add it, there is no image window and it shows "no image".
    My Ubuntu version is 20.04.6, using the latest Foxy distribution.

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi architlahiri3110!
      Thanks for watching my video!
      I personally have never faced such an issue. Considering that you are missing camera-related topics, maybe you don’t have the Gazebo plugins. Please refer to this page. Maybe the “sudo apt-get install ros-${ROS_DISTRO}-ros-gz” command will solve your problem.
      gazebosim.org/docs/latest/ros_installation

    • @ChristianDiaryUG
      @ChristianDiaryUG 1 year ago

      I am at this stage. I don't know whether you have moved beyond this. Those are the exact topics I am missing.

    • @bedirhanselimyesilyurt998
      @bedirhanselimyesilyurt998 11 months ago

      Me too :( I tried that but nothing changed.

    • @architlahiri3110
      @architlahiri3110 11 months ago +2

      I'm replying here for everyone in the thread:
      sudo apt install ros-foxy-gazebo-ros-pkgs
      used this to fix it
      Good luck :)

    • @bedirhanselimyesilyurt998
      @bedirhanselimyesilyurt998 11 months ago +1

      I am working on Humble and changed the command accordingly, but it did not work either.

  • @kennetheladistu3356
    @kennetheladistu3356 29 days ago

    Can I use this tutorial to integrate YOLOv8-OBB in ROS2 using Humble?

    • @robotmania8896
      @robotmania8896  28 days ago

      Hi Kenneth Eladistu!
      Thanks for watching my video!
      Yes, the way of integration should be pretty much the same.

  • @user-kr8gd7nn2t
    @user-kr8gd7nn2t 5 months ago

    Hi, this video really helped me with my project, but I am not getting the images when I look in rviz2. Could you help me out with that? Also, when I try to install ROS Foxy I get a "no such file or directory" error.

    • @robotmania8896
      @robotmania8896  5 months ago

      Hi Izaq!
      Thanks for watching my video!
      Do you have any error messages in the terminal?

    • @user-kr8gd7nn2t
      @user-kr8gd7nn2t 5 months ago

      @@robotmania8896 Yeah, it says "no such file or directory". I have Gazebo 11.10.2 and the Humble packages. Also, how do I speed up the Gazebo simulation to detect things, and which camera are we using? Did you train YOLOv8 first? Also, after colcon build it asks me to connect a joystick, what is this? And when I run ros2 topic echo /yolov8_inference I do not get any parameters back.
      If you don't mind, can I have your email address to ask more questions? I have to submit this project at the end of this month, please.

    • @robotmania8896
      @robotmania8896  5 months ago

      Have you built the project successfully? To speed up the inference, you should use a computer with a GPU. Since it is a simulation, the camera parameters are defined in the SDF file. No, in this tutorial I haven’t trained YOLO; I just used a provided model. To operate the robot, you should use a joypad. You should run the source command before doing “ros2 topic echo /yolov8_inference”. Here is my e-mail: robotmania8867@yahoo.com

  • @nourishcherish9593
    @nourishcherish9593 1 month ago

    Is this project written by you? Is there a GitHub repo? I am not allowed to access Google Drive at my work.

    • @robotmania8896
      @robotmania8896  1 month ago

      Hi NOURISH CHERISH!
      Thanks for watching my video!
      Yes, this work is written by me. There is no GitHub repo. You may download the zip file at home and send it to your workplace by email.

  • @Moon-ue8qb
    @Moon-ue8qb 6 months ago

    Hi, thank you for making the video!! :)
    But I have a problem.
    I did < ros2 topic echo /Yolov8_Inference > and then I got
    WARNING: topic [/Yolov8_Inference] does not appear to be published yet
    Could not determine the type for the passed topic
    How can I fix this error?
    sudo apt-get install ros-humble-ros-ign-bridge
    sudo apt-get install ros-humble-ros-pkgs
    sudo apt-get install ros-${ROS_DISTRO}-ros-gz
    I tried them but still have the error. Please help me.

    • @robotmania8896
      @robotmania8896  6 months ago

      Hi Moon!
      Thanks for watching my video!
      Did you execute the “source” command before executing the “ros2 topic echo” command? Otherwise, you will get an error.

    • @user-kr8gd7nn2t
      @user-kr8gd7nn2t 5 months ago

      I did not get any error, but nothing happened after running this command.

  • @manishnayak9759
    @manishnayak9759 9 months ago

    How do I build YOLOv8 for ROS Noetic?

    • @robotmania8896
      @robotmania8896  9 months ago

      Hi Manish Nayak!
      Thanks for watching my video!
      Since there is a Python implementation of yolov8, you don’t have to build it. Just install the required libraries for yolov8 using pip.

  • @akentertainment7911
    @akentertainment7911 7 days ago

    Hey, will it work on Gazebo 11 too?

    • @robotmania8896
      @robotmania8896  6 days ago

      Hi AK Entertainment!
      Thanks for watching my video!
      Yes, it should work on gazebo 11.

  • @lordfarquad-by1dq
    @lordfarquad-by1dq 11 months ago

    What about YOLOv8-seg?

    • @robotmania8896
      @robotmania8896  11 months ago

      Hi lordfarquad-by1dq!
      Thanks for watching my video!
      The segmentation result format is slightly different from recognition, but you should be able to publish it with little change to the code. I am planning to release a new video within a few days regarding semantic segmentation and yolo, it may also help you.
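
      For reference, a minimal sketch of how the segmentation output differs from detection, assuming the ultralytics results API; 'yolov8n-seg.pt' and 'frame' are placeholders.

        from ultralytics import YOLO

        model = YOLO('yolov8n-seg.pt')   # segmentation weights instead of detection weights
        results = model(frame)

        r = results[0]
        if r.masks is not None:
            for box, polygon in zip(r.boxes, r.masks.xy):
                cls_name = model.names[int(box.cls)]
                # 'polygon' is an (N, 2) array of mask contour points in image coordinates;
                # publish these points instead of (or alongside) the box corners.
                print(cls_name, polygon.shape)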

  • @BadBrother
    @BadBrother 4 months ago

    Can YOLOv8 track a simple track using ros?

    • @robotmania8896
      @robotmania8896  4 months ago

      Hi Bad Brother!
      Thanks for watching my video!
      What do you mean by “simple track”? If it is something like a road, you can use semantic segmentation to extract the road part from an image.

    • @BadBrother
      @BadBrother 4 months ago

      @@robotmania8896 I mean lane tracking, so it detects the trajectory of the black tape. Oh okay, thank you for your advice. I would like to ask you a question. Actually, I have never tried YOLO. If I want to start learning, where should I start so that I can do lane tracking using YOLO? Thanks

    • @robotmania8896
      @robotmania8896  4 months ago

      @@BadBrother If you need to detect black tape, you don’t have to do inference using YOLO. You may just use HSV decomposition to detect black color. Here is an example video. ua-cam.com/video/hdnuykRwMmI/v-deo.html
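
      For reference, a minimal OpenCV sketch of the HSV idea mentioned above; the threshold values are illustrative and will need tuning for your lighting.

        import cv2
        import numpy as np

        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)   # 'frame' is a BGR camera image

        # Black tape: any hue and saturation, low value (brightness).
        # The upper V bound of 60 is only a starting point.
        mask = cv2.inRange(hsv, np.array([0, 0, 0]), np.array([180, 255, 60]))

        # Find the tape contours and steer toward their centroid.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)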

    • @BadBrother
      @BadBrother 4 months ago

      @@robotmania8896 I really appreciate your response. I apologise for confusing you. The robot I want to build uses a camera to detect the two black tapes on the left and right, and the path forms a trajectory. The camera is connected to the Jetson Nano and the motor is driven by the Arduino Uno.

    • @robotmania8896
      @robotmania8896  4 months ago +1

      I understand your problem. If you know exactly the object and color you have to detect, I think you don’t necessarily have to use YOLO. You may do color detection or use an infrared sensor to move along the line.

  • @user-no8zu1qw2h
    @user-no8zu1qw2h 5 months ago

    How can I use this code with YOLOv5?

    • @robotmania8896
      @robotmania8896  5 months ago

      Hi ថាន្នី សុគុណ!
      If you would like to use YoloV5, please refer to this video.
      ua-cam.com/video/594Gmkdo-_s/v-deo.html

    • @user-no8zu1qw2h
      @user-no8zu1qw2h 5 months ago

      @robotmania8896 I have watched it, but when I try to run it with ros2 run ... it cannot find the modules 'models' and 'utils'.

    • @robotmania8896
      @robotmania8896  5 months ago

      Yes, to run that code using “ros2 run” you have to modify the CMake file. You can run that code using “python3”.

  • @nourishcherish9593
    @nourishcherish9593 1 month ago

    I see a few syntax mistakes in your code just by looking at it.

    • @robotmania8896
      @robotmania8896  1 month ago

      Yeah, there could be syntax mistakes. Please let me know if you have found any.

    • @nourishcherish9593
      @nourishcherish9593 1 month ago

      Actually, it's running without errors. Wtf. Like, you just have a line that says self.subscription... how is that not causing an error? Do I not know Python?

  • @nikhill3102
    @nikhill3102 2 months ago

    I get an error with sudo apt install gazebo9.

    • @robotmania8896
      @robotmania8896  2 months ago

      Hi Nikhil Kulkarni!
      Thanks for watching my video!
      On which version of Ubuntu are you trying to install gazebo?

    • @nikhill3102
      @nikhill3102 2 months ago

      @@robotmania8896 22.04

    • @robotmania8896
      @robotmania8896  2 months ago

      For 22.04, "sudo apt install gazebo" should work.

    • @nikhill3102
      @nikhill3102 2 months ago

      @@robotmania8896 I get a "no candidate" error for gazebo.

    • @nikhill3102
      @nikhill3102 2 months ago

      @@robotmania8896 I tried, but it shows the error: no candidate for 'gazebo'.

  • @adityajambhale915
    @adityajambhale915 9 months ago +1

    Hello, I am facing issues while installing ultralytics (its build dependencies are not satisfied). I am using Ubuntu 20.04 and I tried with Python 3.8.10 and Python 3.7.5, but it is still giving an error. Please suggest what to do; I am not able to find a solution anywhere else 🥲

    • @aadityanair2488
      @aadityanair2488 9 months ago +1

      Nice Issues

    • @robotmania8896
      @robotmania8896  9 months ago

      Hi Aditya Jambhale!
      Thanks for watching my video!
      What error exactly do you have?

  • @user-xg7dk1wl3s
    @user-xg7dk1wl3s 1 year ago

    Hello sir, thank you for the video, I am learning a lot from you. I tried to implement the project step by step, but firstly the gazebo folder didn't appear in my home directory after unhiding all contents of the home folder. I searched manually and found a gazebo-9 folder, but its contents were not similar, though it still had a model folder. Secondly, on colcon build I get this error:
    CMake Error at CMakeLists.txt:19 (find_package):
    By not providing "Findament_cmake.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "ament_cmake", but CMake did not find one.
    Could not find a package configuration file provided by "ament_cmake" with any of the following names:
    ament_cmakeConfig.cmake
    ament_cmake-config.cmake
    Add the installation prefix of "ament_cmake" to CMAKE_PREFIX_PATH or set "ament_cmake_DIR" to a directory containing one of the above files. If "ament_cmake" provides a separate development package or SDK, be sure it has been installed.

    • @user-xg7dk1wl3s
      @user-xg7dk1wl3s 1 year ago

      I have tried adding source /opt/ros/foxy/setup.bash at the bottom of the file, but it still doesn't build the packages.

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi @user-xg7dk1wl3s!
      Thanks for watching my video!
      Please open the terminal and execute the “gazebo” command. The folder should appear.
      Also, add “source /opt/ros/foxy/setup.bash” to your “.bashrc” file. This should solve ament_cmake related error. Don’t forget to reboot your computer after altering the “.bashrc” file.

  • @raphaelcrespopereira3206
    @raphaelcrespopereira3206 1 year ago

    I can't get the yolobot_inference folder to be listed.
    It keeps showing this error:
    ModuleNotFoundError: No module named 'yolov8_msgs.yolov8_msgs_s__rosidl_typesupport_c'

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi raphaelcrespopereira3206!
      Thanks for watching my video!
      Hmm… I have never had such an error. Which version of ROS are you using?

    • @raphaelcrespopereira3206
      @raphaelcrespopereira3206 1 year ago

      @@robotmania8896 ROS2 Foxy on Ubuntu 20.04

    • @raphaelcrespopereira3206
      @raphaelcrespopereira3206 1 year ago

      @@robotmania8896 I reinstalled everything and now the inference folder is listed, but when I do the echo part it does not show the camera working, and when I open RViz the inference image shows “no image”.

    • @ChristianDiaryUG
      @ChristianDiaryUG 1 year ago

      I am at this stage. I don't know whether you have moved beyond this. Those are the exact topics I am missing.

    • @raphaelcrespopereira3206
      @raphaelcrespopereira3206 1 year ago +1

      @@ChristianDiaryUG I fixed it by running Gazebo 11 with ROS2 Humble on Ubuntu 22.04 and adding the additional step of installing the package for communication between ROS2 and Gazebo: sudo apt-get install ros-humble-ros-ign-bridge

  • @oymdental2602
    @oymdental2602 1 year ago

    Are you offering any classes? I'm looking to contact you.

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi OYM dental!
      Thanks for watching my video!
      No, I am not providing any classes, since I have another job which takes almost all my time. But if you have any questions, maybe I can answer them.

  • @loowaijun2960
    @loowaijun2960 1 year ago

    Hi, it is a very good tutorial. But I am facing this error during the command "colcon build":
    CMake Error at CMakeLists.txt:19 (find_package):
    By not providing "Findament_cmake.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "ament_cmake", but CMake did not find one.
    Could not find a package configuration file provided by "ament_cmake" with any of the following names:
    ament_cmakeConfig.cmake
    ament_cmake-config.cmake
    Add the installation prefix of "ament_cmake" to CMAKE_PREFIX_PATH or set "ament_cmake_DIR" to a directory containing one of the above files. If "ament_cmake" provides a separate development package or SDK, be sure it has been installed.
    Do you have any idea how to solve this? Thank you!

    • @robotmania8896
      @robotmania8896  1 year ago

      Hi Loo Waijun!
      Thanks for watching my video!
      Have you added “source /opt/ros/foxy/setup.bash” in your “.bashrc” file?

    • @loowaijun2960
      @loowaijun2960 1 year ago

      @@robotmania8896 It works. Thanks a lot.
      After that, I ran the command "ros2 topic echo /Yolov8_Inference". But it doesn't show anything. Do you have any idea about this?

    • @robotmania8896
      @robotmania8896  1 year ago

      @@loowaijun2960 I explain how to execute the “ros2 topic list” command in the video. Please watch starting from 11:14.

    • @loowaijun2960
      @loowaijun2960 1 year ago

      @@robotmania8896 Yeah, I followed it, but the recognized object information does not show.

    • @robotmania8896
      @robotmania8896  1 year ago

      @@loowaijun2960 Do bounding boxes appear in RViz when the information in the terminal is not shown?