GMapping | ROS with Webots | Robotic Software PicoDegree | Part 4 | Best mapping package

  • Published 14 Sep 2021
  • 0:15 Introduction
    2:35 Glimpse of GMapping
    3:59 Implementation
    8:57 Start GMapping
    10:52 Mistake
    12:50 Localization
    14:50 GPS and IMU
    15:51 Localization Node (base_link)
    17:59 Lidar link
    19:58 Map server package
    22:58 GMapping parameters
    #Gmapping #SLAM #Localization #Mapping
    Introduction Video to Localization, Mapping, SLAM, GMapping: • What is GMapping ? | T...
    The master launch file of the Robotics Picodegree launches the following:
    1. Home Webots world, which contains a home and our stark robot. (1st Video)
    2. Robot_description - the robot URDF or xacro file describing the different frames and links of the robot. (2nd Video)
    3. Teleop - drive the robot's wheels and actuators with keyboard keys, and publish dynamic transforms for the continuous joints on the robot, such as the linear joint, camera, etc. (3rd Video)
    4. Mapping (present video) - a localization node that publishes the base_link and lidar link transforms required as inputs for gmapping, after which we load the saved map in YAML format using the map_server package.
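    For reference, the YAML file loaded by map_server pairs the saved occupancy image with its metadata. The filenames and values below are illustrative, not the ones from this project:

    ```yaml
    # home_map.yaml -- metadata read by map_server (filenames illustrative)
    image: home_map.pgm          # occupancy image written by map_saver
    resolution: 0.05             # meters per pixel
    origin: [-10.0, -10.0, 0.0]  # (x, y, yaw) of the lower-left pixel in the map frame
    negate: 0                    # 0: white = free, black = occupied
    occupied_thresh: 0.65        # cells above this probability are marked occupied
    free_thresh: 0.196           # cells below this probability are marked free
    ```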
    What does the gmapping map topic actually represent?
    As the robot moves around, the map continues to build. This map is a 2D occupancy grid: the environment is represented as a regular grid of cells, where the value of each cell encodes its state as free, occupied, or undefined, i.e. unmapped. Here the greenish-grey cells have unknown occupancy values, the white cells are free, and the black cells are occupied.
    The occupancy value of a cell is determined with a probabilistic approach that takes the laser data as input to estimate the distances from the lidar to surrounding objects. Every time a new measurement is made, the cell value is updated in a Bayesian fashion. The resulting model can be used directly as a map of the environment in navigation tasks such as path planning, obstacle avoidance, and pose estimation. The main advantages of this method are that it is easy to construct and that it can be made as accurate as necessary.
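    The Bayesian cell update described above is usually done in log-odds form, the representation most occupancy-grid implementations use internally. A minimal sketch (the measurement probability 0.7 is an illustrative value, not a gmapping default):

    ```python
    import math

    def prob_to_logodds(p):
        """Convert an occupancy probability to log-odds."""
        return math.log(p / (1.0 - p))

    def logodds_to_prob(l):
        """Convert log-odds back to an occupancy probability."""
        return 1.0 - 1.0 / (1.0 + math.exp(l))

    def update_cell(l_prev, p_meas):
        """Fuse one measurement into a cell: in log-odds form, the
        Bayesian update is a simple addition (prior 0.5 -> log-odds 0)."""
        return l_prev + prob_to_logodds(p_meas)

    # Start from an unknown cell (p = 0.5, log-odds = 0) and apply three
    # "hit" measurements, each saying the cell is occupied with p = 0.7.
    l = 0.0
    for _ in range(3):
        l = update_cell(l, 0.7)
    p = logodds_to_prob(l)  # the cell converges toward "occupied"
    ```

    Repeated consistent hits push the cell firmly toward occupied, while contradictory measurements (p_meas < 0.5) subtract log-odds and push it back toward free; this is why the grid stabilises as the robot rescans an area.
    
    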
    How do we provide a pose estimate of the robot in Webots?
    Robot localization algorithms use information from different sensors, which range from relative to absolute positioning measurements. Relative measurements come from sensors such as wheel odometry and the IMU, whose readings are used incrementally, together with the robot's motion model, to find the robot's current location. Though these methods can be very precise, wheel slippage and drift cause the error to accumulate.
    Absolute measurements are more direct and mostly come from sensors that estimate a distance from the phase difference between the transmitted wave and its reflection; GPS also falls into this category. These methods are independent of previous location estimates, so the error does not grow without bound as in the previous case.
    Each sensor has its own limitations. That is why choosing sensors that complement each other for the application, along with good sensor fusion techniques, is a critical decision.
    A simple Google search will show that there are several approaches, such as Kalman filters, particle filters, and Markov localization, for fusing sensor information into an estimate of the robot's position and orientation within the map. These techniques use probabilistic algorithms to deal with problems such as noisy observations and sensor aliasing, and they produce not just an estimate of the robot's pose but also a measure of the uncertainty, or confidence, associated with that estimate. In our project we use the GPS and IMU sensors available in Webots, mounted on the robot, to provide the position and orientation of the robot respectively.
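    As an illustration of what the localization node computes, here is a minimal planar sketch in plain Python (no ROS imports, all numbers made up for the example): the GPS supplies the translation, the IMU yaw supplies the rotation, and together they form the homogeneous transform that would be broadcast for base_link.

    ```python
    import math

    def pose_matrix(x, y, yaw):
        """3x3 homogeneous transform for a planar pose: GPS gives (x, y),
        the IMU gives the yaw angle."""
        c, s = math.cos(yaw), math.sin(yaw)
        return [[c, -s, x],
                [s,  c, y],
                [0,  0, 1]]

    def transform_point(T, px, py):
        """Map a point expressed in the robot frame into the world frame."""
        wx = T[0][0] * px + T[0][1] * py + T[0][2]
        wy = T[1][0] * px + T[1][1] * py + T[1][2]
        return wx, wy

    # GPS says the robot is at (2.0, 1.0); IMU yaw is 90 degrees.
    T = pose_matrix(2.0, 1.0, math.pi / 2)
    # A lidar return 1 m directly ahead of the robot lands at (2.0, 2.0)
    # in the world frame.
    wx, wy = transform_point(T, 1.0, 0.0)
    ```

    In the actual node this transform is broadcast over tf so that gmapping can relate the laser scans to the odometry frame; the matrix form above is just the math behind that broadcast.
    
    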
    During the process of mapping:
    1. It's best to complete the mapping process with the lidar at the same static location with respect to the base_link; of course, you can change this static link later once the map is made.
    2. While performing SLAM, picking up and moving the robot without the wheels actually turning is a big problem: the odometry data remains constant, leading to a mismatch between where the robot actually is and where it thinks it is, and hence to wrong pose estimation or localization.
    As you experiment with different SLAM algorithms, you will realise that these techniques come with many challenges. Even small errors, such as odometry drift, can have large effects on later position estimates. To build a consistent map, the robot has to establish correspondence between present and past positions, which is particularly difficult when closing a loop. Factors such as wheel slippage, robot speed, and the frequency of map updates all influence the map output, and tuning the different parameters affects not just the time taken but also the accuracy of the generated map.
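    Several of the gmapping parameters mentioned in the chapter list can be set from the launch file. This is only a sketch of commonly tuned slam_gmapping parameters with illustrative values; they must be adapted to your robot and environment:

    ```xml
    <!-- Illustrative slam_gmapping parameters; values are not project defaults -->
    <node pkg="gmapping" type="slam_gmapping" name="gmapping">
      <param name="delta" value="0.05"/>               <!-- map resolution (m/cell) -->
      <param name="map_update_interval" value="5.0"/>  <!-- seconds between map updates -->
      <param name="particles" value="30"/>             <!-- particle filter size -->
      <param name="linearUpdate" value="0.2"/>         <!-- process a scan every 0.2 m -->
      <param name="angularUpdate" value="0.25"/>       <!-- ... or every 0.25 rad -->
      <param name="maxUrange" value="10.0"/>           <!-- usable laser range (m) -->
    </node>
    ```

    More particles and a finer delta generally improve accuracy at the cost of CPU time, which is exactly the trade-off described above.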

COMMENTS • 18

  • @mateocortesduarte9936
    @mateocortesduarte9936 4 months ago

    thank you so much, is clear and well explained

  • @yogeshkumbhar581
    @yogeshkumbhar581 1 year ago +2

    Great Video, It takes lot of efforts to make such video, thanks, please keep posting such videos

  • @what_about_mike
    @what_about_mike 2 years ago +1

    Very good description of contents guys

  • @dreamdesign6292
    @dreamdesign6292 2 years ago +1

    Excellent !

  • @chanchalbhartia5829
    @chanchalbhartia5829 2 years ago +1

    👌👌 well explained

  • @dreamdesign6292
    @dreamdesign6292 2 years ago +1

    👍

  • @NurmukhanAimanov
    @NurmukhanAimanov 4 months ago

    How did you implement odom that when moving inside a 3d simulation (gazebo) that doesn't lose localization?
    When I use navigation, after manual movement, and amcl loses location, and I'll have to give with rviz.
    Do you have a group or community on social networks that will help you learn ros1 and ros2?

  • @patrickkusuma8352
    @patrickkusuma8352 2 years ago

    Nice explanation and also good video !
    Do you have any project that using real environment with gmapping method ?

  • @sarasaori5754
    @sarasaori5754 2 years ago +1

    Very good video!
    Is it also recommended to use Gmapping for 3D mapping, like when using UAVs?

    • @coolrobotics
      @coolrobotics 2 years ago +2

      Nope, Gmapping is only used for 2D mapping.

    • @sarasaori5754
      @sarasaori5754 2 years ago +1

      ​@@coolrobotics Oh, I see. Do you have any recommendation of a approach similar to Gmapping, but for 3D mapping, in Webots?

    • @coolrobotics
      @coolrobotics 2 years ago +1

      We have not done 3D mapping but you can try.
      wiki.ros.org/rtabmap_ros

    • @sarasaori5754
      @sarasaori5754 2 years ago

      @@coolrobotics Thank you very much!

  • @doobuu2999
    @doobuu2999 10 months ago

    Could you please provide the code for this project?