Tightly-coupled Fusion of Global Positional Measurements in Optimization-based VIO (IROS 2020)

  • Published 11 Sep 2024
  • Motivated by the goal of achieving robust, drift-free pose estimation in long-term autonomous navigation, in this work we propose a methodology to fuse global positional information with visual and inertial measurements in a tightly-coupled nonlinear-optimization-based estimator. Unlike previous works, which are loosely coupled, the tightly-coupled approach allows exploiting the correlations amongst all the measurements. A sliding window of the most recent system states is estimated by minimizing a cost function that includes visual re-projection errors, relative inertial errors, and global positional residuals. We use IMU preintegration to formulate the inertial residuals and leverage the outcome of this algorithm to efficiently compute the global positional residuals. The experimental results show that the proposed method achieves accurate and globally consistent estimates, with a negligible increase in the computational cost of the optimization. Our method consistently outperforms the loosely-coupled fusion approach. The mean position error is reduced by up to 50% with respect to the loosely-coupled approach in outdoor Unmanned Aerial Vehicle (UAV) flights, where the global position information is given by noisy GPS measurements. To the best of our knowledge, this is the first work in which global positional measurements are tightly fused in an optimization-based visual-inertial odometry algorithm, leveraging the IMU preintegration method to define the global positional factors.
    Reference:
    G. Cioffi, D. Scaramuzza
    Tightly-coupled Fusion of Global Positional Measurements in Optimization-based Visual-Inertial Odometry
    IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, 2020.
    PDF: rpg.ifi.uzh.ch/...
    Our research page on Visual and Inertial Odometry and SLAM:
    rpg.ifi.uzh.ch/...
    Affiliations:
    G. Cioffi and D. Scaramuzza are with the Robotics and Perception Group, Dep. of Informatics, University of Zurich, and Dep. of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland
    rpg.ifi.uzh.ch/
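
A minimal sketch of the sliding-window cost described in the abstract above, written with generic notation (the symbols, weights, and robust-loss choices are illustrative and not taken from the paper):

\[
J(\mathcal{X}) \;=\; \sum_{k}\sum_{j \in \mathcal{L}_k} \big\| \mathbf{r}_{C}^{k,j} \big\|^2_{\Sigma_{C}}
\;+\; \sum_{k} \big\| \mathbf{r}_{I}^{k} \big\|^2_{\Sigma_{I_k}}
\;+\; \sum_{k} \big\| \mathbf{r}_{G}^{k} \big\|^2_{\Sigma_{G}}
\]

where \(\mathbf{r}_{C}^{k,j}\) are the visual re-projection errors of landmark \(j\) observed in frame \(k\), \(\mathbf{r}_{I}^{k}\) the relative inertial errors formed from IMU preintegration between consecutive states, and \(\mathbf{r}_{G}^{k}\) the global positional residuals built by reusing the preintegrated terms.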

COMMENTS • 7

  • @tillkroeger3995
    @tillkroeger3995 3 years ago +1

    It seems the main advantage here is that the global position information is already exploited in the VIO front-end, as opposed to only in the state-graph optimization stage, correct? The state graph should already eliminate all drift up to the most recent node/keyframe. With the present approach, the estimates ahead of the head of the graph would also see lower drift, correct? If so, do you have practical examples of use cases in which the accumulated VIO drift since the last keyframe is truly relevant?

    • @giovannicioffi3970
      @giovannicioffi3970 3 years ago

      Hi, sorry for the late reply. We use the front-end proposed in Forster et al., "SVO: Fast Semi-Direct Monocular Visual Odometry". The global measurements are included in the back-end optimization.
      Including global positions in the VIO algorithm makes it possible to reduce the drift that accumulates over time because of noisy sensor measurements and modelling errors.
      In this work, we show how to solve the sensor fusion problem in a tightly-coupled manner. Tightly coupling sensor measurements is known to give higher accuracy than loosely coupling them (e.g., see Leutenegger et al., "Keyframe-based visual-inertial odometry using nonlinear optimization").
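
A toy, self-contained sketch of the tightly-coupled idea described in this reply: a short window of positions is jointly optimized against relative-motion constraints (a stand-in for preintegrated inertial residuals) and noisy absolute position measurements (a stand-in for GPS). The variable names, noise values, and solver are illustrative assumptions, not the paper's implementation:

import numpy as np
from scipy.optimize import least_squares

# Toy window of 5 states in 3D (ground truth, used only to simulate measurements).
true_pos = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [2.0, 0.5, 0.0],
                     [3.0, 1.0, 0.0],
                     [4.0, 1.5, 0.0]])

rng = np.random.default_rng(0)
REL_STD, GPS_STD = 0.02, 0.3
# Accurate relative displacements between consecutive states
# (a stand-in for preintegrated inertial residual terms).
rel_meas = np.diff(true_pos, axis=0) + rng.normal(0.0, REL_STD, (4, 3))
# Noisy absolute positions, one per state (a stand-in for GPS).
gps_meas = true_pos + rng.normal(0.0, GPS_STD, (5, 3))

def residuals(x):
    pos = x.reshape(5, 3)
    # Relative residuals, whitened by their (small) standard deviation.
    r_rel = ((np.diff(pos, axis=0) - rel_meas) / REL_STD).ravel()
    # Global position residuals, whitened by their (larger) standard deviation.
    r_gps = ((pos - gps_meas) / GPS_STD).ravel()
    # Tightly coupled: both residual types enter one joint least-squares problem.
    return np.concatenate([r_rel, r_gps])

sol = least_squares(residuals, gps_meas.ravel())
print(sol.x.reshape(5, 3))  # smoothed, globally referenced window estimates

Because the relative and absolute residuals are weighted and solved together, the recovered window is both locally smooth and anchored to the global frame, which is the effect the reply attributes to tight coupling.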

  • @alexandrpetrov1110
    @alexandrpetrov1110 3 years ago

    Thanks!

  • @ArthurYCLiu
    @ArthurYCLiu 4 years ago +1

    Isn't VINS-Mono/Fusion already tightly-coupled VIO?

    • @xinkailiang7330
      @xinkailiang7330 4 years ago

      good question

    • @giovannicioffi3970
      @giovannicioffi3970 4 years ago +1

      In VINS-Fusion, global measurements are loosely coupled with the visual and inertial measurements. Namely, the VIO algorithm (i.e., VINS-Mono) provides relative pose updates, which are subsequently aligned to the global frame in a pose-graph optimization containing the global measurements.
      In our method, visual, inertial and global measurements are jointly optimized.
      I hope this answers your question.
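
An illustrative way to write the distinction made in this reply (notation assumed, not taken from either paper): in the loosely-coupled scheme, the VIO output is kept fixed as relative-pose constraints \(\hat{T}_{k,k+1}\), and only a pose graph with global position measurements is solved,

\[
\min_{\{T_k\}} \; \sum_k \big\| \mathrm{Log}\big(\hat{T}_{k,k+1}^{-1}\, T_k^{-1} T_{k+1}\big) \big\|^2
\;+\; \sum_k \big\| \mathbf{p}_k^{\mathrm{gps}} - \mathbf{t}_k \big\|^2
\]

where \(\mathbf{t}_k\) is the translation of \(T_k\). The tightly-coupled scheme instead re-estimates the window states directly from the visual, inertial, and global residuals in one joint optimization, as in the cost sketched below the video description.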

    • @ArthurYCLiu
      @ArthurYCLiu 4 years ago

      @@giovannicioffi3970 Thanks for the reply. So it is a tightly-coupled method across all three sensors, instead of just visual-inertial like VINS-Mono. Got it