DSO: Direct Sparse Odometry

  • Published 11 Sep 2024
  • DSO: Direct Sparse Odometry
    Jakob Engel, Vladlen Koltun, Daniel Cremers
    July 2016
    DSO Paper: arxiv.org/abs/1...
    DSO Website: vision.in.tum.d...
    DSO Code: github.com/Jak... (released November 2016)
    Dataset Paper: arxiv.org/abs/...
    Dataset Website: vision.in.tum....
    Dataset Code: github.com/tum...
    We propose a novel direct sparse visual odometry formulation. It combines a fully direct probabilistic model (minimizing a photometric error) with consistent, joint optimization of all model parameters, including geometry -- represented as inverse depth in a reference frame -- and camera motion. This is achieved in real time by omitting the smoothness prior used in other direct methods and instead sampling pixels evenly throughout the images. Since our method does not depend on keypoint detectors or descriptors, it can naturally sample pixels from across all image regions that have intensity gradient, including edges or smooth intensity variations on mostly white walls. The proposed model integrates a full photometric calibration, accounting for exposure time, lens vignetting, and non-linear response functions. We thoroughly evaluate our method on three different datasets comprising several hours of video. The experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, both in terms of tracking accuracy and robustness.
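
As a rough illustration of the photometric error and inverse-depth parametrization mentioned in the abstract, here is a minimal NumPy sketch of the residual for a single pixel. This is a simplification, not DSO's actual code: the real formulation evaluates a small pixel pattern with sub-pixel interpolation, includes per-frame affine brightness parameters and vignetting correction, and wraps the residual in a Huber norm.

```python
import numpy as np

def photometric_residual(I_i, I_j, p, inv_depth, K, R, t,
                         exposure_i=1.0, exposure_j=1.0):
    """Photometric error of one pixel p = (u, v), hosted in frame i with an
    estimated inverse depth and reprojected into frame j.

    I_i, I_j   : grayscale images as 2-D float arrays
    K          : 3x3 camera intrinsics
    R, t       : relative pose mapping frame-i coordinates into frame j
    exposure_* : exposure times from the photometric calibration
    """
    u, v = p
    # Back-project the pixel into 3-D using its inverse depth (the geometry
    # parametrization described in the abstract: one scalar per point,
    # expressed in the host frame).
    X_i = np.linalg.inv(K) @ np.array([u, v, 1.0]) / inv_depth
    # Transform into frame j and project back onto its image plane.
    X_j = R @ X_i + t
    u_j, v_j = (K @ (X_j / X_j[2]))[:2]
    # Nearest-neighbour lookup for brevity; DSO interpolates sub-pixel values.
    obs_j = I_j[int(round(v_j)), int(round(u_j))]
    obs_i = I_i[int(round(v)), int(round(u))]
    # Compensate the brightness difference caused by differing exposure times,
    # then compare the two observed intensities.
    return obs_j - (exposure_j / exposure_i) * obs_i
```

In DSO this kind of residual is summed over all active points in a sliding window of keyframes and minimized jointly with the camera poses, which is what makes the method both direct and sparse.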

COMMENTS • 33

  • @NilsStinnesbeck
    @NilsStinnesbeck 8 years ago +2

    Am joyfully waiting for the source code. We are testing LSD-SLAM right now in our University. Keep up the good work!

    • @lubomirdinchev334
      @lubomirdinchev334 8 years ago

      I'm not sure they'll be publishing it. However, using the paper we could put together an open-source implementation ourselves.

    • @NilsStinnesbeck
      @NilsStinnesbeck 8 years ago +2

      Well, they say they will (see: vision.in.tum.de/research/vslam/dso?redirect=1, "We will publish the code for DSO as open-source under a GPLv3 license. The link will be published here, as soon as it is ready.")

    • @lubomirdinchev334
      @lubomirdinchev334 8 years ago

      Well, thank you very much, kind gentleman. :)

  • @MrBrandyLova
    @MrBrandyLova 7 years ago +1

    The robustness and accuracy without even doing loop closures are impressive! How practical would it be to keep a pose graph similar to LSD-SLAM and re-activate nearby keyframes in order to do loop closure? Some applications would happily trade more processing time for globally consistent maps.

  • @mathiasgehrig1286
    @mathiasgehrig1286 8 years ago +6

    Incredible work! It's probably tricky to come up with a loop closure detector, though ;)

    • @chutsu_io
      @chutsu_io 7 years ago +1

      If the algorithm introduces such a small drift over such a long period of time, would you actually need loop closure?

    • @mathiasgehrig1286
      @mathiasgehrig1286 7 years ago +2

      Yes, for several reasons:
      1. If you want to run it in real-time, the optimization window cannot be too large and you will accumulate more drift.
      2. There will be cases (tricky data sets, obviously not shown here) for which drift will be larger than expected.
      3. Maybe you want to use the map for localization for which you need a consistent map.
      One solution could be using a feature detection/extraction algorithm that runs in parallel. Then you could compute a relative constraint once a loop is found. I think LSD-SLAM does that. There are more elegant solutions for sure but it seems that research has shifted away from loop-closure detection towards localization in a given map. In that case, DSO is very interesting because it provides a semi-dense map that is partially invariant to appearance change.
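
As a sketch of the relative-constraint idea described in the reply above, here is a minimal keyframe pose graph with loop-closure edges. All names (PoseGraph, add_loop_closure, and so on) are hypothetical and not part of DSO or LSD-SLAM; a real backend would hand such a graph to an optimizer like g2o, Ceres, or GTSAM to redistribute the accumulated drift once a loop is found.

```python
import numpy as np

class PoseGraph:
    """Keyframe pose graph in the spirit of the loop-closure idea above.

    Poses are 4x4 SE(3) matrices (world-from-keyframe). Every edge stores a
    measured relative transform between two keyframes; a loop closure is just
    one more relative edge, found e.g. by a feature-based place recognizer
    running in parallel with the direct odometry.
    """

    def __init__(self):
        self.poses = {}   # keyframe id -> current 4x4 pose estimate
        self.edges = []   # (id_a, id_b, measured transform a_from_b)

    def add_keyframe(self, kf_id, world_from_kf):
        self.poses[kf_id] = world_from_kf

    def add_odometry_edge(self, id_a, id_b, a_from_b):
        # Sequential constraint coming from the visual-odometry front end.
        self.edges.append((id_a, id_b, a_from_b))

    def add_loop_closure(self, id_a, id_b, a_from_b):
        # Structurally identical to an odometry edge, but connecting
        # temporally distant keyframes, which is what removes drift.
        self.edges.append((id_a, id_b, a_from_b))

    def edge_error(self, id_a, id_b, a_from_b):
        # Discrepancy between the relative pose predicted by the current
        # estimates and the measured one. A pose-graph optimizer minimizes
        # the (robustly weighted) log-map norms of these errors over all edges.
        predicted = np.linalg.inv(self.poses[id_a]) @ self.poses[id_b]
        return np.linalg.inv(a_from_b) @ predicted


if __name__ == "__main__":
    graph = PoseGraph()
    graph.add_keyframe(0, np.eye(4))
    # ... keyframes and odometry edges are added as the camera moves ...
    # When place recognition matches keyframe 250 back to keyframe 3:
    # graph.add_loop_closure(3, 250, measured_3_from_250)
```

DSO itself is pure odometry and maintains no such graph; the discussion above only describes how one could be added on top of it.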

  • @ArthurYCLiu
    @ArthurYCLiu 8 years ago +5

    That is so amazing.

  • @hyun91015
    @hyun91015 8 years ago +1

    Wow... How marvelous it is!

  • @jumpjack2
    @jumpjack2 6 years ago +1

    Is there any free public software implementing one of these algorithms?

  • @zjulion
    @zjulion 7 years ago +1

    Impressive performance for outdoor scenes.

  • @RayRift
    @RayRift 1 year ago

    Can we do this on a Meta Quest Pro headset?

  • @erikm9768
    @erikm9768 7 years ago +6

    Is this using specialized hardware? The precision is insane!

  • @moslem6767
    @moslem6767 2 years ago

    What device was used to record the video, and how is the data transferred to the CPU?

  • @pelopelo1982
    @pelopelo1982 7 years ago +1

    That's awesome! I would like to dig into it ASAP

  • @ricebastard
    @ricebastard 7 years ago +1

    Wow this is awesome.
    I'll be studying localization and SLAM in Udacity's Self Driving Car Nanodegree program over the next couple months.
    I think I'll try implementing this method alongside that.

  • @MrHaithink
    @MrHaithink 8 years ago +2

    Good job!

  • @CharlesVanNoland
    @CharlesVanNoland 6 years ago +1

    Magic.

  • @changkaizhao
    @changkaizhao 7 years ago +1

    But the 3D points in the first and last frames do not match. How can that be solved?

  • @user-yn1wu3rk1r
    @user-yn1wu3rk1r 8 years ago +1

    How does DSO compare to Semi-Direct Visual Odometry (SVO)?

    • @chrisanderson1513
      @chrisanderson1513 8 years ago +2

      They give a comparison of indirect, direct, sparse, and dense methods within the first two pages of the paper.

  • @among-us-99999
    @among-us-99999 5 years ago +3

    I wish I could get ANYTHING like this to work on either a Raspberry Pi or a Windows 10 computer.
    (Spoiler alert: I failed every time.)

  • @patrickrowsome5914
    @patrickrowsome5914 8 years ago +3

    Any code or paper?

  • @j0ule
    @j0ule 7 years ago +1

    Color me impressed

  • @sunliye8268
    @sunliye8268 8 years ago +1

    Super!