SLAM++: Simultaneous Localisation and Mapping at the Level of Objects

  • Published 11 Sep 2024
  • We present the major advantages of a new 'object oriented' 3D SLAM paradigm, which takes full advantage in the loop of prior knowledge that many scenes consist of repeated, domain-specific objects and structures. As a hand-held depth camera browses a cluttered scene, real-time 3D object recognition and tracking provides 6DoF camera-object constraints which feed into an explicit graph of objects, continually refined by efficient pose-graph optimisation. This offers the descriptive and predictive power of SLAM systems which perform dense surface reconstruction, but with a huge representation compression. The object graph enables predictions for accurate ICP-based camera to model tracking at each live frame, and efficient active search for new objects in currently undescribed image regions. We demonstrate real-time incremental SLAM in large, cluttered environments, including loop closure, relocalisation and the detection of moved objects, and of course the generation of an object level scene description with the potential to enable interaction.
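
    The description above outlines the core machinery: camera and object poses are the nodes of a graph, each real-time detection contributes a 6DoF camera-object constraint, and pose-graph optimisation refines everything. The sketch below illustrates that idea only; it is not the authors' implementation. For brevity it assumes SE(2) poses (x, y, theta) instead of SE(3), hand-made detection measurements (the ICP tracking and object recognition front end are taken as given), and SciPy's generic least-squares solver in place of an incremental pose-graph optimiser.

    # Minimal object-level pose-graph sketch (assumptions noted above).
    import numpy as np
    from scipy.optimize import least_squares

    def wrap(a):
        """Wrap an angle to (-pi, pi]."""
        return (a + np.pi) % (2 * np.pi) - np.pi

    def object_in_camera(cam, obj):
        """Predicted pose of an object expressed in a camera's frame (SE(2))."""
        cx, cy, ct = cam
        ox, oy, ot = obj
        c, s = np.cos(ct), np.sin(ct)
        dx, dy = ox - cx, oy - cy
        # Rotate the world-frame offset into the camera frame.
        return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(ot - ct)])

    n_cams, n_objs = 2, 2
    # (camera index, object index, measured object pose in that camera's frame)
    detections = [
        (0, 0, np.array([2.00, 0.50, 0.30])),
        (0, 1, np.array([1.50, -1.00, -0.50])),
        (1, 0, np.array([1.08, 0.29, 0.10])),
        (1, 1, np.array([0.29, -1.08, -0.70])),
    ]

    def unpack(x):
        cams = x[:3 * n_cams].reshape(n_cams, 3)
        objs = x[3 * n_cams:].reshape(n_objs, 3)
        return cams, objs

    def residuals(x):
        cams, objs = unpack(x)
        r = [cams[0]]  # gauge prior: pin the first camera pose at the origin
        for ci, oi, z in detections:
            e = object_in_camera(cams[ci], objs[oi]) - z
            e[2] = wrap(e[2])
            r.append(e)
        return np.concatenate(r)

    x0 = np.zeros(3 * (n_cams + n_objs))  # crude all-zero initialisation
    sol = least_squares(residuals, x0)
    cams, objs = unpack(sol.x)
    print("refined camera poses:\n", np.round(cams, 3))
    print("refined object poses:\n", np.round(objs, 3))

    Because every node is a single object or camera pose rather than a dense surface patch, the graph stays tiny even for large scenes, which is the representation compression the description refers to.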

COMMENTS • 15

  • @GameEssentialsTutorials · 6 years ago

This is one of the best implementations of SLAM and AR I've ever seen. Now pair this with object recognition AI (machine learning). Also, if another sensor like Leap Motion were detecting your hands, you could push the virtual chair and the real chair at the same time.

  • @genkidama7385 · 11 years ago +1

I dreamed of a system like this a year ago. Instead of building a point cloud, recognizing objects placed in the scene from a database of furniture is the best solution.

  • @myperspective5091 · 7 years ago +1

This is one of my top 5 favorite videos. I had the same idea. I called it assumption mapping because it assumes what the rest of an object looks like based on a catalog of objects. Did you do the same thing with the walls or stairs too?

  • @roidroid · 11 years ago

    Yes! Finally! :D

  • @dizeng6632 · 6 years ago

    Looks great, but how do you handle different models of objects?

  • @agryson · 11 years ago

    Fantastic, has this system been implemented on any autonomous robots yet?

  • @guyboy625 · 10 years ago +1

    Combine this with DTAM to create an object database on the fly and recognize repeated objects :)

  • @iVEvangelist · 11 years ago

Amazing!

  • @myperspective5091 · 7 years ago

    Nice.
    Now add an individual object identification location tag to find specific items in that group.

    • @myperspective5091 · 7 years ago

      Then add a cluster divide program to shuffle around the objects to move them out of the way to separate and single out objects.

  • @guyboy625 · 10 years ago +2

    Something that bugs me a little is that almost all of these kinds of demos have terrible lighting on the inserted objects.

  • @getamanmathur · 8 years ago

    Is an implementation of this available?

  • @ErickBraganza · 9 years ago +5

    Ubuntu at 0:10

    • @Mehdital89 · 9 years ago +3

      +Erick Braganza and?