SLAMLab
Joined Jul 13, 2018
HARA: A Hierarchical Approach for Robust Rotation Averaging (CVPR 2022)
Paper: arxiv.org/abs/2111.08831
Code: github.com/sunghoon031/HARA
Abstract:
We propose a novel hierarchical approach for multiple rotation averaging, dubbed HARA. Our method incrementally initializes the rotation graph based on a hierarchy of triplet support. The key idea is to build a spanning tree by prioritizing the edges with many strong triplet supports and gradually adding those with weaker and fewer supports. This reduces the risk of adding outliers in the spanning tree. As a result, we obtain a robust initial solution that enables us to filter outliers prior to nonlinear optimization. With minimal modification, our approach can also integrate the knowledge of the number of valid 2D-2D correspondences. We perform extensive evaluations on both synthetic and real datasets, demonstrating state-of-the-art results.
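As a toy illustration of the idea in the abstract — scoring each edge by how many consistent triplets support it, then growing a spanning tree from the best-supported edges down — here is a minimal numpy sketch. The graph, threshold, and helper names are illustrative, not taken from the HARA code:

```python
import numpy as np
from itertools import combinations

def rotz(theta):
    # rotation by theta about the z-axis
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def angle(R):
    # geodesic rotation angle of R, in radians
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

# Toy problem: 4 cameras with known absolute rotations (z-axis only)
gt = [rotz(t) for t in (0.0, 0.3, 0.7, 1.2)]
n = len(gt)

# Relative measurements R_ij = R_j R_i^T for every pair; corrupt one edge
edges = {(i, j): gt[j] @ gt[i].T for i, j in combinations(range(n), 2)}
edges[(0, 3)] = rotz(2.5)  # simulated outlier

def rel(a, b):
    # look up R_ab regardless of the direction the edge was stored in
    return edges[(a, b)] if (a, b) in edges else edges[(b, a)].T

def support(i, j, thresh=0.1):
    # number of third views k whose loop i -> k -> j agrees with edge (i, j)
    return sum(angle(rel(i, j).T @ rel(k, j) @ rel(i, k)) < thresh
               for k in range(n) if k not in (i, j))

# Greedy spanning tree, strongest-supported edges first (union-find)
parent = list(range(n))
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

tree = []
for (i, j) in sorted(edges, key=lambda e: -support(*e)):
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        tree.append((i, j))
# The corrupted edge (0, 3) has no triplet support and never enters the tree
```

The corrupted edge fails every loop-closure check, so it is ranked last and the tree is completed by well-supported edges first, which is the outlier-avoidance behavior the abstract describes.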
764 views
Videos
Rotation-Only Bundle Adjustment (CVPR 2021)
1.4K views · 3 years ago
Paper: arxiv.org/abs/2011.11724 Code: github.com/sunghoon031/ROBA Abstract: We propose a novel method for estimating the global rotations of the cameras independently of their positions and the scene structure. When two calibrated cameras observe five or more of the same points, their relative rotation can be recovered independently of the translation. We extend this idea to multiple views, the...
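The fact this abstract builds on — that two calibrated views sharing five or more points determine their relative rotation independently of the translation — can be illustrated with a toy essential-matrix decomposition. This is a generic numpy sketch of the standard SVD factorization, not the ROBA code; the pose values are synthetic:

```python
import numpy as np

def rotz(theta):
    # rotation by theta about the z-axis
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Ground-truth relative pose between two calibrated views
R_true = rotz(0.4)
t = np.array([1.0, 0.2, 0.1])  # translation (its scale is unobservable)
tx = np.array([[0.0, -t[2], t[1]],
               [t[2], 0.0, -t[0]],
               [-t[1], t[0], 0.0]])
E = tx @ R_true  # essential matrix, E = [t]_x R

# Standard decomposition: with det(U) = det(V) = +1, the rotation is
# U W V^T or U W^T V^T (the cheirality check picking one is omitted)
U, _, Vt = np.linalg.svd(E)
if np.linalg.det(U) < 0:
    U = -U
if np.linalg.det(Vt) < 0:
    Vt = -Vt
W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
candidates = [U @ W @ Vt, U @ W.T @ Vt]

# One candidate matches R_true; t played no role in the recovery
err = min(np.linalg.norm(R - R_true) for R in candidates)
```

The translation only fixes which of the two candidates is physically valid (via the cheirality check on triangulated points); the rotation itself falls out of the SVD alone.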
Information-Driven Direct RGB-D Odometry - CVPR 2020 (oral)
1.5K views · 4 years ago
Results for the paper: Alejandro Fontán, Javier Civera, Rudolph Triebel Information-Driven Direct RGB-D Odometry CVPR 2020 (oral) Paper draft: vision.in.tum.de/_media/spezial/bib/fontan20information.pdf Project web: rmc.dlr.de/rm/en/staff/alejandro.fontanvillacampa/IDNav Abstract: This paper presents an information-theoretic approach to point selection for direct RGB-D odometry. The aim is to s...
Mapillary Street-Level Sequences: A Dataset for Lifelong Place Recognition - CVPR 2020 (oral)
209 views · 4 years ago
5-minute oral presentation for the CVPR 2020 paper: Frederik Warburg, Søren Hauberg, Manuel Lopez-Antequera, Pau Gargallo, Yubin Kuang, Javier Civera Mapillary Street-Level Sequences: A Dataset for Lifelong Place Recognition CVPR 2020 (oral) Paper draft: research.mapillary.com/img/publications/CVPR20c.pdf Dataset: www.mapillary.com/dataset/places Abstract: Lifelong place recognition is an essen...
Mapillary Street-Level Sequences: A Dataset for Lifelong Place Recognition - CVPR 2020 (oral)
237 views · 4 years ago
1-minute spotlight presentation for the CVPR 2020 paper: Frederik Warburg, Søren Hauberg, Manuel Lopez-Antequera, Pau Gargallo, Yubin Kuang, Javier Civera Mapillary Street-Level Sequences: A Dataset for Lifelong Place Recognition CVPR 2020 (oral) Paper draft: research.mapillary.com/img/publications/CVPR20c.pdf Dataset: www.mapillary.com/dataset/places Abstract: Lifelong place recognition is an ...
Condition-Invariant Multi-View Place Recognition (Short Video -- No Pauses)
149 views · 5 years ago
Experimental results on the Partitioned Nordland dataset. We show the query sequence from one season, and the places retrieved by our method in the other three seasons. Paper title: Condition-Invariant Multi-View Place Recognition. Authors: Jose M. Facil, Daniel Olid, Luis Montesano and Javier Civera. Project web: webdiis.unizar.es/~jmfacil/cimvpr/ Paper: arxiv.org/pdf/1902.09516.pdf Status: Under review
Condition-Invariant Multi-View Place Recognition (Short Video)
159 views · 5 years ago
Experimental results on the Partitioned Nordland dataset. We show the query sequence from one season, and the places retrieved by our method in the other three seasons. Paper title: Condition-Invariant Multi-View Place Recognition. Authors: Jose M. Facil, Daniel Olid, Luis Montesano and Javier Civera. Project web: webdiis.unizar.es/~jmfacil/cimvpr/ Paper: arxiv.org/pdf/1902.09516.pdf Status: Under review
Condition-Invariant Multi-View Place Recognition (Extended Video)
221 views · 5 years ago
Experimental results on the Partitioned Nordland dataset. We show the query sequence from one season, and the places retrieved by our method in the other three seasons. Paper title: Condition-Invariant Multi-View Place Recognition. Authors: Jose M. Facil, Daniel Olid, Luis Montesano and Javier Civera. Project web: webdiis.unizar.es/~jmfacil/cimvpr/ Paper: arxiv.org/pdf/1902.09516.pdf Status: Under review
EmptyCities
710 views · 6 years ago
In this work we present an end-to-end deep learning framework to turn images that show dynamic content, such as vehicles or pedestrians, into realistic static frames. These generated images can be used for applications such as virtual reality or vision-based robot localization purposes. Code and more information about the project can be found at bertabescos.github.io/EmptyCities.
Loosely-Coupled Semi-Direct Monocular SLAM
7K views · 6 years ago
[1] SOURCE CODE AVAILABLE AT: github.com/sunghoon031/LCSD_SLAM.git [2] PAPER AVAILABLE AT: arxiv.org/abs/1807.10073 [3] PAPER ABSTRACT: We propose a novel semi-direct approach for monocular simultaneous localization and mapping (SLAM) that combines the complementary strengths of direct and feature-based methods. The proposed pipeline loosely couples direct odometry and feature-based SLAM to per...
DynaSLAM
6K views · 6 years ago
Published as: Berta Bescos, José M Fácil, Javier Civera, José Neira, “DynaSLAM: Tracking, Mapping and Inpainting in Dynamic Scenes”, RA-L 2018 Open-source code is available on github.com/bertabescos/DynaSLAM. More information about the project can be found at bertabescos.github.io/DynaSLAM. The assumption of scene rigidity is typical in SLAM algorithms. Such a strong assumption limits the use o...
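As a toy illustration of the core idea — discarding features that fall on a-priori dynamic objects before tracking and mapping — here is a minimal numpy sketch. The mask and keypoints are synthetic stand-ins; DynaSLAM itself combines Mask R-CNN segmentation with multi-view geometry on top of ORB-SLAM2:

```python
import numpy as np

# Synthetic inputs: a binary mask of a-priori dynamic pixels (e.g. people,
# vehicles, as produced by a segmentation network) and detected keypoints
h, w = 480, 640
dynamic_mask = np.zeros((h, w), dtype=bool)
dynamic_mask[100:300, 200:400] = True  # say, a detected pedestrian

# Keypoints as (row, col) pixel coordinates from any feature detector
keypoints = np.array([[150, 250],   # falls on the dynamic region
                      [50, 50],     # static background
                      [400, 600]])  # static background

# Keep only keypoints on static scene content before tracking/mapping
static = ~dynamic_mask[keypoints[:, 0], keypoints[:, 1]]
tracked = keypoints[static]
# tracked now contains only the two background keypoints
```

Filtering at the keypoint level like this is what relaxes the scene-rigidity assumption: moving objects simply contribute no correspondences to pose estimation.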
Since we always need a final bundle adjustment, does it make any difference?
🏆👍Nice!👍🏆
Hi! A couple of questions: 1) What monocular camera did you use? 2) Have you encountered drift on large maps? 3) Can it perform localization in real time? Thanks!
Nice, can't wait to try it. Does it also work on RGBD?
Which dataset is this? And was it run in RGB-D or monocular mode?
good!
Very impressive !