SAIR Lab
United States
Joined Jan 5, 2023
Spatial AI & Robotics Lab
Roboranking: A Robotics Faculty Hub and University Ranking System
Video Description
Welcome to Roboranking (roboranking.org), a Robotics Faculty Hub & Ranking System.
Roboranking is a ranking system for the robotics community that helps students find matching universities and faculty members. It also gives faculty greater visibility for collaboration opportunities.
Visit our website: roboranking.org to check it out!
Music track: Option by Aylex
Source: freetouse.com/music
Royalty Free Music for Video (Safe)
Views: 375
Videos
iKap: Kinematics-aware Planning with Imperative Learning
273 views · 1 month ago
Website: sairlab.org/iKap/ Preprint: arxiv.org/abs/2412.09496 We propose iKap, a novel vision-to-planning system that integrates the robot's kinematic model directly into the learning pipeline. iKap employs a self-supervised learning approach and incorporates the state transition model within a differentiable bi-level optimization framework. This integration ensures the network learns collision...
[IROS'24] PhysORD: A Neuro-Symbolic Approach of Physics-infused Motion Prediction in Offroad Driving
116 views · 3 months ago
Title: PhysORD: A Neuro-Symbolic Approach of Physics-infused Motion Prediction in Offroad Driving Authors: Zhipeng Zhao, Bowen Li, Yi Du, Taimeng Fu, Chen Wang. Paper: arxiv.org/abs/2404.01596 Code: github.com/sair-lab/PhysORD Website: sairlab.org/physord/ Abstract: We present PhysORD, a neural-symbolic approach integrating the conservation law, i.e., the Euler-Lagrange equation, into data-driv...
[IROS'24] iMTSP: Solving Min-Max Multiple Traveling Salesman Problem with Imperative Learning
84 views · 3 months ago
Title: iMTSP: Solving Min-Max Multiple Traveling Salesman Problem with Imperative Learning Authors: Yifan Guo, Zhongqiang Ren, Chen Wang. Website: sairlab.org/iMTSP/ Code: github.com/sair-lab/iMTSP Paper: arxiv.org/abs/2405.00285 Abstract: This paper considers a Min-Max Multiple Traveling Salesman Problem (MTSP), where the goal is to find a set of tours, one for each agent, to collectively visi...
iWalker: Imperative Visual Planning for Walking Humanoid Robot
364 views · 4 months ago
Title: iWalker: Imperative Visual Planning for Walking Humanoid Robot Authors: Xiao Lin, Yuhao Huang, Taimeng Fu, Xiaobin Xiong, Chen Wang. Project Page: sairlab.org/iwalker/ Paper: arxiv.org/abs/2409.18361 Abstract: Humanoid robots, with the potential to perform a broad range of tasks in environments designed for humans, have been deemed crucial for the basis of general AI agents. When talking...
[TRO 2025] AirSLAM: An Efficient and Illumination-Robust Point-Line Visual SLAM System
1.2K views · 5 months ago
Title: AirSLAM: An Efficient and Illumination-Robust Point-Line Visual SLAM System Authors: Kuan Xu, Yuefan Hao, Shenghai Yuan, Chen Wang, Lihua Xie. Code: github.com/sair-lab/AirSLAM Paper: arxiv.org/abs/2408.03520 Abstract: We present an efficient visual SLAM system designed to tackle both short-term and long-term illumination challenges. Our system adopts a hybrid approach that combines deep...
Non-iterative SLAM for Warehouse Robots Using Ground Textures
856 views · 1 year ago
A novel visual SLAM method for warehouse robots with a single downward-facing camera using ground textures. Code: github.com/sair-lab/ni-slam
[ICCV'23] Michael Kaess: Learning for Sonar and Radar SLAM
828 views · 1 year ago
Michael Kaess is an Associate Professor in the Robotics Institute at Carnegie Mellon University. His research focuses on probabilistic methods for robot perception, in particular efficient algorithms for navigation, mapping and localization. Prior to joining CMU, he was a Research Scientist and a Postdoctoral Associate at MIT. He received the Ph.D. and M.S. degrees in computer science from Geor...
[ICCV'23] Ayoung Kim: Advancing SLAM with Learning
655 views · 1 year ago
This seminar will discuss learning high-level features and combining them with the conventional SLAM pipeline. This talk will introduce three innovative approaches. Firstly, we shall discuss the strategic use of cameras to discern high-level features, providing an alternative to the more conventional point-based methodologies, which may not be as productive. Secondly, we will deliberate on inte...
[ICCV'23] Letizia Gionfrida: Advancing the Role of SLAM Based Active Mapping in Wearable Robotics
132 views · 1 year ago
Envisioning a future where people with disabilities can lead more independent lives is increasingly feasible with wearable robotics. Currently, the prevailing method for understanding user intentions relies on inertial sensors and surface electromyography, which provide limited data and can make user control seem complex. Harnessing computer vision offers rich and immediate information about th...
[ICCV'23] Shubham Tulsiani: Probabilistic Pose Prediction
352 views · 1 year ago
Shubham Tulsiani is an Assistant Professor at Carnegie Mellon University in the Robotics Institute, where he is a part of the Computer Vision group. He is interested in building perception systems that can infer the spatial and physical structure of the world they observe. Slides: Download from sairlab.org/iccv23/
[ICCV'23] Tim Barfoot: Learning Perception Components for Long Term Path Following
370 views · 1 year ago
Prof. Timothy Barfoot (University of Toronto Robotics Institute) works in the area of autonomy for mobile robots targeting a variety of applications. Tim is an IEEE Fellow and has received two paper awards at the IEEE International Conference on Robotics and Automation (ICRA 2010, 2021). He sits on the Editorial Boards of the International Journal of Robotics Research (IJRR) and Field Robotics ...
[ICCV'23] Daniel Cremers: From Monocular SLAM to 3D Dynamic Scene Understanding
719 views · 1 year ago
Prof. Daniel Cremers' research interests cover computer vision, machine learning and deep networks, mathematical image analysis, and related areas. He holds the Chair of Computer Vision & Artificial Intelligence at TUM. On March 1, 2016, Prof. Cremers received the Gottfried Wilhelm Leibniz Award, the most prestigious award in German academia. According to Google Scholar, Prof. Cremers has an h-index of 117 and his ...
[ICCV'23] Andrew Davison: Distributed Estimation and Learning for Robotics
279 views · 1 year ago
Andrew Davison is Professor of Robot Vision in the Department of Computing at Imperial College London, where he leads the Dyson Robotics Laboratory, which develops vision and AI technology for next-generation home robotics. He also leads the Robot Vision Research Group, though most of his activity is now within the Dyson Lab. Slides: Download from sairlab...
[ICCV'23] Chen Wang: Imperative SLAM and PyPose Library for Robot Learning
542 views · 1 year ago
[ICCV'23] Marc Pollefeys: Visual Localization and Mapping From Classical to Modern
1.9K views · 1 year ago
[ICCV'23] Ji Zhang: From Lidar SLAM to Full-scale Autonomy and Beyond
650 views · 1 year ago
[ICCV'23] Luca Carlone: From SLAM to Spatial Perception
749 views · 1 year ago
[ICCV'23] Maurice Fallon: Robust Multi-Sensor SLAM with Learning and Sensor Fusion
893 views · 1 year ago
[Live Demo] AirLoc: Object Based Indoor Relocalization
90 views · 1 year ago
[RA-L 2023] FDCT: A Fast Depth Completion Network for Transparent Objects
98 views · 1 year ago
Introducing PyPose: A Library for Differentiable Robotics
4.6K views · 1 year ago
[ECCV 2022] AirDet: Few-Shot Detection without Fine-tuning for Autonomous Exploration
73 views · 1 year ago
[IROS 2023] AirVO: An Illumination-Robust Point-Line Visual Odometry
450 views · 1 year ago
[IROS 2023] AirLine: Efficient Learnable Line Detection with Local Edge Voting
614 views · 1 year ago
[ECCV 2020 Oral] Visual Memorability for Robotic Interestingness via Unsupervised Online Learning
37 views · 2 years ago
[ICRA 2020] Intensity Scan Context: Coding Intensity and Geometry Relations for Loop Closure
28 views · 2 years ago