Kenji Koide
Joined Oct 28, 2019
Videos
GLIM on Jetson Orin Nano
1.2K views · 2 months ago
Jetson Orin Nano (15W) Configuration: OdometryEstimationGPU, SubMapping (GPU), GlobalMapping (GPU). github.com/koide3/glim
Bundle Adjustment Factor [gtsam_points]
544 views · 3 months ago
(Coming soon) github.com/koide3/gtsam_points k_koide3
Continuous Time ICP Factor [gtsam_points]
395 views · 3 months ago
(Coming soon) github.com/koide3/gtsam_points k_koide3
SE3 BSpline Interpolation [gtsam_points]
224 views · 3 months ago
(Coming soon) github.com/koide3/gtsam_points k_koide3
Incremental VoxelMap Update and Normal Estimation [gtsam_points]
504 views · 3 months ago
(Coming soon) github.com/koide3/gtsam_points k_koide3
Colored ICP Factor [gtsam_points]
232 views · 3 months ago
(Coming soon) github.com/koide3/gtsam_points k_koide3
[GLIM] Visual-LiDAR-IMU SLAM on a Drone (NTU-VIRAL)
1.7K views · 3 months ago
(Coming soon) github.com/koide3/glim k_koide3
[GLIM] Flatwall experiment with Livox Avia (Complete point cloud degeneration)
613 views · 3 months ago
(Coming soon) github.com/koide3/glim k_koide3
[GLIM] Mapping with Azure Kinect
996 views · 3 months ago
(Coming soon) github.com/koide3/glim k_koide3
[GLIM] Mapping with various range sensors (Same parameter setting for all)
2.1K views · 3 months ago
The same parameters are used for all the sensors. (Coming soon) github.com/koide3/glim k_koide3
[GLIM] Outdoor driving test with Livox MID360 (Processing speed: x14 of real-time)
1.6K views · 3 months ago
[GLIM] Map correction with offline_viewer
862 views · 3 months ago
MegaParticles: 6-DoF Monte Carlo Localization (Closeup View)
709 views · 5 months ago
[ICRA2024] MegaParticles: 6-DoF Monte Carlo Localization with One Million Particles
2K views · 5 months ago
Scan matching speed comparison (small_gicp vs Open3D)
730 views · 6 months ago
GLIL robustness test (dynamic objects and motion)
336 views · 7 months ago
Quadruped robot (MPC-based planning) [Student Project]
258 views · 7 months ago
Quadruped robot (Gesture-input) [Student Project]
92 views · 7 months ago
Quadruped robot (Gesture-input & MPC-based planning) [Student Project]
322 views · 7 months ago
MPC-based Path Planning [Student project]
326 views · 8 months ago
😂 This is funny
Awesome!!😀
Is your new paper published already?
Impressive work! But the result is with an A100 GPU, which is not very portable. I'm wondering, have you ever run it on a less powerful GPU?
Yes, it is exactly what we are now working on! I expect it will be feasible on a Jetson.
@@kenjikoide6076 That would be super cool. Really looking forward to it!
Great video :) Is this localization module based on hdl_global_localization or 3D-BBS?
It's based on hdl_global_localization.
The localization result is very smooth! Looking forward to code updates for this part soon.
Thanks! This feature is already available in GLIM!
It's great! I'm following your work, thanks for open-sourcing it! Recently I wanted to test it like in this video, but I can't find the corresponding demo bag or the MID360 config file. Will you upload them later?
Thanks for your comment. I think I will upload a demo bag file for MID360 later.
Kenji is GOAT!
Nice job!
Yes you feel it.
You can feel it too.
Does GLIM contain a localization function? If not, is there any schedule for sharing the localization part? Thanks in advance.
Can it run on OrangePi5?
I think yes. The configuration shown in the video description uses only ~40% of the CPU resources of a Khadas VIM3.
Thanks, Kenji!
Where is the dataset?
Super impressive. Great work! Thanks for open sourcing it!
Awesome! Great work!
good job
OMG it's released!
😍
❤
❤
Always a big fan of your work. Is this in preparation for a new publication?
Thanks! Yes, it corresponds to our new paper that is going to come out in a few weeks.
Kenji, is this rviz2 on the right side? Your visualization is so cool
Thanks. The left is the usual rviz2 and the right is our original viewer (github.com/koide3/iridescence).
this is sick
Super excited for the release :)
Great work. I'm guessing you are fusing the IMU's position estimate with SfM data from the camera. How much drift are you getting as of now?
It fuses all visual, LiDAR, and IMU constraints on a unified factor graph. Because this was an easy setup for the LiDAR, there was almost no drift at all.
@@kenjikoide6076 If the LiDAR is also participating in motion estimation, it would surely be very accurate. I would bet that if you used only the LiDAR data, you would probably get the same estimated motion. I was thinking the LiDAR was used as ground truth.
@@shivavarunadicherla Yes, visual constraints brought only a minor accuracy gain on this dataset indeed. This is just a demonstration. What we are truly aiming for is overcoming situations where point clouds become completely degenerate (e.g., tunnels), and we've confirmed that visual constraints greatly improve reliability in such situations.
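The degeneracy point in the reply above can be illustrated with a toy least-squares problem (a linear stand-in for a factor graph; all names and numbers here are illustrative, not GLIM code): a tunnel-like scene where LiDAR residuals constrain only one axis, so adding even a single visual constraint on the other axis makes the pose observable again.

```python
import numpy as np

# Hypothetical 2D pose [x, y]; not from any real dataset.
true_pose = np.array([1.0, 2.0])

# LiDAR factor: in a tunnel/flat-wall scene it observes only x
# (completely degenerate along y).
H_lidar = np.array([[1.0, 0.0]])
z_lidar = H_lidar @ true_pose

# Visual factor: observes the longitudinal axis y.
H_vis = np.array([[0.0, 1.0]])
z_vis = H_vis @ true_pose

# LiDAR-only normal equations are singular (rank 1 of 2).
rank_lidar_only = np.linalg.matrix_rank(H_lidar.T @ H_lidar)

# Stacking both factors gives a full-rank, solvable problem.
H = np.vstack([H_lidar, H_vis])
z = np.concatenate([z_lidar, z_vis])
est, *_ = np.linalg.lstsq(H, z, rcond=None)

print(rank_lidar_only)  # 1 -> LiDAR alone cannot recover the pose
print(est)              # recovers [1.0, 2.0]
```

The real system fuses nonlinear visual/LiDAR/IMU residuals over SE(3) poses, but the observability argument is the same.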
There is another new dataset called MCD. It offers a higher challenge than this one.
Thanks, I'll definitely try MCD. NTU-VIRAL was easy and didn't bring much insight.
@@kenjikoide6076 The original NTU-VIRAL sequences are easy. The new additional SPMS sequences are much tougher. So far only a select few LIO systems can run them.
Lol, I flew these sequences.
Trying TRO again?
No, it's accepted to another one :)
GitHub 404 😢
Sorry, we are finalizing the repository, it'll come out in a few weeks.
🥰
Very impressive!
I appreciate your work very much. Could you open-source it or contact me? I have been studying related topics recently.
Super strong!
Nice :) Which SLAM did you use for that? FAST-LIO or something else?
That was wonderful work. Congrats, my master 🙏
Nice Work! Looking forward to your paper being published!
Impressive. I assume this is with a depth camera. Which one?
It's an azure kinect.
and Livox MID360 was used in the outdoor experiment.
Unbelievable!😍
This is soooo cool🎉
I really appreciate your contributions to the open-source community😍😍
Nice! For map-based navigation, whenever I tried to match the point cloud to the map, it gave me a wrong pose estimate due to wrong correspondences, as there are too many match candidates. So I made the map as pairs of (pose, LiDAR scan) and matched against the LiDAR scan at that pose, which made the "map" file too big. Great work! I should read your paper to re-implement the navigation module.
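The (pose, scan) keyframe idea in the comment above can be sketched as follows. This is a hypothetical minimal version (all function names are made up, not from GLIM or hdl_global_localization): pick the stored keyframe nearest to a pose prior, then align the live scan against that single local scan instead of the whole map. A closed-form Kabsch alignment with known correspondences stands in for the real scan matcher (GICP/NDT).

```python
import numpy as np

def nearest_keyframe(keyframes, pose_prior):
    """keyframes: list of (position xyz, scan Nx3); returns the closest pair."""
    dists = [np.linalg.norm(p - pose_prior) for p, _ in keyframes]
    return keyframes[int(np.argmin(dists))]

def align_svd(source, target):
    """Closed-form rigid alignment (Kabsch) assuming known one-to-one
    correspondences; a real system would iterate GICP/NDT here."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    U, _, Vt = np.linalg.svd((source - mu_s).T @ (target - mu_t))
    S = np.eye(3)
    S[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    R = Vt.T @ S @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Tiny synthetic example: two keyframes; the live scan is keyframe 1's
# scan rotated 10 degrees about z and shifted.
rng = np.random.default_rng(0)
kf_scans = [rng.normal(size=(50, 3)), rng.normal(size=(50, 3))]
keyframes = [(np.array([0.0, 0.0, 0.0]), kf_scans[0]),
             (np.array([5.0, 0.0, 0.0]), kf_scans[1])]

theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.2, -0.1, 0.05])
live_scan = kf_scans[1] @ R_true.T + t_true

# Pose prior near keyframe 1 selects the right local scan to match.
_, kf_scan = nearest_keyframe(keyframes, np.array([4.8, 0.1, 0.0]))
R, t = align_svd(kf_scan, live_scan)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

Matching against one local keyframe avoids the ambiguous correspondences the comment describes, at the storage cost of keeping raw scans; downsampling keyframe scans is the usual way to shrink the map file.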
a $750+ football
amazing work
Wow that was cool!
Will the code be released after this paper is published?
The global localization part is available at: github.com/koide3/hdl_global_localization
@@kenjikoide6076 thanks, it's cool
there are too many monsters...
Awesome!!!! Is a code release being considered?
The global localization part is available at: github.com/koide3/hdl_global_localization