Are the loop-closure scan and reference scan required because of a lack of precision in the odometry? Or is this typical of all robotics? I know there are plenty of issues with real-world vs. simulated location/trajectory in quadrupeds. If a more sensitive ToF sensor were used, maybe an RGB-D camera, would you still lack precision? Or could you include redundant positioning with a secondary IMU?
Very nice :) taking care of visual odometry drift with a graph-based approach. I have a doubt: can the onboard IMU be used to take care of this drift? The main reason being that for this graph to do SLAM, it needs a closed graph path. Could the IMU make a difference, maybe with a Kalman filter?
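To make the Kalman-filter idea in the comment above concrete, here is a minimal 1D sketch (all noise values are hypothetical, not from the video) of fusing a drifting odometry increment with an occasional absolute fix. Note that an IMU alone also drifts, since position comes from double-integrating acceleration, which is why loop closure is still needed as the absolute correction:

```python
import random

random.seed(0)

def kalman_step(x, P, u, z, Q=0.1, R=0.25):
    """One predict/update cycle of a scalar Kalman filter."""
    # Predict: advance the state by the odometry increment u,
    # growing the variance by the process noise Q.
    x_pred = x + u
    P_pred = P + Q
    # Update: correct toward the external measurement z.
    K = P_pred / (P_pred + R)            # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                          # initial estimate and variance
true_pos = 0.0
for _ in range(50):
    true_pos += 1.0
    u = 1.0 + 0.05                       # odometry with a constant bias -> drift
    z = true_pos + random.gauss(0, 0.5)  # noisy absolute fix (e.g. a loop closure)
    x, P = kalman_step(x, P, u, z)

# Dead-reckoned odometry alone would be off by 50 * 0.05 = 2.5 here;
# the filtered estimate stays bounded instead of growing with time.
print(abs(x - true_pos))
```

The key point the sketch illustrates: the filter can only bound drift if *some* measurement (here `z`) is tied to an absolute reference, which in the video's setup is what the loop-closure/reference scan provides.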
Hey, compliments on the research work. Just one question: since you are using plain cardboard boxes and apparently there are not many features in the maze, how is the drone able to localize?
This is very insightful, thank you.
Very interesting video, awesome work!🤩👍