You are literally saving my life with these contents.
Great introduction to graph-based SLAM. Refreshing to see high-quality content about this topic accessible on YouTube. Highly appreciated!
The quality of this video and audio is crystal clear. Thank you for sharing this interesting knowledge.
Also the quality of explanations
Thank you Cyrill for teaching this and providing it for free. Also explained well.
Thank you, your lectures are amazing!
Best video on this topic. Thank you, Cyrill.
I had to stick it through until the end until things cleared up for me, but great lecture. Thanks!
Thanks for the high-quality content. I really respect the effort you have taken to create these lectures. Thank you once again.
Thank you
Thanks for sharing your knowledge on Pose Graphs.
Thank you so much for continuing to share such high quality education. This is by far some of the best content on SLAM available.
Thanks!
Thank you for sharing this presentation.
Awesome videos! Thank you very much.
So much wisdom in these videos. Bedtime entertainment.
Awesome video! Thanks for all the great explanations. It would be great if you could talk about marginalization of the Hessian matrix in a sliding-window method.
Thank you sir for sharing such wonderful lectures.
Best regards from Algeria, thank you!
Prof. Cyrill, many thanks for uploading this wonderful set of videos. If I may, I would like to ask a few questions (responses from the audience are also well appreciated):
a) I found it hard to understand the physical meaning of the matrix H, so the addition of 1 to H11 to fix x1 feels somewhat arbitrary to me. Although I am not sure, it could potentially be possible to fix x1 by introducing a fictitious measurement such as e11 = z1 - x1, where z1 = 0 (to fix it to zero) and Ω11 is a very high number (infinity?), as we want to express confidence in this measurement. However, by doing so we also add a b11 term (not only H11), which seems quite different from what has been described in the video.
b) My understanding is that the optimization algorithm used is called Newton's method (Newton-Raphson), whereas in the slides it is named the Gauss-Newton method. Am I mistaken?
c) I was trying to find this t2v Matlab function but did not really manage to - could anyone provide a link (or any other source of information) explaining what it does and how it does it?
d) Regarding the mapping: does the vehicle store all the measurements and regenerate the map from scratch every time the optimization step is performed?
I am sorry for the quite long comment. Thank you.
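Regarding (c), since audience responses were invited: t2v in the course's Matlab framework is a small helper that converts a homogeneous transformation matrix into a pose vector (v2t does the reverse). I could not locate the original file either, so the following Python sketch of the assumed 2D behavior is an illustration, not the course code:

```python
import numpy as np

def t2v(T):
    # Homogeneous 3x3 transform -> pose vector [x, y, theta].
    # (Sketch of the assumed behavior of the Matlab helper of the same name.)
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

def v2t(v):
    # Pose vector [x, y, theta] -> homogeneous 3x3 transform.
    c, s = np.cos(v[2]), np.sin(v[2])
    return np.array([[c, -s, v[0]],
                     [s,  c, v[1]],
                     [0,  0, 1.0]])

pose = np.array([1.0, 2.0, np.pi / 4])
assert np.allclose(t2v(v2t(pose)), pose)  # round trip recovers the pose
```

Extracting the angle via arctan2 of the rotation block keeps theta in (-pi, pi], which is why such a helper is preferred over storing angles directly.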
Thank you for making and sharing these videos. I can understand the measurement Z_ij^-1 in the error function, but I am curious about how we get X_i^-1 X_j - is it random, or deduced from loop closing?
Thanks for the great video and series! One question I have relates to the definition of the error function in the pose-graph setting: it is the norm of the residual rotation plus translation, but it seems that specifying a covariance is a must, since rotation and translation have different units. Are there good examples/mechanisms for constructing the covariance?
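Since audience input seems welcome here: one common, pragmatic mechanism is to assume independent translational and rotational noise, build a diagonal covariance in each component's native units, and use its inverse as the information matrix; the weighted error e^T Ω e then becomes unit-free. A small Python sketch (the noise values below are placeholder assumptions, not from the lecture):

```python
import numpy as np

# Assumed accuracies of the registration/sensor (placeholder values):
sigma_xy = 0.05                 # translation std dev in meters
sigma_theta = np.deg2rad(2.0)   # rotation std dev in radians

# Diagonal covariance for a 2D pose constraint [dx, dy, dtheta] ...
Sigma = np.diag([sigma_xy**2, sigma_xy**2, sigma_theta**2])
# ... and its inverse, the information matrix Omega that weights the error.
Omega = np.linalg.inv(Sigma)

# The weighted squared error e^T Omega e is dimensionless: each residual
# component is effectively divided by its own standard deviation.
e = np.array([0.10, -0.05, np.deg2rad(1.0)])  # example residual
chi2 = e @ Omega @ e
print(round(chi2, 6))  # -> 5.25  (i.e. 2^2 + 1^2 + 0.5^2 in std-dev units)
```

In practice, better covariances come from the sensor model or the scan-matcher itself (e.g. the Hessian of the registration cost), but the diagonal recipe above is a common starting point.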
Thank you for these amazing videos.
Could I by any chance find the algorithm for ORB-SLAM?
Thanks for the video, but I have a small question. In the example you have given, you calculated the actual edge as the distance between two nodes. What should the criterion be for 2-D or 3-D nodes?
Thank you for making these videos publicly available. Awesome lectures. I was wondering if there is a way to access the homework too?
The homework assignments are part of the exam admission process in Bonn, and thus sharing them is a bit tricky.
Thank you.
Thanks so much for sharing.
The illustration of the pose graph with two poses helps me understand this specific case. But this is related to the moment relocalization is detected, right? So the graph says you are at x, but the measurements of the loop closure say you should be around "omega". And the difference between these positions is the error, right? But in this case, due to the relocalization, I can know where I should be - but how can I find the error for the other, previous pose estimates?
Thanks in advance!
This content is gold.
Thanks for sharing this amazing content.
Thanks
In the video at 29:50, there is an explanation as follows: "There is a difference between what the observation tells me and what the current graph configuration tells me."
Where did this current graph configuration come from? How does one derive/calculate/arrive at this current graph configuration?
Excuse me, could you explain the H matrix to me and how it is built?
For detailed mathematical steps, please see this lecture -
ua-cam.com/video/FhwFyA0NQkE/v-deo.html
Thanks for your comment! This link helped me a lot :)
Thanks!!
Can you recommend how to update the map after optimizing the graph? Should one update only the current node, or all nodes in the graph? Thank you so much.
Awesome video!
Thanks!
Thank you for your course.
As the error function always has the same form between two nodes, are the Jacobians always the same? And do they take only the values 1 or -1? It's hard to understand this mathematical abstraction.
No, they can vary for every measurement. And they are different from -1 and 1 in nearly all real-world cases (if the functions are non-linear).
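To see this numerically, one can finite-difference the standard 2D pose-graph error e_ij = t2v(Z_ij^-1 (X_i^-1 X_j)): at a non-zero heading, the Jacobian entries involve sines and cosines of the current angles rather than just ±1. A sketch (the poses and measurement are made-up values; t2v/v2t are the usual pose/transform helpers):

```python
import numpy as np

def v2t(v):
    # Pose vector [x, y, theta] -> homogeneous 3x3 transform.
    c, s = np.cos(v[2]), np.sin(v[2])
    return np.array([[c, -s, v[0]], [s, c, v[1]], [0, 0, 1.0]])

def t2v(T):
    # Homogeneous 3x3 transform -> pose vector [x, y, theta].
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

def error(xi, xj, zij):
    # e_ij = t2v( Z_ij^-1 * (X_i^-1 * X_j) )
    return t2v(np.linalg.inv(v2t(zij)) @ (np.linalg.inv(v2t(xi)) @ v2t(xj)))

# Central-difference Jacobian of e_ij w.r.t. x_i at a non-trivial heading.
xi = np.array([0.0, 0.0, 0.3])   # made-up current poses
xj = np.array([1.0, 0.5, 0.4])
zij = np.array([1.0, 0.4, 0.1])  # made-up measurement
eps = 1e-6
A = np.zeros((3, 3))
for k in range(3):
    d = np.zeros(3); d[k] = eps
    A[:, k] = (error(xi + d, xj, zij) - error(xi - d, xj, zij)) / (2 * eps)
print(np.round(A, 3))  # translation block ~ -R(theta_i + theta_z)^T: cos/sin entries
```

Only in the linear 1D example of the lecture do the blocks collapse to plain ±1; here, for instance, A[0,0] is -cos(0.4), and only the angle-angle entry stays at -1.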
Thanks for the video. Do you have extra resources about information matrix and what is it and how to compute it?
The information matrix from the observation basically tells you how good (precise) your sensor is. Either the sensor has specs (precision of the depth reading for example) or it depends on how accurately you can determine the orientation to the object, which may relate directly to image/feature points that you extract.
Hi Cyrill! Great lecture! I have two minor questions: 1. Are there any conceivable benefits to using this method before having achieved the first loop closure? 2. Assuming I have finished the mapping process, do you have a reference on how to use the pose graph for pure localization? Or does one usually just use the corrected map for things such as MCL?
I have thought a little bit about these questions and realised that:
1. I need to look into question 1 some more.
2. Pose graphs, MCL, KF, and the like are just correction tools for localization. The main part of localization is motion tracking between states t-1 and t, which can be done with ICP, NDT, or any other motion-tracking algorithm (registration of lidar scans and matching of camera images). This is my understanding - please feel free to correct me if you have more information.
Hello, thanks for the great lectures! One question... how is it that e_{i,j} is not a matrix but a vector?
Because e_ij is the error vector (the difference between what you should measure and what you actually measured), not a matrix.
Truly great content. I enjoy the "light bulb" moments with the new content 2.0 "5 Minutes with Cyrill". I can only imagine the model for learning on the content 3.0? Maybe paid online access to seminars (1 day class) for in depth solving these equations for each of the topics. Keep going!
Hey Cyrill, thanks for the great materials you post on UA-cam! I have a question regarding the slide "Building the linear system" (50:22) as well as the subsequent "Algorithm" (51:38). Do the indices i,j of the matrix H bar and vector b bar correspond to matrix/vector indices, or are these completely different matrices/vectors? And what exactly is the form of those matrices H_bar and b_bar? As I understand it, A_ij and B_ij are matrices themselves, thus H_ij bar is itself a matrix and b_ij bar a vector.
Furthermore, I'd like to understand how exactly we obtain the matrix H and the vector b on the latter slide, given the matrices H_ij bar and vectors b_ij bar. Are these the same?
Greatly appreciate your help!
Yes, i,j are indices into the matrices/vectors.
Hi Prof. Stachniss. Another great video. You've taught me a lot. I do have 1 question. In the 1D problem when you need to "fix" the first x1 since everything is relative - clearly H12 is rank deficient since it is the outer product of a 2D vector. The rank has to be 1. This, as you said, is due to the relative nature of the update. Then you add (matlab notation) [1 0 ; 0 0] to H. Can you explain why this is equivalent to fixing the first node to some value? I see that indeed in the end delta_x = (0 1) therefore the first node is not changing positions. How did you know to add [1 0; 0 0] to H to achieve this? My only inclination here is to say that for any [a; b] [1 0; 0 0] * [a ; b] = [a ; 0] (again in matlab notation) so that indeed "a" remains the same. Is this the rationale?
Beyond that (and even including) this issue - this is an amazing video as are all your videos. I'm a huge fan! Thanks.
Hello Jeffrey, by any chance were you able to understand the reason for it?
Hi @Jeffrey! A great question! If you found an answer, please let us know.
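Since the thread asks for an answer: Jeffrey's rationale is essentially it. Adding [1 0; 0 0] is the Hessian contribution of an extra prior term on x1 (the "fictitious measurement" idea from the other comment), and because that prior is centered at x1's current value, its gradient - the b contribution - is zero, so the solve leaves x1 unchanged. A minimal numeric check in Python (my own sketch of the lecture's 1D example, with assumed values):

```python
import numpy as np

# Two 1D poses and one relative measurement z of (x2 - x1).
x = np.array([0.0, 0.8])           # current estimate (assumed values)
z = 1.0                            # measured relative displacement
e = (x[1] - x[0]) - z              # residual, here -0.2
J = np.array([[-1.0, 1.0]])        # Jacobian of e w.r.t. (x1, x2)

H = J.T @ J                        # = [[1,-1],[-1,1]], rank 1 (gauge freedom)
b = J.T @ np.array([e])            # = [0.2, -0.2]
assert np.linalg.matrix_rank(H) == 1

# Fixing x1: add the Hessian of a prior 0.5*(x1 - x1_current)^2, i.e. [[1,0],[0,0]].
# The prior's gradient vanishes at the current x1, so b is unchanged.
H_fixed = H + np.diag([1.0, 0.0])
dx = np.linalg.solve(H_fixed, -b)
# dx[0] is (numerically) zero: x1 stays put; dx[1] ~ 0.2 moves x2 so x2 - x1 = z.
print(dx)
```

Had the prior been centered elsewhere, a b term would appear too - which is exactly why the +1 trick, applied at the current value, only removes the rank deficiency without pulling the solution anywhere.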
Thank you for the video! I have one question. Considering that there is only one virtual measurement e_{i,j}, and given that the other poses do not appear in the error, how can those other positions be optimized, as shown at 1:01:27?
With only one relative measurement, you constrain only two poses relative to each other; you will not fix them globally. For that, you need an external reference, or to set your initial pose to something arbitrary. Otherwise, your H matrix will have a rank deficiency (a gauge freedom).
So the main difference between graph-SLAM and EKF-SLAM is that graph-SLAM does not assume the Markov property and allows updating old estimates upon receiving new information?
No, not really. The LS solution can relinearize; the EKF cannot. That is the main difference in its basic form. Advanced LS (e.g. robust estimators) can, however, do more…
@CyrillStachniss So if the system is completely linear, would the two methods give us the same result?
Hello prof, where can i find the homework assignments?
Send me an email
Mailed, thanks for the reply professor
I wonder why they are called constraints if they are basically meant to be violated.
It comes from "soft constraint" - it is not a hard constraint as in CSPs.
too much foreplay