The professor explains things in exceptional detail; teaching this thorough is almost nonexistent in China. Thank you very much for resolving conceptual problems that had troubled me for a long time.
Does anyone on the planet convey remote sensing knowledge better than this man? I think not.
Thanks ;-)
Much better than my PI, totally agree XD
Sharing knowledge is the best thing you can do. Most of us will never have access to great teachers like Cyrill.
The explanations are neat and easily understandable. Thank you very much Cyrill
Thanks
This is a very insightful lecture Cyrill. You are the best; you make difficult concepts easy in a short video.
Great lecture!!!! I'm dealing with an ICP algorithm for my thesis. Thanks for the lecture!!!
Thank you so much for this lecture, you are my idol!; This lecture is like superman on execution
The lecture is very clear. Thank you professor
Such a great lecture! Thank you
This not only provided me with everything I needed to know about this subject for my thesis, it was also really really enjoyable to watch!
Edit: Could the solution matrix R be a reflection matrix, since U or V may be one?
best way to spend spring break
@41 mins, on slide 41: I think there is no reason for V*sqrt(D) to be PD. For example, V can be a 180-degree rotation and D the identity matrix, and then surely x^T (V*sqrt(D)) x < 0 for some x. The matrix that should be PD is AA^T, a fact that has nothing to do with the properties of the SVD.
Thanks for the amazing videos!
Hi David. Yes, you are absolutely right, the reason I mentioned is not correct. Arun et al. (1987) explain it in "Least-Squares Fitting of Two 3-D Point Sets": as long as the transformation is a rotation, A is positive definite. For reflections, however, that would not be the case. So the strategy should be: either I know we have only rotations in the game, and then I can assume it to be positive definite; if not, I need to check that my R computed via the SVD is a proper rotation. If it is not, I know it is a reflection (or a degenerate case). Thanks for pointing out that my explanation was not correct. Hope that helps.
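The check described above can be sketched in a few lines of NumPy (a minimal illustration, not code from the lecture; `H` is the cross-covariance matrix from the slides, and the function name is mine). A determinant of +1 means a proper rotation; -1 signals the reflection (or degenerate) case that Arun et al. (1987) warn about:

```python
import numpy as np

def rotation_from_svd(H):
    """Candidate R from cross-covariance H via SVD, with a reflection flag.

    Returns (R, is_reflection). det(R) == +1 means a proper rotation;
    det(R) == -1 signals a reflection or a degenerate configuration.
    """
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T              # R = V U^T, one of the two common conventions
    return R, np.linalg.det(R) < 0
```

For the identity cross-covariance this yields the identity rotation; a cross-covariance with negative determinant, e.g. diag(1, 1, -1), is flagged as a reflection.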
Great lecture! Thanks Cyrill
That was a great lecture, thanks professor.
You are the absolute best!
Great lecture!!! Thanks for sharing!
Thanks for the incredible lecture! Just a slight question: in what ways is this different from the Kabsch-Umeyama algorithm described by Jim Lawrence et al. in "A Purely Algebraic Justification of the Kabsch-Umeyama Algorithm"? Or is this a different approach to arrive at the same algorithm?
A real-time C# implementation with an explanation would have made this even greater, but it's still good.
Why is R computed as VU^T on slide 13 but as UV^T on slide 17? Thank you
Hi Cyrill, I am working with IMD data. Is it useful for flood mapping over the years, and how does it work with year-wise data of the same region over 30-50 years? Please reply, it's urgent.
Is there any literature to read more about this and related topics? Very grateful for your lecture, Prof; it is very well explained!
Arun et al. in PAMI 1987: "Least-squares fitting of two 3-D point sets" - that is where the magic comes from...
@@CyrillStachniss thank you so much for your reply :) will read more into it
Hi, I'm really thankful for your great lecture.
BTW, I'm not sure about your note.
According to your note, when I define an = (xn - x0) and bn = (yn - y0),
R = UV^T is the right equation.
Is my understanding correct?
Then, how can we get tr(RH) = tr(VDV^T)?
Can you explain more about it?
Thanks,
Hello guys, lectures on camera lidar calibration... anything planned for this?!
Despite being an interesting problem, it is not currently part of the curriculum.
@@CyrillStachniss Yeah ok thank you professor :-)
Very very useful lecture, Thank you.
I just wanted to make sure: do we get a translation and rotation for the whole surface, or one for each correspondence point?
You get a single translation vector and a single rotation based on all correspondences considered jointly. I hope that answers your question.
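The single-pair answer above can be made concrete with a minimal NumPy sketch of the SVD-based least-squares alignment in the style of Arun et al. (1987). This is my own illustration, not code from the lecture; the function name and variable names are assumptions. All correspondences enter one cross-covariance matrix, and exactly one R and one t come out:

```python
import numpy as np

def align(x, y):
    """Least-squares rigid alignment y ~ R x + t over all correspondences.

    x, y: (N, 3) arrays of corresponding points.
    Returns a single rotation R (3x3) and translation t (3,), estimated
    jointly from every correspondence, following Arun et al. (1987).
    """
    x0, y0 = x.mean(axis=0), y.mean(axis=0)   # centroids
    H = (x - x0).T @ (y - y0)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                            # R = V U^T (one convention; see thread)
    if np.linalg.det(R) < 0:                  # guard against the reflection case
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = y0 - R @ x0                           # translation from the rotated centroid
    return R, t
```

With exact, noise-free correspondences this recovers the ground-truth transformation; with noise it gives the joint least-squares estimate.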
Please correct slide 41: it should be (VD^(1/2))^T, not (D^(1/2)V)^T.
Why does the optimal solution hold only for the singular value decomposition? A similar result can be shown using the eigenvalue decomposition (spectral decomposition).
You may be able to come up with an optimal solution differently. But if we have one way to compute an optimal solution (i.e., there is no better one, only maybe an equally good one), it is fine to stick to it.
34:38, on slide 37: the definition an = (yn - y0), bn = (xn - x0) with R* = argmax(sum pn bn^T R an) is backwards from the definition of R* on the previous slide (argmax sum pn (yn - y0)^T R (xn - x0)), which changes the meaning of R, and I'm not sure which definition is correct for solving for the translation vector.
@@CyrillStachniss But rotation matrices are orthogonal (R R^T = I)
I agree that in principle it doesn't really matter whether it's R or R^T here, or whether x and y (or a and b) are switched (it's just a distance). The only thing that matters is whether R is consistent with the definition of the translation vector, which I haven't checked, but I'm sure it's fine; I just wanted to clarify.
BTW, I really appreciate that you post these -- I learn a lot from these lectures and I don't mean to sound like I'm complaining!
I will go back to the derivation and check again
You raised an absolutely valid point; my slides are inconsistent. Either one changes the definition of a_n and b_n, or one transposes the matrix R (how it is composed from the SVD). In the literature, both variants are used: either changing a_n and b_n, or using R = UV^T versus R = VU^T. Sorry for this mistake.
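That the two conventions only differ by a transpose can be verified numerically (a quick sketch of my own; the function and variable names are assumptions). Swapping a_n and b_n transposes the cross-covariance H, and R = VU^T computed from H^T is exactly the transpose, i.e. the inverse rotation, of the one computed from H:

```python
import numpy as np

def rot_from_cross_cov(H):
    # R = V U^T from the SVD H = U S V^T, as on the slides
    U, _, Vt = np.linalg.svd(H)
    return Vt.T @ U.T

rng = np.random.default_rng(1)
a = rng.normal(size=(10, 3))       # centered points a_n = x_n - x0
b = rng.normal(size=(10, 3))       # centered points b_n = y_n - y0

R1 = rot_from_cross_cov(a.T @ b)   # H  = sum_n a_n b_n^T
R2 = rot_from_cross_cov(b.T @ a)   # a_n and b_n swapped, i.e. H^T
assert np.allclose(R1, R2.T)       # the two conventions are inverses
```

So both variants are fine as long as the translation vector is computed with the matching convention.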
On the second-to-last bullet point of the *Simplified correspondences* slide at 8:55, shouldn't it say *"So that the point set { \overline{x}_n } will be as close as possible to the point set { y_n }"* ? The original (reordered) point set { x_n } has a fixed MSE with regard to its distance from { y_n }, but we want to minimise the MSE of the *transformed* set { \overline{x}_n }, right?
Correct, that is my mistake. It should be the transformed set { \overline{x}_n }. Sorry for that!
All good, thank you for the feedback! :)
Excellent!
Very informative video as always. Could you also share the name of the author or the research paper for this method?
Extrinsic calibration: assuming I have sets of point clouds from rigidly connected sensors with roughly 30-50% overlapping FoV, can this algorithm estimate an extrinsic calibration? Is an initial guess mandatory, or does it only increase the speed?
I'm commenting just in case someone answers, so I get a notification.
The limited overlap is no problem at all if you know the data association (which this video assumes). A limited overlap with unknown DA is not too much of an issue either, as long as you have a good initial guess for the data association OR an initial guess for the transformation, so that the guessed data association is not too bad.
Thanks for the great lecture, Would it be possible for me to get the slides for this lecture?
Yes, please send me an email.
@@CyrillStachniss Sir, your lectures make the concepts very easy to understand. Can you share your email, or let me know where I can find it? Thanks again, sir.
Hi sir, what is the intuition behind adding weights to the coordinates? What does that magnitude mean?
You weight the point differences to put a lower or higher focus on certain point correspondences.
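A sketch of how such per-correspondence weights p_n could enter the SVD-based alignment (my own illustration under the assumption of a standard weighted least-squares formulation; function and variable names are mine): the weights scale each pair's contribution to both the centroids and the cross-covariance.

```python
import numpy as np

def weighted_align(x, y, p):
    """Weighted least-squares rigid alignment y ~ R x + t.

    p: per-correspondence weights p_n >= 0; a larger p_n forces the
    fit to respect that correspondence more strongly.
    """
    w = np.asarray(p, dtype=float)
    w = w / w.sum()                              # normalize the weights
    x0, y0 = w @ x, w @ y                        # weighted centroids
    H = (x - x0).T @ ((y - y0) * w[:, None])     # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # reflection guard
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = y0 - R @ x0
    return R, t
```

With uniform weights this reduces to the unweighted solution; with non-uniform weights, pairs with large p_n dominate the estimate.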
@@CyrillStachniss OK sir, but how are you choosing the magnitudes? Are you just using some features as the focus points and automatically giving them a higher magnitude of p, sir?