Thanks for making this nice explanation public and freely accessible.
Shouldn't the right-hand side of the matrix form of the equation at 12:02 be a single column vector? The "] [" brackets between the transposed vectors and r, s shouldn't be there.
Well spotted. This is a mistake on the slides. On the right-hand side of the lower equations, the "] [" must be removed; otherwise we would not get the desired 2D vector. Thanks for pointing out this mistake.
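For readers following along, here is the corrected system as I reconstruct it from the orthogonality conditions used in the lecture (with the two lines written as x(λ) = p + λr and y(μ) = q + μs); note that the right-hand side is a single 2D column vector:

```latex
\begin{bmatrix} r^\top r & -s^\top r \\ r^\top s & -s^\top s \end{bmatrix}
\begin{bmatrix} \lambda \\ \mu \end{bmatrix}
=
\begin{bmatrix} (q-p)^\top r \\ (q-p)^\top s \end{bmatrix}
```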
I have watched almost all the videos from Prof. Stachniss. Thank you for your lecture.
I think the equations at 9:01 are incorrect.
Even after Professor Cyrill's correction that (f − g)⋅r should be (g − f)⋅r, the following equations still look wrong.
I believe they should be (q + μs − p − λr)⋅s = 0 and (q + μs − p − λr)⋅r = 0.
You're right, although the left vector should still be (f − g), to stay consistent with the two equations on slide 11 where the actual parameters are substituted in; those two equations are correct. The left vector in the equations on slide 9 should have been (f − g) = (p + λr) − (q + μs).
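Spelled out with f = p + λr and g = q + μs, the two conditions on slide 9 would then read:

```latex
(f - g)\cdot r = \bigl(p + \lambda r - (q + \mu s)\bigr)\cdot r = 0,
\qquad
(f - g)\cdot s = \bigl(p + \lambda r - (q + \mu s)\bigr)\cdot s = 0
```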
Corrections:
11:56 Mistake in the brackets in the last row on the right-hand side: remove the inner "] [" so that the right-hand side becomes a single column vector.
Finally it’s here. Was waiting for this!
@CyrillStachniss Thanks for the great video, professor. I have a question on the quality of triangulation: is there a way I can estimate the uncertainty, i.e. the covariance matrix, of the triangulated point? The lines may not intersect perfectly (due to noise in the relative poses), and the pixel size defines a larger unprojected area. Is there any source where I can learn how to encode this uncertainty as a covariance matrix? You show this for the two-view case; is there a way to estimate it for the multi-view case?
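Not an official answer, but one practical way to get such a covariance is to propagate the measurement noise numerically. Below is a minimal Monte-Carlo sketch, assuming the midpoint triangulation from the lecture; the camera geometry and the noise level sigma_dir are made-up values, and the same idea extends to the multi-view case by re-triangulating from all perturbed rays per sample:

```python
import numpy as np

def triangulate_midpoint(p, r, q, s):
    """Midpoint between the closest points of the lines x = p + lam*r and y = q + mu*s."""
    A = np.array([[r @ r, -(s @ r)],
                  [r @ s, -(s @ s)]])
    b = np.array([(q - p) @ r, (q - p) @ s])
    lam, mu = np.linalg.solve(A, b)
    return 0.5 * ((p + lam * r) + (q + mu * s))

# Hypothetical two-camera setup (all numbers made up for illustration).
rng = np.random.default_rng(0)
p, q = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])  # camera centers
X_true = np.array([0.3, 0.2, 5.0])                           # true 3D point
r = (X_true - p) / np.linalg.norm(X_true - p)                # ideal ray directions
s = (X_true - q) / np.linalg.norm(X_true - q)

sigma_dir = 1e-3  # assumed direction noise, standing in for pixel + pose noise
samples = []
for _ in range(5000):
    rn = r + sigma_dir * rng.standard_normal(3)  # perturb each ray direction
    sn = s + sigma_dir * rng.standard_normal(3)
    samples.append(triangulate_midpoint(p, rn / np.linalg.norm(rn),
                                        q, sn / np.linalg.norm(sn)))

cov = np.cov(np.array(samples).T)  # empirical 3x3 covariance of the 3D point
print(cov)
```

The rigorous alternative is first-order error propagation of the pixel and pose covariances through the triangulation solver.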
Again, this comes exactly when I need it 👌
Thank you, Cyrill, for this streamlined explanation. May I ask for the name of the reference or paper this is based on?
Created based on notes by Wolfgang Förstner
Very well explained. Waiting for a video on sensor fusion of camera images and 3D point clouds.
Hey, is there any article, journal paper, or book where I can find these explanations, especially the geometric solution?
Another question I had is about hand-eye calibration. I've tried to capture images of a pattern while recording the position of the robot. I expected to get one fixed result, but that's not the case! Obviously the transformation between the robot and camera coordinate systems is fixed, but I think it can differ slightly in the x element of the translation, because it depends on the focal length. I've captured images over a range of movements (1-4 cm from the pattern), but the estimated transformation seems to give the best result for the middle of the range. Would you please shed some light on this? I cannot end up with an estimated transformation that gives good results at different distances.
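For context (not from the lecture): hand-eye calibration is commonly posed as AX = XB over many pose pairs, and inaccurate intrinsics or too little rotational variation between the poses can produce exactly this kind of distance-dependent drift. A hedged sketch using OpenCV's solver, assuming the robot poses and the per-image pattern poses (e.g., from cv2.solvePnP) have already been collected:

```python
import cv2

def hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Solve AX = XB for the fixed camera-to-gripper transform.

    Inputs are lists of 3x3 rotations and 3x1 translations: the recorded
    robot gripper poses in the base frame, and the calibration-pattern
    poses in the camera frame (e.g., from cv2.solvePnP per image).
    """
    return cv2.calibrateHandEye(R_gripper2base, t_gripper2base,
                                R_target2cam, t_target2cam,
                                method=cv2.CALIB_HAND_EYE_TSAI)  # Tsai-Lenz
```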
Thank you for the great lecture. I think the Matlab implementation for triangulation uses SVD, which is a linear solution. Do you know of any other implementation that offers a non-linear solution for triangulation, maybe one you've used in your lab?
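Not aware of the lab's code, but a common non-linear approach is to take the linear SVD (DLT) solution as the initial guess and then minimize the reprojection error. A minimal sketch (my own, not from the lecture), assuming known 3x4 projection matrices P1, P2 and matched pixel coordinates x1, x2:

```python
import numpy as np
from scipy.optimize import least_squares

def triangulate_dlt(P1, P2, x1, x2):
    """Linear triangulation: SVD solution of the homogeneous system A X = 0."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]           # right singular vector of the smallest singular value
    return X[:3] / X[3]  # de-homogenize

def refine_nonlinear(X0, P1, P2, x1, x2):
    """Refine the linear estimate by minimizing the reprojection error."""
    def residuals(X):
        res = []
        for P, x in ((P1, x1), (P2, x2)):
            proj = P @ np.append(X, 1.0)
            res.extend(proj[:2] / proj[2] - x)  # pixel-space residual
        return np.asarray(res)
    return least_squares(residuals, X0).x

# usage (hypothetical inputs): X0 = triangulate_dlt(P1, P2, x1, x2)
#                              X  = refine_nonlinear(X0, P1, P2, x1, x2)
```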
How can one find the camera constant c for a real camera during the calibration process? It would be a great help if anyone could answer that.
See the video on camera calibration (Zhang's method) in my list of videos.
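To make that concrete, here is a minimal sketch using OpenCV's implementation of Zhang's method; the folder name, board size (9x6 inner corners), and square size are assumptions. The camera constant c shows up as the focal-length entries of the calibration matrix K:

```python
import glob
import cv2
import numpy as np

# Hypothetical checkerboard: 9x6 inner corners, 25 mm squares, images in calib/.
pattern, square = (9, 6), 0.025
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the calibration matrix; the camera constant c corresponds to the
# focal-length entries K[0, 0] and K[1, 1] (in pixels).
ret, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts,
                                         gray.shape[::-1], None, None)
print("camera constant c (pixels):", K[0, 0], K[1, 1])
```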
I have a question on "absolute orientation". If we can estimate 3D points from a stereo camera (i.e., we know the baseline) and have control points w.r.t. the global frame, then we don't need to estimate the "scale" parameter, right? In that case, 6 DoF?
If the baseline is perfect, no. Otherwise a scale correction can be useful.
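A sketch of what that looks like in practice (Umeyama's variant of Horn's absolute-orientation method, not code from the lecture; function and variable names are mine): with a trustworthy baseline you can fix s = 1, leaving only the 6 pose DoF.

```python
import numpy as np

def absolute_orientation(X, Y, with_scale=True):
    """Estimate the similarity transform Y_i ≈ s * R @ X_i + t (Umeyama).

    X, Y: (N, 3) corresponding 3D point sets (local frame, global frame).
    Set with_scale=False when the stereo baseline already fixes the scale;
    then s = 1 and only the 6 pose DoF (rotation + translation) remain.
    """
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mu_x, Y - mu_y
    Sigma = Yc.T @ Xc / len(X)                    # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(Sigma)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_x = (Xc ** 2).sum() / len(X)
    s = (D * S.diagonal()).sum() / var_x if with_scale else 1.0
    t = mu_y - s * R @ mu_x
    return s, R, t
```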
Dr. Cyrill, is this algorithm used to generate the DSM (dense surface model) as a point cloud? If not, which one does photogrammetric software such as PhotoModeler use?
Here you mention that you get the 3D points in the local frame: ua-cam.com/video/UZlRhEUWSas/v-deo.html. However, we don't have the scale information from the essential matrix estimation until we get the control points. Am I missing something?
For the photogrammetric model, we do not have the scale. If we use a stereo setup with known baseline, we have a good estimate. Thus, it depends on the precise camera setup.
Thank you so much, Professor.
I really like these courses, but can someone tell me why there are ads every 3 to 4 minutes? It is really annoying and wasn't the case before...
Sorry for that. I have now disabled mid-roll ads.
@CyrillStachniss Thank you very much for your kindness.
Amazing!!!
I didn't like this lecture. Many points in it were not as clear as they were in other lectures.
Don't