ICP & Point Cloud Registration - Part 1: Known Data Association & SVD (Cyrill Stachniss, 2021)

  • Published 20 Sep 2024

COMMENTS • 53

  • @dawn123fm
    @dawn123fm 2 months ago +3

    The teacher explains everything in exceptional detail; there is hardly any explanation this thorough back home. Thank you very much for helping me resolve a conceptual problem that had troubled me for a long time.

  • @matthewwithum8372
    @matthewwithum8372 3 years ago +7

    Does anyone on the planet convey remote sensing knowledge better than this man? I think not.

  • @bithigh8301
    @bithigh8301 7 months ago +1

    Sharing knowledge is the best thing you can do. Most of us will not have the chance to have access to great teachers like Cyrill.
    The explanations are neat and easily understandable. Thank you very much Cyrill

  • @georgebethel1320
    @georgebethel1320 2 years ago +6

    This is a very insightful lecture Cyrill. You are the best; you make difficult concepts easy in a short video.

  • @maryG128
    @maryG128 1 year ago +1

    Great lecture!!!! I'm dealing with an ICP algorithm for my thesis. Thanks for the lecture!!!

  • @Shaharsarshalom1
    @Shaharsarshalom1 1 year ago +1

    Thank you so much for this lecture, you are my idol! The execution of this lecture is superb.

  • @linray707
    @linray707 8 months ago +1

    The lecture is very clear. Thank you professor

  • @antoniogarest7516
    @antoniogarest7516 2 years ago +5

    Such a great lecture! Thank you

  • @WasserundLuft
    @WasserundLuft 2 years ago +2

    This not only provided me with everything I needed to know about this subject for my thesis, it was also really really enjoyable to watch!
    Edit: Could the solution matrix R be a reflection matrix, since U or V may be one?

  • @oldcowbb
    @oldcowbb 3 years ago +2

    best way to spend spring break

  • @davidarnon2775
    @davidarnon2775 3 years ago +3

    At 41 minutes, on slide 41, I think there is no reason for V*sqrt(D) to be PD: for example, V can be a 180-degree rotation and D the identity matrix, and surely x^T (V*sqrt(D)) x < 0. The matrix that should be PD is AA^T, a fact that has nothing to do with the properties of the SVD.
    Thanks for the amazing videos!

    • @CyrillStachniss
      @CyrillStachniss 3 years ago +3

      Hi David. Yes, you are absolutely right; the reason I mentioned is not correct. Arun et al. (1987) explain it in "Least-Squares Fitting of Two 3-D Point Sets": as long as we talk about rotations for the transformation, A is positive definite. For reflections, however, that would not be the case. So the strategy should be: either I know we have only rotations in the game, and then I can assume it to be positive definite; if not, I need to check that my R computed via SVD is positive definite. If it is not, I know it is a reflection (or a degenerate case). Thanks for pointing out that my explanation was not correct. Hope that helps.
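
For readers who want to see the reflection check from this thread in concrete form, here is a minimal numpy sketch of the SVD-based alignment from Arun et al. (1987). It is not code from the lecture; the function name align_svd and all variable names are illustrative.

import numpy as np

def align_svd(x, y):
    """Estimate R, t such that R @ x_n + t is close to y_n, for Nx3 arrays
    x and y with known one-to-one correspondences (Arun et al., 1987)."""
    x0 = x.mean(axis=0)            # centroid of the first point set
    y0 = y.mean(axis=0)            # centroid of the second point set
    H = (x - x0).T @ (y - y0)      # 3x3 cross-covariance matrix
    U, D, Vt = np.linalg.svd(H)    # H = U @ diag(D) @ Vt
    R = Vt.T @ U.T                 # candidate solution
    if np.linalg.det(R) < 0:       # det = -1: a reflection, not a rotation
        Vt[-1, :] *= -1            # flip the singular vector belonging to the
        R = Vt.T @ U.T             # smallest singular value and recompute R
    t = y0 - R @ x0                # translation from the rotated centroid
    return R, t

With this check, det(R) = -1 flags the reflection (or degenerate) case mentioned in the reply above, and flipping the sign of the last singular vector yields the closest proper rotation.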

  • @francocipollone4679
    @francocipollone4679 2 years ago +1

    Great lecture! Thanks Cyrill

  • @majidseydgar8643
    @majidseydgar8643 3 years ago +2

    That was a great lecture, thanks professor.

  • @harrypotter1155
    @harrypotter1155 2 years ago

    You are the absolute best!

  • @longfeizhang4510
    @longfeizhang4510 1 year ago

    Great lecture!!! Thanks for sharing!

  • @skodsrs9453
    @skodsrs9453 1 year ago

    Thanks for the incredible lecture! Just a slight question: in what ways is this different from the Kabsch-Umeyama algorithm described by Jim Lawrence et al. in "A Purely Algebraic Justification of the Kabsch-Umeyama Algorithm"? Or is this a different approach to arrive at the same algorithm?

  • @snouzz-gaming
    @snouzz-gaming 2 years ago

    Real-time C# code with a programming walkthrough and explanation would have made this even better, but it's still good.

  • @lirec6
    @lirec6 2 years ago +1

    Why is R computed as VU^T on slide 13 but as UV^T on slide 17? Thank you

  • @venkatalakshmi7772
    @venkatalakshmi7772 9 days ago

    Hi Cyrill, I am working on IMD data. Is it useful for flood mapping over the years, and how does it work when associating year-wise data of the same region over 30-50 years? Please reply, it's urgent.

  • @ilhamwicaksono5802
    @ilhamwicaksono5802 3 years ago +2

    Is there any literature to read more about this and related stuff? Very very grateful for your lecture Prof, very very well explained!

    • @CyrillStachniss
      @CyrillStachniss 3 years ago +4

      Arun et al. in PAMI 1987: "Least-squares fitting of two 3-D point sets" - that is where the magic comes from...

    • @ilhamwicaksono5802
      @ilhamwicaksono5802 3 years ago

      @@CyrillStachniss thank you so much for your reply :) will read more into it

  • @김태영-s2e9k
    @김태영-s2e9k 3 years ago +1

    Hi, thanks a lot for your great lecture.
    BTW, I'm not sure about your note.
    According to your note, when I define a_n = (x_n - x_0) and b_n = (y_n - y_0),
    R = UV^T is the right equation.
    Is my understanding correct?
    Then, how can we get tr(RH) = tr(VDV^T)?
    Can you explain more about that?
    Thanks,

  • @barath_
    @barath_ 3 years ago +3

    Hello guys, lectures on camera-LiDAR calibration... is anything planned for this?!

    • @CyrillStachniss
      @CyrillStachniss 3 years ago

      Despite being an interesting problem, it is not part of the curriculum right now.

    • @barath_
      @barath_ 3 years ago

      @@CyrillStachniss Yeah ok thank you professor :-)

  • @mahaaforoughnia9076
    @mahaaforoughnia9076 3 years ago

    Very, very useful lecture, thank you.
    I just wanted to make sure: do we get the translation and rotation vectors for the whole surface, or for each correspondence point?

    • @CyrillStachniss
      @CyrillStachniss 3 years ago +2

      You get a single translation vector and a single rotation based on all correspondences considered jointly. I hope that answers your question.

  • @sujandhali2483
    @sujandhali2483 2 years ago

    Please correct slide 41: it should be (VD^(1/2))^T, not (D^(1/2)V)^T.
    Why does the optimal solution hold only for the singular value decomposition? A similar result can be shown using the eigenvalue decomposition (spectral decomposition).

    • @CyrillStachniss
      @CyrillStachniss 2 years ago +1

      You may be able to come up with an optimal solution differently. But if we have one way to compute an optimal solution (i.e., there is no better one, only maybe an equally good one), it is fine to stick to it.
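
To make the slide-41 correction discussed in this thread concrete, here is a small numpy check (illustrative, not from the slides): for H = UDV^T and R = VU^T we get RH = VDV^T, which factors as (VD^(1/2))(VD^(1/2))^T and is therefore symmetric positive semi-definite.

import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 3))                   # arbitrary cross-covariance matrix
U, D, Vt = np.linalg.svd(H)                       # H = U @ np.diag(D) @ Vt
R = Vt.T @ U.T                                    # SVD-based estimate of R

RH = R @ H
W = Vt.T @ np.diag(np.sqrt(D))                    # W = V D^(1/2)
print(np.allclose(RH, Vt.T @ np.diag(D) @ Vt))    # R H equals V D V^T
print(np.allclose(RH, W @ W.T))                   # ... and (V D^(1/2)) (V D^(1/2))^T
print(np.all(np.linalg.eigvalsh(RH) >= -1e-12))   # so its eigenvalues are non-negative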

  • @a1k0n
    @a1k0n 3 years ago +2

    34:38 On slide 37, the definition a_n = (y_n - y_0) and b_n = (x_n - x_0), with R* = argmax sum_n (b_n^T R a_n) p_n, is backwards from the definition of R* on the previous slide (argmax sum_n (y_n - y_0)^T R (x_n - x_0) p_n), which changes the meaning of R, and I'm not sure which definition is correct for solving for the translation vector.

    • @a1k0n
      @a1k0n 3 years ago

      @@CyrillStachniss But rotation matrices are orthogonal (R R^T = I)

    • @a1k0n
      @a1k0n 3 years ago

      I agree in principle it doesn't really matter whether it's R or R^T here and whether x and y or a and b are switched (it's just a distance); the only thing that matters is whether R is consistent with the definition of the translation vector, which I haven't checked but I'm sure it's fine; I just wanted to clarify.
      BTW, I really appreciate that you post these -- I learn a lot from these lectures and I don't mean to sound like I'm complaining!

    • @CyrillStachniss
      @CyrillStachniss 3 years ago

      I will go back to the derivation and check again

    • @CyrillStachniss
      @CyrillStachniss 3 years ago +5

      You raised an absolutely valid point; my slides are inconsistent. Either one changes the definition of a_n and b_n, or one transposes the matrix R (how it is composed out of the SVD). In the literature both variants are used: either swapping a_n and b_n, or using R = UV^T versus R = VU^T. Sorry for this mistake.
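
A small numpy demonstration of the convention issue resolved in this thread (illustrative, not from the slides): swapping a_n and b_n transposes the cross-covariance matrix H, and the resulting rotation is the transpose of the other convention's, so either works as long as the translation is computed with the R that maps the x-set onto the y-set.

import numpy as np

def rotation_from_H(H):
    """R = V U^T for H = U D V^T (the reflection check is omitted for brevity)."""
    U, _, Vt = np.linalg.svd(H)
    return Vt.T @ U.T

rng = np.random.default_rng(1)
x = rng.standard_normal((10, 3))                  # random source points
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0],                  # a known rotation about z
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
y = x @ R_true.T + t_true                         # targets y_n = R_true x_n + t_true

a = x - x.mean(axis=0)                            # a_n from the x-set
b = y - y.mean(axis=0)                            # b_n from the y-set
R1 = rotation_from_H(a.T @ b)                     # convention 1: H = sum_n a_n b_n^T
R2 = rotation_from_H(b.T @ a)                     # convention 2: a_n and b_n swapped
print(np.allclose(R1, R2.T))                      # the two results are transposes
print(np.allclose(R1, R_true))                    # R1 is the one mapping x onto y
print(np.allclose(y.mean(axis=0) - R1 @ x.mean(axis=0), t_true))  # consistent translation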

  • @ChrisOffner
    @ChrisOffner 2 years ago +1

    On the second-to-last bullet point of the *Simplified correspondences* slide at 8:55, shouldn't it say *"So that the point set { \overline{x}_n } will be as close as possible to the point set { y_n }"* ? The original (reordered) point set { x_n } has a fixed MSE with regard to its distance from { y_n }, but we want to minimise the MSE of the *transformed* set { \overline{x}_n }, right?

    • @CyrillStachniss
      @CyrillStachniss 2 years ago +1

      Correct, that is my mistake. It should be the transformed set { \overline{x}_n }. Sorry for that!

    • @ChrisOffner
      @ChrisOffner 2 years ago

      All good, thank you for the feedback! :)
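
Written out as code, the corrected objective from the exchange above is the mean squared error between the transformed point set { \overline{x}_n } and the target set { y_n }. This is a minimal sketch with illustrative names.

import numpy as np

def mse_after_transform(R, t, x, y):
    """Mean squared error (1/N) * sum_n ||R x_n + t - y_n||^2 for Nx3 arrays."""
    x_bar = x @ R.T + t            # the transformed point set { \overline{x}_n }
    return np.mean(np.sum((x_bar - y) ** 2, axis=1))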

  • @marlonmalheiros8395
    @marlonmalheiros8395 2 years ago

    Excellent!

  • @hetshah5260
    @hetshah5260 2 years ago

    Very informative video, as always. Could you also share the name of the author or the research paper for this method?

  • @nilshamacher5064
    @nilshamacher5064 3 years ago +1

    Extrinsic calibration: Assuming I have sets of point clouds from rigidly connected sensors with roughly 30-50% overlapping FoV, is this algorithm able to estimate an extrinsic calibration? Is an initial guess mandatory, or does it only increase the speed?

    • @ilhamwicaksono5802
      @ilhamwicaksono5802 3 years ago

      I'm commenting just in case someone answers, so that I get a notification.

    • @CyrillStachniss
      @CyrillStachniss 3 years ago +1

      The limited overlap is no problem at all if you know the data association (which this video assumes). A limited overlap with unknown DA is not too much of an issue either, as long as you have a good initial guess for the data association OR an initial guess for the transformation, so that the guessed data association is not too bad.

  • @shettyrockstarhendrx
    @shettyrockstarhendrx 3 years ago

    Thanks for the great lecture. Would it be possible for me to get the slides for this lecture?

    • @CyrillStachniss
      @CyrillStachniss 3 years ago

      Yes, please send me an email.

    • @maheshmohite5791
      @maheshmohite5791 2 years ago

      @@CyrillStachniss Sir, your lectures are very good for understanding the concepts. Can you share your email, or let me know where I can find it? Thanks again, sir.

  • @hariharanramamurthy9946
    @hariharanramamurthy9946 1 year ago

    Hi sir, what is the intuition behind adding a weight to the coordinates? What does that magnitude mean?

    • @CyrillStachniss
      @CyrillStachniss 1 year ago

      You weight the point differences to put a lower or higher focus on certain point correspondences.

    • @hariharanramamurthy9946
      @hariharanramamurthy9946 1 year ago

      @@CyrillStachniss OK sir, but how do you provide the array of magnitudes? Are you just using some features as the focus points and automatically giving them a higher magnitude of p?
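
Following up on the weighting discussion in this thread, here is a minimal sketch (numpy only, illustrative names) of how per-correspondence weights p_n enter the closed-form solution: they appear in the weighted centroids and in the weighted cross-covariance matrix.

import numpy as np

def weighted_align_svd(x, y, p):
    """Estimate R, t minimizing sum_n p_n ||R x_n + t - y_n||^2
    for Nx3 arrays x, y and a length-N weight vector p with p_n >= 0."""
    w = p / p.sum()                       # normalized weights
    x0 = w @ x                            # weighted centroid of x
    y0 = w @ y                            # weighted centroid of y
    a = x - x0                            # centered points
    b = y - y0
    H = (w[:, None] * a).T @ b            # weighted cross-covariance sum_n w_n a_n b_n^T
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against the reflection case
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = y0 - R @ x0
    return R, t

Uniform weights reduce this to the unweighted case; larger p_n pull the estimate toward those correspondences, which matches the reply above.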