Graph-based SLAM using Pose Graphs (Cyrill Stachniss)

COMMENTS • 66

  • @444haluk
    @444haluk 3 years ago +2

    You are literally saving my life with this content.

  • @CoinedBeatz
    @CoinedBeatz 4 years ago +8

    Great introduction to graph-based SLAM. Refreshing to see high-quality content on this topic accessible on YouTube. Highly appreciated!

  • @h2o11h2o
    @h2o11h2o 4 years ago +32

    The video and audio quality is crystal clear. Thank you for sharing this interesting knowledge.

    • @SohailSiadat
      @SohailSiadat 9 months ago

      Also the quality of the explanations

  • @SohailSiadat
    @SohailSiadat 8 months ago +1

    Thank you, Cyrill, for teaching this and providing it for free. Well explained, too.

  • @eyalfriedman5972
    @eyalfriedman5972 3 years ago +5

    Thank you, your lectures are amazing!

  • @dimitrihaas
    @dimitrihaas 2 years ago +1

    Best video on this topic. Thank you, Cyrill.

  • @dave4148
    @dave4148 3 years ago +1

    I had to stick with it until the end before things cleared up for me, but great lecture. Thanks!

  • @jeffrey-antony
    @jeffrey-antony 2 years ago

    Thanks for the high-quality content. I really respect the effort you have put into creating these lectures. Thank you once again.

  • @olegzatulovsky5324
    @olegzatulovsky5324 1 year ago +1

    Thanks for sharing your knowledge on Pose Graphs.

  • @aidankennedy6973
    @aidankennedy6973 3 years ago +12

    Thank you so much for continuing to share such high quality education. This is by far some of the best content on SLAM available.

  • @shravanshravan4402
    @shravanshravan4402 3 years ago +1

    Thank you for sharing this presentation.

  • @janzim4640
    @janzim4640 2 years ago +1

    Awesome videos! Thank you very much.

  • @geethanarayanan2896
    @geethanarayanan2896 3 months ago +1

    So much wisdom in these videos. Bedtime entertainment.

  • @zftan0709
    @zftan0709 4 years ago +1

    Awesome video! Thanks for all the great explanations. It would be great if you could talk about marginalization of the Hessian matrix in a sliding-window method.

  • @milingzhang6181
    @milingzhang6181 3 years ago

    Thank you, sir, for sharing such wonderful lectures.

  • @hobby_coding
    @hobby_coding 4 years ago +1

    Best regards from Algeria, thank you!

  • @GeorSala213
    @GeorSala213 3 years ago +2

    Prof. Cyrill, many thanks for uploading this wonderful set of videos. If I may, I would like to ask a few questions (responses from the audience are also much appreciated):
    a) I found it hard to understand the physical meaning of the matrix H, so the addition of 1 to H11 to fix x1 feels somewhat arbitrary to me. Although I am not sure, it might be possible to fix x1 by introducing a fictitious measurement such as e11 = z1 - x1, where z1 = 0 (to fix it to zero) and Ω11 is a very large number (infinity?), since we want to express high confidence in this measurement. However, by doing so we also add a b11 term (not only H11), which seems quite different from what is described in the video.
    b) My understanding is that the optimization algorithm used is Newton's method (Newton-Raphson), whereas in the slides it is called the Gauss-Newton method. Am I mistaken?
    c) I was trying to find this t2v Matlab function but did not really manage to - could anyone provide a link (or any other source of information) explaining what it does and how it does it? (See the sketch after this comment.)
    d) Regarding the mapping, does the vehicle store all the measurements and regenerate the map from scratch every time the optimization step is performed?
    Sorry for the rather long comment. Thank you.
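
Regarding part (c): t2v appears to be a small helper from the course's Octave/Matlab code that converts a 3x3 homogeneous transformation matrix into a pose vector (x, y, theta), with v2t as its inverse. Below is a minimal Python sketch of what such helpers typically do (an illustration only, not the course code; the SE(2) conventions are assumed). Part (a) is essentially the same issue raised by @jeffreydanowitz3083 further down; a numeric sketch of the gauge fix follows that thread.

    import numpy as np

    def v2t(p):
        # Pose vector (x, y, theta) -> 3x3 homogeneous transformation matrix.
        x, y, th = p
        c, s = np.cos(th), np.sin(th)
        return np.array([[c, -s, x],
                         [s,  c, y],
                         [0,  0, 1]])

    def t2v(T):
        # 3x3 homogeneous transformation matrix -> pose vector (x, y, theta).
        return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

    # Round trip: t2v undoes v2t.
    p = np.array([1.0, 2.0, 0.3])
    print(t2v(v2t(p)))  # [1.  2.  0.3]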

  • @ztyu-007
    @ztyu-007 4 years ago

    Thank you for making and sharing these videos. I can understand the measurement Z_ij^-1 in the error function, but I am curious how we get x_i^-1 x_j. Is it random, or deduced from loop closing?

  • @dorielar
    @dorielar 3 years ago

    Thanks for the great video and series! One question I have is related to the definition of the error function in the pose-graph setting: it is the norm of the residual rotation + translation, but it seems that specifying a covariance is a must, since rotation and translation have different units. Are there good examples/mechanisms for constructing the covariance?

  • @dealeras2143
    @dealeras2143 2 years ago +1

    Thank you for these amazing videos. Could I find, by any chance, the algorithm for ORB-SLAM?

  • @kanumarlapudisahithi5553
    @kanumarlapudisahithi5553 3 years ago

    Thanks for the video, but I have a small question. In the example you have given, you calculated the actual edge as the distance between two nodes. What should the criterion be for 2-D or 3-D nodes?

  • @Jahaniam
    @Jahaniam 4 years ago +10

    Thank you for making these videos publicly available. Awesome lectures. I was wondering if there is a way to access the homework too?

    • @CyrillStachniss
      @CyrillStachniss 4 years ago +5

      The homework assignments are part of the exam admission process in Bonn, and thus sharing them is a bit tricky.

  • @antonisvenianakis1047
    @antonisvenianakis1047 3 years ago

    Thank you.

  • @mirellamelo
    @mirellamelo 3 years ago

    Thanks so much for sharing.
    The illustration of the pose graph with two poses helps me understand this specific case. But this is related to the moment relocalization is detected, right? So the graph says you are at x, but the measurements of the loop closure say you should be around "omega". And the difference between these positions is the error, right? But in this case, due to the relocalization, I can know where I should be, but how can I find the error for the other, previous pose estimates?
    Thanks in advance!

  • @BrunoSantos-ov1sw
    @BrunoSantos-ov1sw 3 years ago +2

    This content is gold.

  • @akshayka9454
    @akshayka9454 2 years ago

    In the video at 29:50, there is an explanation as follows: "There is a difference between what the observation tells me and what the current graph configuration tells me."
    Where does this current graph configuration come from? How does one derive/calculate/arrive at it?

  • @edissonfabriciocanarortiz3487
    @edissonfabriciocanarortiz3487 4 years ago

    Excuse me, could you explain the H matrix and how it is built?

  • @ShubhamKumarpro1
    @ShubhamKumarpro1 2 years ago +1

    For the detailed mathematical steps, please see this lecture:
    ua-cam.com/video/FhwFyA0NQkE/v-deo.html

    • @SimmySimmy
      @SimmySimmy 2 years ago

      Thanks for your comment! This link helped me a lot :)

  • @ThibaultNeveu
    @ThibaultNeveu 3 years ago +1

    Thanks!!

  • @Jrang88
    @Jrang88 1 year ago

    Can you recommend how to update the map after optimizing the graph? Should one update the current node or all nodes in the graph? Thank you so much.

  • @emerydergro5332
    @emerydergro5332 5 months ago +1

    Awesome video!

  • @durandthibaud9445
    @durandthibaud9445 4 years ago

    Thank you for your course.
    As the error function always has the same form between two nodes, are the Jacobians always the same? And do they only take the values 1 or -1? It's hard to understand this mathematical abstraction.

    • @CyrillStachniss
      @CyrillStachniss 4 years ago +1

      No, they can vary for every measurement. And they are different from -1 and 1 in nearly all real-world cases (if the functions are non-linear).
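
To make this concrete, here is a small sketch (an illustration, not code from the course) that evaluates the Jacobian blocks of the standard SE(2) pose-graph error numerically at one configuration. The blocks A_ij and B_ij come out as full 3x3 matrices that depend on the current estimate, not just entries of -1 and 1:

    import numpy as np

    def v2t(p):
        # Pose vector (x, y, theta) -> 3x3 homogeneous transformation.
        c, s = np.cos(p[2]), np.sin(p[2])
        return np.array([[c, -s, p[0]], [s, c, p[1]], [0, 0, 1]])

    def t2v(T):
        # 3x3 homogeneous transformation -> pose vector (x, y, theta).
        return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

    def error(xi, xj, zij):
        # e_ij = t2v(Z_ij^-1 (X_i^-1 X_j)) for SE(2) poses.
        return t2v(np.linalg.inv(v2t(zij)) @ np.linalg.inv(v2t(xi)) @ v2t(xj))

    def numeric_jacobians(xi, xj, zij, eps=1e-6):
        # Central finite differences: A_ij = de/dx_i, B_ij = de/dx_j (each 3x3).
        A, B = np.zeros((3, 3)), np.zeros((3, 3))
        for k in range(3):
            d = np.zeros(3)
            d[k] = eps
            A[:, k] = (error(xi + d, xj, zij) - error(xi - d, xj, zij)) / (2 * eps)
            B[:, k] = (error(xi, xj + d, zij) - error(xi, xj - d, zij)) / (2 * eps)
        return A, B

    xi = np.array([0.0, 0.0, 0.1])    # current estimate of node i
    xj = np.array([1.0, 0.5, 0.4])    # current estimate of node j
    zij = np.array([0.9, 0.4, 0.3])   # measured relative pose
    A, B = numeric_jacobians(xi, xj, zij)
    print(A)  # depends on the linearization point
    print(B)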

  • @yousofebneddin7430
    @yousofebneddin7430 4 years ago +1

    Thanks for the video. Do you have extra resources about the information matrix: what it is and how to compute it?

    • @CyrillStachniss
      @CyrillStachniss 4 years ago +4

      The information matrix from the observation basically tells you how good (precise) your sensor is. Either the sensor has specs (precision of the depth reading for example) or it depends on how accurately you can determine the orientation to the object, which may relate directly to image/feature points that you extract.
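
As a small illustration of this answer (the numbers below are made up, not from any particular sensor): if the noise of one relative-pose measurement is roughly independent in x, y and theta, the information matrix is simply the inverse of that covariance, so a more precise sensor yields larger entries.

    import numpy as np

    # Assumed standard deviations of one edge measurement (hypothetical values):
    sigma_xy = 0.05       # metres, e.g. from the scan matcher / depth precision
    sigma_theta = 0.01    # radians, e.g. from how well orientation is recovered

    Sigma = np.diag([sigma_xy**2, sigma_xy**2, sigma_theta**2])  # covariance
    Omega = np.linalg.inv(Sigma)                                 # information matrix
    print(Omega)  # diag(400, 400, 10000): the angle is trusted more here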

  • @florianwirnshofer6814
    @florianwirnshofer6814 4 years ago

    Hi Cyrill! Great lecture! I have two minor questions: 1. Are there any conceivable benefits to using this method before the first loop closure has been achieved? 2. Assuming I have finished the mapping process, do you have a reference on how to use the pose graph for pure localization? Or does one usually just use the corrected map for things such as MCL?

    • @anascharoud4540
      @anascharoud4540 2 years ago

      I have thought a little bit about these questions and realised that:
      1. I still need to look into question 1.
      2. Pose graphs, MCL, KF, and the like are just correction tools for localization. The main part of localization is the motion tracking from state t-1 to t, which can be done with ICP, NDT, or any other motion-tracking (registration) algorithm on lidar scans or matched camera images. This is what I understand; please feel free to correct me if you have more information.

  • @asafdahan6811
    @asafdahan6811 4 years ago

    Hello, thanks for the great lectures! One question... how is it that e_{i,j} is not a matrix but a vector?

    • @CyrillStachniss
      @CyrillStachniss 4 years ago

      Because e_ij is the error vector (the difference between what you should measure and what you actually measured) and not a matrix.
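
A short sketch of that point for the SE(2) case (an illustration with the usual t2v/v2t helpers, not the course code): the edge error compares the relative transform predicted from the current node estimates with the measured one and flattens the result back into a 3-vector.

    import numpy as np

    def v2t(p):
        c, s = np.cos(p[2]), np.sin(p[2])
        return np.array([[c, -s, p[0]], [s, c, p[1]], [0, 0, 1]])

    def t2v(T):
        return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

    x_i = np.array([0.0, 0.0, 0.0])    # current estimate of node i
    x_j = np.array([1.0, 0.1, 0.05])   # current estimate of node j
    z_ij = np.array([1.0, 0.0, 0.0])   # measured relative pose (e.g. from scan matching)

    # Predicted relative transform from the graph vs. the measured one:
    e_ij = t2v(np.linalg.inv(v2t(z_ij)) @ np.linalg.inv(v2t(x_i)) @ v2t(x_j))
    print(e_ij.shape)  # (3,) -- a vector, not a matrix
    print(e_ij)        # small, because estimate and measurement almost agree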

  • @bankssurveyors
    @bankssurveyors 4 years ago +1

    Truly great content. I enjoy the "light bulb" moments with the new content 2.0, "5 Minutes with Cyrill". I can only imagine the learning model for content 3.0: maybe paid online access to seminars (a one-day class) for solving these equations in depth for each of the topics. Keep going!

  • @andreaspletschko8404
    @andreaspletschko8404 9 months ago

    Hey Cyrill, thanks for the great materials you post on UA-cam! I have a question regarding the slide "Building the linear system" (50:22) as well as the subsequent "Algorithm" slide (51:38). Do the indices i, j on the matrix H bar and vector b bar correspond to matrix/vector indices, or are these completely different matrices/vectors? And what exactly is the form of those H_ij bar and b_ij bar? As I understand it, A_ij and B_ij are matrices themselves, so H_ij bar is a matrix and b_ij bar a vector.
    Furthermore, I'd like to understand how exactly we obtain the matrix H and the vector b on the latter slide, given the matrices H_ij bar and vectors b_ij bar. Are these the same?
    I greatly appreciate your help!

    • @CyrillStachniss
      @CyrillStachniss 8 months ago

      Yes, i and j are indices into the matrices/vectors.
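
In case the indexing is still unclear, here is a minimal sketch of that accumulation step (an illustration with made-up block values, following the structure of the algorithm on the slide): the contributions written as H_ij bar and b_ij bar are 3x3 and 3x1 blocks of edge (i, j), added into the big H and b at the block rows and columns belonging to nodes i and j.

    import numpy as np

    n = 4                        # number of pose nodes, each 3-dimensional (x, y, theta)
    H = np.zeros((3 * n, 3 * n))
    b = np.zeros(3 * n)

    def add_edge(H, b, i, j, A, B, Omega, e):
        # Add the contribution of edge (i, j) into the full system.
        # A, B: 3x3 Jacobian blocks w.r.t. x_i and x_j; e: the 3-vector error.
        si, sj = slice(3 * i, 3 * i + 3), slice(3 * j, 3 * j + 3)
        H[si, si] += A.T @ Omega @ A
        H[si, sj] += A.T @ Omega @ B
        H[sj, si] += B.T @ Omega @ A
        H[sj, sj] += B.T @ Omega @ B
        b[si] += A.T @ Omega @ e
        b[sj] += B.T @ Omega @ e

    # Toy blocks just to show where the contributions land (not real Jacobians):
    A, B = -np.eye(3), np.eye(3)
    Omega = np.eye(3)
    e = np.array([0.1, 0.0, 0.0])
    add_edge(H, b, 0, 1, A, B, Omega, e)
    add_edge(H, b, 1, 2, A, B, Omega, e)

    # Which 3x3 blocks of H have been touched (block sparsity pattern):
    blocks = [[int(np.any(H[3*i:3*i+3, 3*j:3*j+3])) for j in range(n)] for i in range(n)]
    print(np.array(blocks))
    # After all edges: fix the gauge (e.g. H[0:3, 0:3] += np.eye(3)) and solve H dx = -b.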

  • @jeffreydanowitz3083
    @jeffreydanowitz3083 3 years ago +3

    Hi Prof. Stachniss. Another great video. You've taught me a lot. I do have one question. In the 1D problem, when you need to "fix" the first node x1 since everything is relative, H is clearly rank deficient, since it is the outer product of a 2D vector; its rank has to be 1. This, as you said, is due to the relative nature of the update. Then you add (Matlab notation) [1 0; 0 0] to H. Can you explain why this is equivalent to fixing the first node to some value? I see that indeed, in the end, delta_x = (0 1), so the first node does not change position. How did you know that adding [1 0; 0 0] to H achieves this? My only inclination is to say that for any [a; b], [1 0; 0 0] * [a; b] = [a; 0] (again in Matlab notation), so that "a" indeed remains the same. Is this the rationale? (A numeric sketch of this 1D case follows this thread.)
    Beyond that (and even including this issue), this is an amazing video, as are all your videos. I'm a huge fan! Thanks.

    • @abhishekgoel1687
      @abhishekgoel1687 2 years ago

      Hello Jeffrey, by any chance were you able to understand the reason for it?

    • @maciejtrzeciak9249
      @maciejtrzeciak9249 2 years ago

      Hi @Jeffrey! A great question! If you found an answer, please let us know.
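
A tiny numeric sketch of the 1D case discussed in this thread (an illustration; the sign conventions are one possible choice, not necessarily exactly the lecture's). With a single relative measurement, H has rank 1; adding 1 to H11 makes it invertible, and the resulting update leaves the first node where it is, which is exactly the delta_x = (0 1) behaviour mentioned above. One way to read the trick: it is the same as adding a fictitious prior measurement that anchors x1 at its current value with unit information; because that prior has zero error at the linearization point, it contributes nothing to b, so only the 1 in H11 remains.

    import numpy as np

    # One relative measurement z12 between two 1D nodes x1 and x2.
    x = np.array([0.0, 0.0])     # current estimate
    z12 = 1.0                    # measured offset x2 - x1
    Omega = 1.0                  # information of the measurement

    e = (x[1] - x[0]) - z12      # scalar error
    J = np.array([[-1.0, 1.0]])  # de/dx1, de/dx2 (constant: the model is linear)

    H = J.T @ (Omega * J)        # rank 1: outer product of [-1, 1]
    b = (J.T * Omega * e).flatten()
    print(np.linalg.matrix_rank(H))  # 1 -> singular, the gauge is free

    H_fixed = H.copy()
    H_fixed[0, 0] += 1.0         # fix the gauge: anchor node 1 at its current value
    dx = np.linalg.solve(H_fixed, -b)
    print(dx)                    # [0. 1.]: node 1 stays put, node 2 moves to satisfy z12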

  • @kevinr3798
    @kevinr3798 3 months ago

    Thank you for the video! I have one question. Suppose there is only one virtual measurement e_{i,j}. Given that the other poses do not appear in the error, how can those other positions be optimized, as shown at 1:01:27?

    • @CyrillStachniss
      @CyrillStachniss 3 months ago +1

      With only one relative measurement, you constrain only two poses relative to each other. You will not fix them globally. For that, you need an external reference or to set your initial pose to something arbitrary. Otherwise, your H matrix will have a rank deficiency (a gauge freedom).

  • @oldcowbb
    @oldcowbb 2 years ago

    So the main difference between graph-SLAM and EKF-SLAM is that graph-SLAM does not assume the Markov property and allows updating old estimates upon receiving new information?

    • @CyrillStachniss
      @CyrillStachniss 2 years ago +1

      No, not really. The LS solution can relinearize; the EKF cannot. That is the main difference in its basic form. Advanced LS (e.g. robust estimators) can, however, do more…

    • @oldcowbb
      @oldcowbb 2 years ago

      @CyrillStachniss So if the system is completely linear, the two methods would give us the same result?

  • @pavancherukuri2824
    @pavancherukuri2824 3 months ago

    Hello Prof., where can I find the homework assignments?

  • @oldcowbb
    @oldcowbb 2 years ago

    I wonder why they are called constraints if they are basically meant to be violated.

    • @CyrillStachniss
      @CyrillStachniss 2 years ago +1

      It comes from the notion of a soft constraint. It is not a hard constraint as in CSPs.

  • @rcr19940
    @rcr19940 2 years ago +1

    too much foreplay