Skip Lists

  • Published Jan 10, 2025

COMMENTS • 66

  • @orelyosef5060
    @orelyosef5060 2 years ago +18

    Fascinating data structure really, the combination of the benefits of the 2 most popular structures, an array and a linked list.

  • @Abhay.Bhandari
    @Abhay.Bhandari 3 years ago +15

    It's a great lecture. You have taught it so well, and no one else has taught it like this. Teaching is an art and you are a great artist.

  • @aldrinseanpereira140
    @aldrinseanpereira140 3 years ago +8

    I just found this channel by searching for amortised analysis for my college course... your style of presenting the subject with slides and the teaching is a life-saver! THANK YOU!!!
    I really hope you gain more views and subs so all students can benefit

  • @evasionlan
    @evasionlan 3 years ago +10

    Easy to understand and also deep enough; the quizzes are great too. Thank you!

    • @algorithmslab
      @algorithmslab  3 years ago +1

      Great to hear that the quizzes are appreciated too!

  • @axlrose5082
    @axlrose5082 2 months ago

    You're amazing! I finally understood everything involving this interesting DS. Thank you for explaining the intuition behind every property and behaviour!

  • @imumamaheswaran
    @imumamaheswaran 3 years ago +3

    I was reading about redis indexing and came across skip lists. Never heard of them before. Thanks for the beautiful explanation! Subscribed to the channel

    • @algorithmslab
      @algorithmslab  3 years ago +1

      Thanks! I wasn't aware that redis uses skip lists, that's cool. There are some interesting practical decisions in the redis code. E.g., instead of promoting nodes to the next level with probability 1/2, they use a smaller probability. The analysis stays the same, and asymptotically nothing changes, but this allows the space usage to be optimized further.
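      A rough sketch of how such a tunable promotion probability can look in code (Python; the value 0.25 and the cap of 32 levels are illustrative assumptions, not taken from the redis source):

        import random

        def random_level(p=0.25, max_level=32):
            # Promote to the next level with probability p. With p = 1/2 this is
            # the textbook skip list; a smaller p gives fewer pointers per node
            # on average, at the cost of somewhat longer searches.
            level = 1
            while level < max_level and random.random() < p:
                level += 1
            return level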

  • @cathynest459
    @cathynest459 2 years ago +1

    Saved me on my exam today!!! The explanation is amazing

    • @algorithmslab
      @algorithmslab  2 years ago

      Great to hear, thanks!

    • @cathynest459
      @cathynest459 2 years ago +1

      @@algorithmslab I have been watching your videos all day today! (It's 2 pm in Korea.) It's amazing and I wish I had found this channel earlier! Please do not stop!

  • @spartanzarazua117
    @spartanzarazua117 3 years ago +1

    Thanks so much, a very good explanation, and with animation too. These are 15 minutes well invested.
    A new subscriber.

  • @timyturner2943
    @timyturner2943 2 years ago +1

    Thank you very much, Mr. Buchin. Incredible video and presentation

  • @JayCina
    @JayCina 2 years ago +1

    The most lovely explanation!

  • @williamikennanwosu
    @williamikennanwosu 3 years ago +2

    Nice and clear video, great explanation!

  • @TheAdaya
    @TheAdaya 3 years ago +1

    Great video!! It really helped me study for a test

  • @retiksingh9123
    @retiksingh9123 3 years ago +3

    Fantastic work man!! Absolutely spot on. Exactly what I needed

  • @ayeyo4081
    @ayeyo4081 2 years ago

    6:55 looks like it was actually heads but you just decided to say it was tails for the sake of the lesson 😂
    7:12 not even looking at the coin anymore 😂😂
    Thanks for the great video!

    • @algorithmslab
      @algorithmslab  2 years ago +1

      Finally, someone appreciating my coin-tossing skills 😄 Of course, I did it all in one take 😁... nearly 🙃

  • @empireempire3545
    @empireempire3545 2 years ago +1

    I absolutely love this, never heard of them and this looks great : O

  • @vctorroferz
    @vctorroferz 5 months ago

    Amazing video, really good explanation! Thanks so much

  • @manuelpagliuca
    @manuelpagliuca 3 years ago +1

    Thank you for this video, really good explanation

    • @manuelpagliuca
      @manuelpagliuca 3 years ago

      Would be even more awesome if the space complexity was treated as well!

  • @Jarreddesigns
    @Jarreddesigns 3 years ago

    Thank you for this video, cleared up all my confusion

  • @zakikhurshid3843
    @zakikhurshid3843 1 year ago

    Great explanation. Thank you so much.

    • @algorithmslab
      @algorithmslab  1 year ago

      Thanks for the feedback. Happy to hear that it's helpful.

  • @thaynaemillycavalcantesant3687

    Great lecture. Thank you!

  • @bradwang3648
    @bradwang3648 2 years ago +1

    Very helpful. Thanks!!!

  • @rummanchowdhury3807
    @rummanchowdhury3807 2 years ago

    Fine explanation!

  • @offswitcher3159
    @offswitcher3159 9 months ago

    Great video!

  • @romangavrilovich8453
    @romangavrilovich8453 5 months ago

    Hi, thanks for the explanation, but I didn't get the difference between the perfect and the randomized list for deletion. What is the difference between the deletion of 12 in the first implementation and the deletion of 10 in the second? Thanks in advance

    • @algorithmslab
      @algorithmslab  5 months ago

      In the first implementation, if we wanted to delete the 12, the whole second half of the list would need to change. The 13 would need to take over the role of the 12 with 4 levels, while the 15 would now only have 1 level instead of 2, and so on. But if it is a randomized skip list, deleting the 12 is easy. All the references into the 12 just need to continue to the next nodes as seen from the 12; however, none of the nodes needs to change its number of levels.
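      A minimal sketch of that deletion in Python (the node layout with one forward pointer per level and a head sentinel is an assumption for illustration, not the notation from the video):

        class Node:
            def __init__(self, key, levels):
                self.key = key
                self.forward = [None] * levels  # one successor pointer per level

        def delete(head, key, max_level):
            # Find, on every level, the last node before the target.
            update = [head] * max_level
            node = head
            for lvl in range(max_level - 1, -1, -1):
                while node.forward[lvl] is not None and node.forward[lvl].key < key:
                    node = node.forward[lvl]
                update[lvl] = node
            target = node.forward[0]
            if target is None or target.key != key:
                return False  # key not present
            # Bypass the target on every level it appears in; no surviving
            # node changes its number of levels.
            for lvl in range(len(target.forward)):
                update[lvl].forward[lvl] = target.forward[lvl]
            return True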

    • @romangavrilovich8453
      @romangavrilovich8453 5 months ago

      @@algorithmslab oh thanks, the change of levels is the key to understanding the difference :)

  • @Mannershark
    @Mannershark 1 year ago

    Hey, that's a familiar face! I remember you from algorithms at TU/e. Good to see you have your own research group now

  • @yoandmario
    @yoandmario 1 year ago

    Would you mind providing an example of a geometric data structure that uses skip lists? I want to implement a skip list as my final project for my data structures class and I'm curious how I could implement it further. Great lecture!

    • @algorithmslab
      @algorithmslab  1 year ago

      Let me put the following disclaimer first: the data structures that I had in mind don't "use" skip lists. Rather, they use the same concepts as skip lists, transferred to the geometric setting.
      An early paper on generalizing the ideas of skip lists to a multi-dimensional setting is "Randomized Multidimensional Search Trees: Dynamic Sampling" by Mulmuley (dl.acm.org/doi/pdf/10.1145/109648.109662). This paper is also a good starting point for finding more examples (by looking at who cites it).
      Specific data structures that I had in mind when I made the video were the following:
      1.) Another example is the skip quadtree. Citing from the abstract of the paper: "We present a new multi-dimensional data structure, which we call the skip quadtree (for point data in R^2) or the skip octree (for point data in R^d, with constant d > 2). Our data structure combines the best features of two well-known data structures, in that it has the well-defined “box”-shaped regions of region quadtrees and the logarithmic-height search and update hierarchical structure of skip lists."
      2.) There is the Delaunay hierarchy by Olivier Devillers. It does not use a skip list, but the concepts that are used for a skip list. Let me cite from the CGAL manual, because I think this shows the parallels nicely: "The data structure is a hierarchy of triangulations. The triangulation at the lowest level is the original triangulation where operations and point location are to be performed. Then at each succeeding level, the data structure stores a triangulation of a small random sample of the vertices of the triangulation at the preceding level. Point location is done through a top down nearest neighbor query. The nearest neighbor query is first performed naively in the top level triangulation. Then, at each following level, the nearest neighbor at that level is found through a linear walk performed from the nearest neighbor found at the preceding level. Because the number of vertices in each triangulation is only a small fraction of the number of vertices of the preceding triangulation, the data structure remains small and achieves fast point location queries on real data." In 1D this would essentially boil down to a skip list, and the probabilistic part of the analysis of the data structure is very similar to the analysis of skip lists.
      Those were the examples that I had in mind when I gave the lecture, but I think I can come up with some more, e.g., closest pair/nearest neighbor queries; see dl.acm.org/doi/pdf/10.1145/177424.177609 and references therein. There are also interval skip lists, though one could argue about how geometric they are.
      I hope this is useful, even if these are not direct applications of skip lists. Success with your project!

    • @yoandmario
      @yoandmario 1 year ago

      This is very useful and I really appreciate your response, cheers! @@algorithmslab

  • @alxx736
    @alxx736 2 years ago

    Hey! Can you make a video on unrolled linked lists? There are few places on the internet talking about them, but a lot of misinformation, and it's not very easy to understand. Thanks in advance!

    • @algorithmslab
      @algorithmslab  2 years ago +1

      Thanks for the suggestion! I think it's a great idea. However, I unfortunately know that I won't get around to it anytime soon, because I already have a backlog of videos that I'd like to make but need to find the time for. But I will for sure put it on my wishlist.

  • @SpiritVector
    @SpiritVector 1 year ago

    Couldn't you prepare the data ahead of time so that it's more fixed, though?

    • @algorithmslab
      @algorithmslab  1 year ago

      If you have a fixed data set and want to search on that data set, then you don't need a dynamic data structure, but could use something like a sorted array. However, part of the idea of dynamic data structures is that your data structure may change and you don't (need to) know in advance how it would change. Does that answer your question?

    • @SpiritVector
      @SpiritVector 1 year ago +1

      @@algorithmslab Yeah, it does.

  • @Abhay.Bhandari
    @Abhay.Bhandari 3 years ago

    Have you taught the code for skip lists?

    • @algorithmslab
      @algorithmslab  3 years ago +1

      No, in my lecture this is an "extra topic", so I am only giving the short introduction that you see in the video. There are some online resources which nicely discuss how to implement skip lists. Those are actually interesting to read, since a simple skip list implementation is typically easily outperformed. There is a nice blog post ticki.github.io/blog/skip-lists-done-right/ about this. Also, existing implementations may come with an extensive discussion, e.g., skiplist.readthedocs.io/en/latest/. As was noted in a previous comment, redis uses skip lists, so it is also fun to see what optimizations are done there: github.com/redis/redis/blob/unstable/src/t_zset.c . Finally, there is also current research on implementing skip lists, typically with the objective of optimized cache usage. A recent such paper is www.vldb.org/pvldb/vol12/p2183-zhang.pdf . But back to the question: if I were to add something about this to my lecture, it would probably only be at the level of the first blog post.
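      For reference, a bare-bones sketch at roughly that level, in Python (a simple textbook variant with my own naming, not an optimized implementation):

        import random

        class SkipNode:
            def __init__(self, key, levels):
                self.key = key
                self.forward = [None] * levels  # one successor pointer per level

        class SkipList:
            MAX_LEVEL = 32
            P = 0.5  # promotion probability, as in the video

            def __init__(self):
                self.head = SkipNode(None, self.MAX_LEVEL)  # sentinel
                self.level = 1

            def _random_level(self):
                lvl = 1
                while lvl < self.MAX_LEVEL and random.random() < self.P:
                    lvl += 1
                return lvl

            def search(self, key):
                node = self.head
                for lvl in range(self.level - 1, -1, -1):
                    while node.forward[lvl] is not None and node.forward[lvl].key < key:
                        node = node.forward[lvl]
                node = node.forward[0]
                return node is not None and node.key == key

            def insert(self, key):
                update = [self.head] * self.MAX_LEVEL
                node = self.head
                for lvl in range(self.level - 1, -1, -1):
                    while node.forward[lvl] is not None and node.forward[lvl].key < key:
                        node = node.forward[lvl]
                    update[lvl] = node
                lvl = self._random_level()
                self.level = max(self.level, lvl)
                new = SkipNode(key, lvl)
                for i in range(lvl):
                    new.forward[i] = update[i].forward[i]
                    update[i].forward[i] = new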

    • @Abhay.Bhandari
      @Abhay.Bhandari 3 years ago +2

      @@algorithmslab I really appreciate your knowledge and work. You are a great mentor and are helping us in every aspect. I have seen some of your videos on Algorithms and Data Structures, and they are awesome!! I really like the way you teach. I hope you will have more than 10M subscribers.
      Thank you so much, sir, for your valuable guidance.
      Lots of ❤ from India!

  • @benjaminpuebla1510
    @benjaminpuebla1510 2 years ago +1

    The coin thing has to be a joke, bro, what do you mean log n?

    • @algorithmslab
      @algorithmslab  2 years ago

      I hope I am understanding the question correctly. The main question seems to be about the logarithmic term log n. Intuitively, this comes from the following. All elements are in the lowest level (level 0). Each element is with probability 1/2 in the next level (level 1). As a consequence we expect half of the elements, i.e., n/2 elements, in level 1. Of those roughly n/2 elements, each again goes to the next level with probability 1/2. So we have 1/2 of n/2, i.e., n/4 elements at level 2 in expectation. This continues, so we have the sequence n, n/2, n/4, n/8, n/16, and so on, until no elements are left. With each level, the expected number of elements is divided by 2. Let's say we don't need extra levels when the expected number of elements is 1. To know how long the sequence n, n/2, n/4, ... is, we need to know how often we can divide n by 2 until we hit 1. But this is exactly log(n), up to rounding.
      This is not the whole story though. To keep things simple, in my video I stopped when the expected number of elements was 1. This is enough to analyze skip lists. But more commonly the expected number of levels is analyzed, or, more specifically, an analysis with high probability is performed. For more details on this, I would recommend Dave Mount's lecture notes, which you can find on the corresponding course page: www.cs.umd.edu/class/fall2022/cmsc420-0201/lectures.html
      The coin flips symbolize randomization, more specifically a coin flip gives you a random bit, but yes, the actual coin flipping in the video was simply intended to make the randomization more tangible.
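      The same argument as a worked formula (expected values only, matching the reply above):

        \mathbb{E}[\#\{\text{elements at level } i\}] = \frac{n}{2^{i}},
        \qquad
        \frac{n}{2^{i}} \le 1 \iff i \ge \log_2 n,

      so about log_2(n) levels are expected before the list runs out of elements.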

  • @spoomer94
    @spoomer94 1 year ago

    5:27 Why are you using big Theta symbols while talking about the worst-case scenario?
    Use the big O symbol, no?

    • @algorithmslab
      @algorithmslab  1 year ago

      Worst-case running time vs. O/Omega/Theta is something that needs a bit of elaboration.
      Worst-case running time, as used in algorithm analysis, is a function of n. Here we simply measure the "steps" to the right and downwards, because we can do each such step in constant time. O/Omega/Theta is used to analyze the asymptotic behaviour of functions; they give an upper/lower/tight bound. For the worst-case running time we are typically most interested in an upper bound, i.e., O(). But in general we are looking for an upper bound that ideally is tight (i.e., an upper and a lower bound); if we have such a bound, using Theta() makes a stronger statement.
      In the answer to the quiz, strictly speaking, I only argue the upper bound, i.e., O(log n). But it is quite obvious that search always takes at least log n steps downwards, thus, we also have an Omega(log n) bound on the worst-case running time. Combining those two gives us Theta(log n).
      Now back to the quiz: Why didn't I simply use O()? The problem is that B,C, and D then all would be correct answers. Since search takes O(log n) steps, it is certainly also true that it takes O(sqrt(n)) steps and also O(n) steps, since by O() we are only stating upper bounds. Therefore, to design the quiz such that it is meaningful and correct, I have to use Theta here.
      Clear?
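      In symbols, the point about the answer options (with sqrt(n) and n standing in for the other quiz options, as in the reply):

        \log n \in O(\log n) \subseteq O(\sqrt{n}) \subseteq O(n),
        \qquad\text{but}\qquad
        \log n \in \Theta(\log n), \quad \log n \notin \Theta(\sqrt{n}), \quad \log n \notin \Theta(n).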

  • @luca-ik2bo
    @luca-ik2bo 2 months ago

    amazing

  • @Muhammed-zn7ft
    @Muhammed-zn7ft 3 years ago

    Thanks

  • @زهراءرسنرداد
    @زهراءرسنرداد 2 years ago

    I need the chapter please 🥺

    • @algorithmslab
      @algorithmslab  2 years ago

      That's actually a great question/remark. As far as I know, skip lists are not covered in the CLRS book. They are covered in some other textbooks and lecture notes. One resource that I like to recommend is Dave Mount's lecture notes. You can find them here www.cs.umd.edu/~mount/420/Lects/420lects.pdf (Lecture 11) and newer/more detailed ones here: www.cs.umd.edu/class/fall2022/cmsc420-0201/lectures.html . In my video I was aiming at a light introduction, and those resources are great for a deeper dive.

  • @jakoon_the_manic
    @jakoon_the_manic 6 months ago

    Cheers

  • @YouyuanKe
    @YouyuanKe 2 years ago

    The probability of getting heads ain't 1/2 for me but sure.

    • @algorithmslab
      @algorithmslab  2 years ago

      Interesting point 😁 Computers have a similar problem: they can't really generate random numbers. Thus, skip lists require a good pseudo-random number generator. en.wikipedia.org/wiki/Pseudorandom_number_generator

  • @rafailpapoulias4237
    @rafailpapoulias4237 1 year ago

    Can you please explain how you calculate that E equals 2???

    • @algorithmslab
      @algorithmslab  1 year ago

      The argument that I make in the video is that this is a geometric distribution with p = 1/2. See en.wikipedia.org/wiki/Geometric_distribution#Expected_value_of_X for the calculation.
      Essentially, we "succeed" in the first step with probability 1/2. We fail with probability 1/2, and in that case the situation is like before the first coin flip, just that we already did a flip. Therefore E = 1/2*1 + 1/2*(1+E). This solves to E=2.
      You can also calculate E more directly. Then you get E = 1*1/2 + 2*1/2^2 + 3*1/2^3 + .... There are various, not too complicated ways to see that this sum equals 2, but the argument above is simpler.
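      Solving that recurrence step by step:

        E = \tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\,(1+E) = 1 + \tfrac{E}{2}
        \;\Longrightarrow\; \tfrac{E}{2} = 1
        \;\Longrightarrow\; E = 2,

      which matches the direct sum 1*1/2 + 2*1/2^2 + 3*1/2^3 + ... = 2.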

  • @tino_
    @tino_ 1 year ago

    very well explained, thank you so much!