Sorting Secret - Computerphile

  • Published 1 Feb 2025

COMMENTS • 429

  • @arirahikkala 8 years ago +377

    The missing technical term: This construction is a *sorting network*, just drawn in an unusual way. The basic point of the video is that sorting networks can't express the difference between insertion sort and selection sort.

    • @ancbi 8 years ago +12

      Exactly. I tried to point out an example of the difference in best-case time, but what you said captures the whole deal.

    • @nG27227 8 years ago +29

      Ari Rahikkala this is probably the best comment I've seen that clarifies what's in the video. It seems a decent way to visualize and intuitively see how sorting algorithms work, but I don't think it's accurate to draw conclusions of "sameness" based on these diagrams.

    • @YammoYammamoto 6 years ago +7

      Sorry for necro - but I was nearly exploding with frustration at the claim that insertion sort is the same thing as selection sort. Sure - they're both sorting stuff. But at that level of abstraction one might as well resign to saying it's "doing computery stuffies".

    • @VanWarren 4 years ago +16

      Not missing: "Sorting network" is introduced at 3:06.

    • @IllevensKO 4 years ago

      Ivan the space biker ! The right man in the wrong place can make all the difference... in the world.
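
The point at the top of this thread can be made concrete: a sorting network is just a fixed list of compare-exchange operations, and the "selection-like" and "insertion-like" readings of the video's triangle are the same comparator multiset executed in different orders. A minimal Python sketch (the names and schedule generators are my own, not from the video):

```python
def compare_exchange(a, i, j):
    # Swap so the smaller value ends up at the lower index.
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def bubble_schedule(n):
    # "Column-wise" reading: pass k pushes the largest remaining
    # element toward the end (bubble/selection-style passes).
    return [(i, i + 1) for k in range(n - 1) for i in range(n - 1 - k)]

def insertion_schedule(n):
    # "Row-wise" reading: element k is walked left into the sorted prefix.
    return [(i - 1, i) for k in range(1, n) for i in range(k, 0, -1)]

def run_network(schedule, data):
    a = list(data)
    for i, j in schedule:
        compare_exchange(a, i, j)
    return a
```

Both schedules contain exactly the same comparators; only the execution order differs, which is precisely why the network view cannot tell the two algorithms apart.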

  • @陈瀚龙 6 years ago +7

    This video made my day. I was totally burnt, and now I'm in the mood to study on my whiteboard again. Thanks Professor.

  • @donperegrine922 8 years ago +337

    Indeed; Number two pops out the bottom.....

  • @playingwithdata 8 years ago +153

    It's an interesting exercise that I'm sure will help people visualise these sorting algorithms and understand how they are similar. However to say they are "the same" is pushing it when you're expressly showing that one is operating 'column-wise' and one 'row-wise' in your diagram. The next step here would surely be to get students to dig into this difference and analyse the data structures and operations involved in each approach and the complexity and practical performance implications.

    • @javiergimenez40 8 years ago +23

      From a mathematical standpoint they are the same.
      HOWEVER, because of how memory is managed in a computer (both on write and read), results may vary.
      Bubble sort also falls into this class of algorithms.

    • @JxH 8 years ago +1

      Yep, mwtbones you're correct. Same basic mechanism, but the clocking is different. If this was built in actual hardware, then the clock lines would have to be transposed, rewiring the clock signals between the rows and columns. Still, it's an interesting observation. I wonder if there's a diagonal clocking interpretation?

    • @TechyBen 8 years ago +4

      The sorting is the same... the perspective is not. That is, mechanically they could be opposites (one could be more computational intensive one could be more memory intensive) but they are the "same" when swapped back over to each other.

    • @brodaclop 8 years ago +12

      Or to put it another way: statically they are the same. Once run to completion, they execute exactly the same comparisons on exactly the same input.
      Dynamically however they aren't the same because they do so in different order.
      This is very similar to the connection between breadth-first traversal vs. depth-first traversal of a tree. They visit exactly the same nodes and the same edges, which hints at some kind of underlying structural connection between the two, but dynamically they're clearly different.
      To see the difference between the two algorithms, you only need to remove the "once run to completion" clause. What if we ask for just the three smallest numbers instead of a full sort? Selection sort will obviously behave very differently (and much better) than insertion sort.

    • @Joviex 8 years ago +5

      Except this is not called Mathematicsphile. Saying they are the same while under the label of Computer oriented information is not truthful to the actual topic when someone goes to implement one over the other thinking they are equivalent.

  • @al-du6lb 3 years ago +10

    This is a great illustration. I've always thought about how programming is almost like a stream of liquid flowing, and in this you can really visualize that concept.

  • @viniciusalbuquerque8030 7 years ago +5

    This guy is an amazing teacher. I wish I could find his lectures online. We don't have much of that in Brazil.

  • @deepjoshi356 8 years ago +96

    I know some basic definitions of all 4 algorithms named:
    insertion sort: find the correct place for the element
    selection sort: find the correct element for the place
    &
    merge and quick sort use divide and conquer:
    merge sort divides trivially and does the real work when merging; quick sort merges trivially and does the real work when dividing
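
The two one-line definitions above ("correct place for the element" vs. "correct element for the place") map directly onto code. An illustrative Python sketch (my own naming, not the video's code):

```python
def selection_sort(a):
    a = list(a)
    for place in range(len(a)):
        # Find the correct element for this place: the minimum of the rest.
        smallest = min(range(place, len(a)), key=a.__getitem__)
        a[place], a[smallest] = a[smallest], a[place]
    return a

def insertion_sort(a):
    a = list(a)
    for k in range(1, len(a)):
        # Find the correct place for element a[k] in the sorted prefix.
        x = a.pop(k)
        j = k
        while j > 0 and a[j - 1] > x:
            j -= 1
        a.insert(j, x)
    return a
```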

  • @sagebshaw 8 years ago

    The picture is kind of neat. Saying the sorting algorithms are the same is really just playing with words. I can argue that all sorting algorithms are the same because they output a sorted array. I can see it being helpful when one first learns about these two algorithms to compare them in this way. Ultimately helpful. Good video.

  • @byteeater7662 4 years ago +7

    The first one is really bubblesort (the simplest, unoptimized variant). See how beside picking 1 in the first column it also swaps 2 with 3. Selection sort, unlike this bubblesort and insertion sort (also unoptimized because it keeps comparing in the tail after having found the right place for an element), isn't oblivious (which means performing a sequence of comparisons at some pairs of positions depending only on the length of the input, not the results of previous comparisons) and thus cannot be expressed with a sorting network.

  • @iiN1GH7M4R3ii 6 years ago

    How did this not make the trending list??? Thank you so much this is what I needed!!!!!

  • @watchphysics 8 years ago

    One of the best things I have watched in a long time related to Computer Science. Nice perspective.

  • @sabriath 8 years ago +21

    This is correct on half of a fundamental position....but not on the programming position. Selection sort specifically finds the lowest in the muck, removes it and tags it onto a list; while insertion sort picks a number from the muck and finds the place it needs to be put in the list. This means that selection sort works with any list type, while insertion sort can only effectively work on doubly-linked lists (otherwise you are constantly moving blocks of memory as the list expands).
    The other half of the fundamental position that proves that it is NOT the same is simple.....what if halfway through the sort, the program stops? Selection sort would leave some of the data sorted and a leftover muck to deal with, while insertion sort might as well look like 2 mucks. This becomes somewhat important in file systems and indexing....if the power goes out during a sort, Selection is better than Insertion for recovery of "where you left off."
    It's also important to note that Selection sort gets faster as the muck reduces over time....while Insertion is fastest at the beginning and gets slower over time.
    Absolutely none of this was mentioned in the video, nor did it seem like Hutton was even going to hint toward it (he stated he was going to write a paper on it being the same, which it obviously isn't when taken in full context).
    Sidenote: Any relation to Tim J. Hutton?

    • @TylerCrompton 8 years ago +6

      Err, insert sort works well on singly-linked lists as well. You just have to keep two extra references (to the nodes preceding either cursor).

    • @Minastir1 8 years ago +1

      Well the picture actually illustrates the change in sorting speed over time pretty well

    • @ThisNameIsBanned 8 years ago +8

      Your problems are regarding the "implementation"; the types themselves use the same model of steps in this clip here.
      Regarding your "power out" problem, you wouldn't write your not-completely-sorted array to the hard drive; you work with it in working memory until it's sorted if you want to avoid any problem regarding that (and in any scenario where it's really important to "finish" the sorting, you would use some kind of transaction that makes sure it's finished before it's used anywhere else).
      ----
      Implementation problems are indeed important, no question, but this clip doesn't take that into account, as it focuses on the "model" of the sorting types and compares the step-by-step approach in the shown visual diagram.

    • @5nefarious 8 years ago +1

      Also, isn't insertion sort much faster on nearly sorted lists?

    • @TylerCrompton 8 years ago

      No, you're not, because you don't have to update pointers to the previous node.

  • @schulmastery 8 years ago +54

    The strongest selling point of insertion sort is that it is adaptive. Once the insertion point is determined, the pass can be aborted, while selection sort must look for a min/max across the entire unsorted region. In the vid, he implements an insertion sort that is gimped in its adaptability, and then says it's the same as an algorithm that is notoriously not adaptive. Really hammering this point, his machine would take n^2 time to sort a sorted list, just as selection sort would; but insertion sort implemented properly only takes n time. I can't take the engine out of a car, and then comment on how similar it is to a bicycle.

    • @salerio61 8 years ago +5

      Insertion sort has O(n^2) like selection Sort. What kind of sort has O(n)? Even QuickSort isn't that fast.

    • @ZacharyMathematica 8 years ago +11

      O(n^2) worst case but O(n) best case for insertion. Whereas worst case for selection is always O(n^2). Better to play the odds that might be slightly faster than the odds when you know it's going to be awful.

    • @schulmastery 8 years ago +13

      You're certainly correct on the BigO, but I'd be remiss if I didn't stick up for my boy, Selection Sort. Yes it is guaranteed to run in n^2 regardless of input. However, it is among the most conservative of any sorting algorithm with regards to writes, especially in the worst case. In such a case, SS only has to perform n-1 swaps, which can be very advantageous if write speed is dramatically lower than read speed, or endurance of the write-to medium is of greater concern than run-time.
      To that end, this demonstrates another weakness in the above video. His "finding the max/min" is in reality a bubbling of said max/min to the correct position, NOT a swap to a searched-for position, which is how selection sort actually works. I think he actually said the words "trickles down" which is textbook bubble sort.

    • @xwtek3505 6 years ago +1

      schulmaster There is cycle sort.

    • @maxithewoowoo 6 years ago +1

      @schulmaster Any algorithm can be made adaptive. Any algorithm can be optimized. In fact, I found a method to optimize selection sort to achieve O(n) best case, a bit more complicated but I can write it out in another comment if you wish. It uses a few extra pointers and requires the input list to be reverse sorted (for best case) but it still finds the minimum during each pass.
      Point is, any algorithm can have optimizations like the one you pointed out. Your modification to insertion sort is still a modification. It just feels like a small modification because of your choice of programming language and CPU architecture. In some programming languages, aborting passes is not easy. Ultimately, which optimizations are "simple" and which are "complex" is arbitrary. But this video isn't about optimizations. This video is simply showing that, given the purely mathematical formulation of sorting given by the professor, insertion sort and selection sort are identical. If you want to try adding in optimizations (which ultimately starts factoring in your programming language and architecture), then it's a different argument entirely.

  • @KubrickFR 8 years ago +62

    I didn't care for the point he makes but I really like the "boxes in a triangle" explanation

    • @27klickslegend 8 years ago +7

      I cared for the point AND the "boxes in a triangle" explanation!

    • @DrEvil-uw1ju 8 years ago +3

      Then you watch these for entertainment. That's not very effective.

  • @robertbrummayer4908 3 years ago

    Your videos are excellent. You explain computer science concepts so well. Great job!

  • @DjVortex-w 8 years ago +1

    Actually what is demonstrated here is bubble sort (which is the worst O(n^2) sorting algorithm in existence). It works _exactly_ as bubble sort, and performs _exactly_ as many comparisons (n^2/2) and swaps, and the order of the elements after each step is exactly as in bubble sort.
    Insertion sort, in most cases, performs fewer comparisons and swaps than bubble sort (because when you are moving an element towards the beginning of the array, you can stop when it's in its correct place, and don't need to keep going). The worst-case scenario is when the array is in exact reverse order, in which case insertion sort performs exactly as many steps as bubble sort. (It essentially does the same thing in this case, just in a different order.)

  • @dashohoxha 8 years ago +1

    This is an (abstract) sorting machine that actually does the sorting in linear time. When we view its operation vertically it resembles or reminds us of selection sort, when we view it horizontally it looks like insertion sort. However it is neither selection nor insertion sort.
    The sorting algorithms that we know and speak about (insertion, selection, etc.) are defined assuming a Turing machine (and Von-Neumann architecture, which is a Turing machine implementation). And they are not the same because they have different properties.
    But the underlying assumption that the sorting algorithms are defined for the Turing machine often goes unstated; teachers don't point it out and students don't know about it. This assumption is the real secret of the sorting algorithms (in the sense that people usually don't know about it).

    • @jonaskoelker 6 years ago +1

      > This is an (abstract) sorting machine that actually does the sorting in linear time.
      It takes time that's linear in the size of the triangle (assuming you execute the triangles one at a time), but for n inputs you need a triangle of size Θ(n²), specifically n(n-1)/2.
      For comparison-based sorting you can't do faster than O(n log n). Proof(ish): you can order n elements in n factorial (n!) different ways. A comparison gives you a one-bit output, so k comparisons can distinguish 2^k different orderings; said another way, to distinguish m orderings, you need at least log m comparisons. Letting m = n! we see that you need log(n!) = log(1 * ... * n) = log(1) + ... + log(n). Half of the numbers in 1..n are at least n/2 and the other half are at least 2 (except for 1), so we get log(n!) >= (n/2) * log(2) + (n/2) * log(n/2) - 1 = (n/2) * (log(2) + log(n) - log(2)) - 1 = (n/2) * log(n) - 1 which is Ω(n log n).
      (The logs are all base 2. If you select n such that n/2 > 4, i.e. such that 4 occurs in the small-numbers half, you can adjust the estimate to have a term of log(2*2) + (n/2 - 1)*log(2) = log(2) + (n/2)*log(2) = 1 + (n/2)*log(2). The extra 1 cancels out the "- 1" term and you get something which is aesthetically pleasing and easily seen to be Ω(n log n) without having to remember the exact rule whereby you can remove negative constant terms. Similarly if n is odd and 5 is in the smaller half, log(5) > log(4) = log(2) + log(2) and so you get a free "extra term" to compensate for splitting into "uneven halves" where the big-numbers half has one more element than the small-numbers half.)
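
The counting argument above, in compact form:

```latex
% k comparisons distinguish at most 2^k orderings; a sort must distinguish n! of them:
2^k \ge n! \quad\Longrightarrow\quad k \ge \log_2(n!),
\qquad
\log_2(n!) = \sum_{i=1}^{n} \log_2 i \;\ge\; \frac{n}{2}\log_2\frac{n}{2} \;=\; \Omega(n \log n).
```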

  • @praf2 8 years ago +1

    i love this channel! i feel lucky i found it xD a lot of interesting stuff put in simple words and explanations.

  • @AlphasysNl 8 years ago +14

    When seen this way the two may seem equivalent, but when programmed, insertion sort can be optimised a bit to make it faster, since one can stop comparing once the location is found. This can not be done with selection sort.

    • @andrasbiro3007 8 years ago

      Yes, and also there are other possible improvements, like binary search. But the insertion process is still slow, so it doesn't make much difference.

    • @advanceringnewholder 3 years ago

      i know just a little about programming but when I see insertion sort, I think to myself, huh, if I have to pick an algorithm, insertion sort is probably the one that I can implement more easily and more optimally

    • @simonmultiverse6349 3 years ago +1

      @@advanceringnewholder Nah, pick a random pair and swap them if they are in the wrong order. Repeat if necessary. After a very long process (upper limit: infinity) the entire set of items will be in order.

    • @advanceringnewholder 3 years ago

      @@simonmultiverse6349 it's kinda like bogosort
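
The "pick a random pair, swap if out of order" procedure described above (a cousin of bogosort, sometimes called bozosort) does terminate with probability 1, because every accepted swap removes at least one inversion. A sketch with names of my own choosing:

```python
import random

def random_swap_sort(a, rng=random):
    # Repeatedly pick a random pair and swap it if out of order.
    # Each accepted swap strictly reduces the number of inversions,
    # so this finishes eventually - but the expected time is terrible.
    a = list(a)
    while any(a[i] > a[i + 1] for i in range(len(a) - 1)):
        i, j = sorted(rng.sample(range(len(a)), 2))
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]
    return a
```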

  • @jojojorisjhjosef 8 years ago +1

    This could be an entire field of science, the way you represent a fundamental law changes the law.

  • @luisoncpp 8 years ago

    Many sorting algorithms can be seen as the same if you see them in certain way.
    For example:
    Selection sort chooses the smallest of the unsorted elements and put it at the beginning, repeat until everything is sorted.
    If instead of choosing the smallest number you choose the largest number and put it at the ending, it's the same algorithm, just with another comparison function and with other container to store the elements.
    Now, if you use a heap to choose the largest element between the unordered ones, you have heapsort, but the basic idea is the same:
    Take the "best" one of the unordered set, and add it to the end of the ordered set until the ordered set is complete.
    Another example: if you create binary search trees in the standard way, in the end you are doing the same as quicksort: it chooses a pivot (the root) and then you put all the smaller elements on one side and all the larger elements on the other side.
    I know a context where selection and insertion are different: if you have a linked list with random access to store the output, insertion would be O(NlogN) (you make a binary search to know where to insert the new element, and then insert it in O(1)) while selection would still be O(N^2).
    The idea of a linked list with random access may seem silly, but there are some situations where constraints are very close, for example, ordering a deck of cards in real life, another example may be to order something when the comparisons are very expensive (for example, if each comparison is a query to a web server) and the swaps are very cheap so you want to optimize the comparisons but not the swaps.
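
The heap observation above ("take the best of the unordered set, but with a heap") is essentially heapsort, and is easy to sketch (illustrative only, with my own function name):

```python
import heapq

def heap_selection_sort(a):
    # Selection sort's "pick the smallest remaining element" step,
    # done with a heap instead of a linear scan: this is heapsort's idea.
    h = list(a)
    heapq.heapify(h)
    return [heapq.heappop(h) for _ in range(len(h))]
```

Replacing the linear scan with a heap drops the cost per selection from O(n) to O(log n), turning the O(n^2) algorithm into an O(n log n) one while keeping the same basic structure.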

  • @Pedritox0953 4 years ago

    I love the Prof Hutton analogies !!

  • @stuartthegrant 8 years ago +65

    Now implement the sort with logic gates.

  • @progtom7585 4 years ago

    I'm going to ask my uni lecturer if I can just "draw some pictures and forget about your fancy computing" 🤔
    Excellent video, thanks for sharing!

  • @ErikOosterwal 2 years ago

    This comparison hints at one of the differences between data-flow programming vs. process-flow programming.

  • @simonmultiverse6349 3 years ago

    Your pictorial idea is also known as a data-flow diagram. It shows where the data goes (where the numbers go). It also shows which numbers depend on which other numbers. With this information, it still leaves you with _some_ freedom of choice about the order in which you do various operations. Making these choices will give you an algorithm.

    • @simonmultiverse6349 3 years ago

      AND WHAT IS MORE.... if hardware parallel processing is available, you have many more choices about which bits of hardware (doing the processing) are assigned to which bit of the problem, and when.

  • @haloz4 8 years ago +1

    The main, VERY IMPORTANT, difference between insertion and selection sort is that insertion sort has the advantage that the operation of insertion can be made faster than selection. Inserting into a sorted region can be sped up by something like a binary search, while the operation of selection has no such optimization.

    • @andrasbiro3007 8 years ago

      Yes, finding where to insert can be faster, but the insertion itself is slow, because you have to move half of the numbers on average.
      And it doesn't really matter, because there are much faster sorting algorithms (quicksort being the most famous).
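
The point in this thread, that finding the insertion point can take O(log n) comparisons while the insertion itself still shifts O(n) elements, can be sketched with Python's bisect module (illustrative, not the video's code):

```python
from bisect import bisect_right

def binary_insertion_sort(a):
    out = []
    for x in a:
        # O(log n) comparisons to locate the insertion point...
        pos = bisect_right(out, x)
        # ...but list.insert still shifts everything after pos: O(n) moves.
        out.insert(pos, x)
    return out
```

This cuts the comparisons to O(n log n) overall, but the total number of element moves remains O(n^2), which is the "insertion itself is slow" caveat above.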

  • @theswip3r 8 years ago

    Insertion sort has a best case performance of O(n) while selection sort is O(n^2) for all cases. Selection sort is often useful in real time applications (due to its consistency), so it's important to make the distinction. Saying they're the same because of rows and columns is like saying quick sort and merge sort are the same because 'one is divide-then-conquer and the other is conquer-then-divide'.

  • @nathantron 8 years ago +1

    It's also very easy to think of insertion sort like using marbles of different sizes, and pushing them in a line, and as the larger ones are higher value, and then fall into the next slot as long as it's bigger than the previous marble. So you're just moving it right until it falls into place. :) so it looks like a line:
    @Oo. pushing over a wedge \./
    Wish we could post inline images here.

  • @IvanFernandes94 8 years ago +2

    They are similar, and this triangle view shows their similarities, but they are not the same. An algorithm is fundamentally a sequence of steps; when you change that sequence of steps you have a different algorithm. The visual model is interesting, but the moment you actually implement it you have to decide how the sequence of actions will take place, and that decision will produce different algorithms.

  • @mathnerd3141 8 years ago

    In practice, though, insertion sort is much more efficient, because you can stop comparing as soon as you've found the place to insert the new element, whereas in selection sort you must scan through the array every time. The growth rate (O(n^2)) is the same, but insertion sort is the one that's actually used - it's actually much faster than merge or quick sort for small arrays. (Bubble sort is even worse, as it potentially does a swap for every comparison, whereas insertion and selection sorts only swap once once it knows where the element goes or which element is the smallest, respectively.)

  • @ImperialDrylander 6 years ago

    what I've learned: insertion and selection aren't the same, as a sorter myself, I know how they work
    Insertion: It'll try to make the final product as best as possible, and put new elements in their position
    Selection: It'll look at each item, and remember where it goes, then in the next pass, it puts it in the right place.

  • @gulllars4620 7 years ago

    So basically, when represented as a sorting network, the difference between Selection Sort and Insertion Sort is row-wise vs column-wise ordering of operations. In CPUs and memory hierarchies the order of operations can matter, especially when the data to be operated on doesn't fit in a single cache line and you need round-trips to memory. In modern CPUs the performance of lightweight operations is commonly limited by memory access, and sometimes by branch mispredictions. The result is that it may be faster to do 100K logical operations with data from L1 cache and no branch mispredictions than 10K logical operations with 1000 memory accesses and 1000 branch mispredictions. That's a bit hyperbolic and probably doesn't represent logic that leads to the same result, but the take-away is that an algorithm with more branch mispredictions and/or more memory accesses can be slower despite perhaps an order of magnitude fewer operations.

    • @simonmultiverse6349 3 years ago

      There is usually a cache for instructions and a cache for data, and the CPU designer has to make choices for both. The software writer may be able to ensure that data which need to be combined (e.g. to find larger & smaller of two numbers) are put close to each other, so a limited region of cache is required, thereby improving speed.

  • @foolo1 8 years ago +30

    This video disregards the implementation. But the implementation is important.
    For example, the algorithms have different best-case complexity.

  • @Apophlegmatis 3 years ago

    Alternative title: computer scientists stumble upon hardware description language ideas for sorting

  • @ZedaZ80 4 years ago

    Selection Sort: n(n-1)/2 comparisons, 2n-2 moves
    Insertion Sort: ~log(n!) comparisons (often represented as O(n log n)), and ~n(n-1)/4 moves.
    Did I get those right? For the IS comparisons, I am assuming the use of a binary search to determine where to insert incoming elements.
    A Natural Insertion Sort gets better performance by taking advantage of naturally occurring runs. You can then optimize it further by inserting runs (if you are trying to insert the run (A, B), then you know that where A is inserted is a strict lower limit of where B can go). You basically end up with a Natural Insertion Sort, but the "insertion" is actually a "merge."
    I personally like Insertion Sort more than Selection Sort, because it is easier to take advantage of that already sorted subset. (The mathematician in me: "something something the beauty of induction.")
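
The selection sort figures above can be checked directly: n(n-1)/2 comparisons always, and at most n-1 swaps, i.e. at most 2n-2 array writes if each swap is counted as two writes. An instrumented sketch (the counters and names are mine):

```python
def selection_sort_writes(a):
    a = list(a)
    comparisons = writes = 0
    for place in range(len(a) - 1):
        smallest = place
        for i in range(place + 1, len(a)):
            comparisons += 1          # every pass scans the whole tail
            if a[i] < a[smallest]:
                smallest = i
        if smallest != place:
            a[place], a[smallest] = a[smallest], a[place]
            writes += 2               # each swap writes two cells
    return a, comparisons, writes
```

For a reversed 8-element list this reports 28 comparisons (= 8·7/2) and no more than 14 writes.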

  • @jakoblindgren6604 8 years ago

    There is one small difference between insertion and selection sort when they are put into code. Insertion sort is one of the few that can recognize that an incoming list is already sorted and terminate early.

  • @pierredufour5365 8 years ago

    With insertion, when you see that the number you insert is smaller than the next one, you can stop there for that line, so you save half the time (on average).

    • @pierredufour5365 8 years ago

      And you are comparing the two worst sorting algorithms possible

  • @QuotePilgrim 8 years ago +9

    Insertion sort is faster than selection sort, therefore they can’t be the same.

    • @25NN25 8 years ago

      from the drawing i would've guessed that selection sort is faster...

    • @goncalobaltazar 8 years ago +3

      Benjamin steffen If you are implementing insertion sort on something like a list, you would stop comparing after finding the right place to put the next number. The elements after that are already sorted.

    • @notroop 8 years ago +1

      Aren't they the same speed?

    • @slartibartfast336 8 years ago +3

      Because the first sort he went over is actually Bubble Sort, not Selection.

    • @Pyriold 8 years ago +1

      Insertion sort can be faster because you can use binary search to find the insertion point, but that's just a detail.

  • @drulli6 8 years ago

    But even though they have the same end result they are still different because they behave differently. Insertion sort does a lot of moving the data around which can sometimes be expensive while selection sort does a lot of comparisons. So depending on whether comparing or moving is expensive determines which sorting algorithm is faster, so they aren't exactly the same but it is still a very nice visualization trick.

  • @DanielPage 8 years ago +23

    Selection Sort and Insertion Sort are not the same algorithm. I think the speaker should have been more precise, because algorithmically/mathematically they are not the same algorithm. I have a feeling I'll now need to correct some subset of students who watch this video and think they just don't need to know one or the other, and will confuse the two.
    On the other hand, I do think the visualization is helpful, and a neat way of explaining the two algorithms though that shows they are more like "duals" (very similar).

    • @ring_raitch 8 years ago

      Daniel Page - The way he has drawn the algorithm is as a directed graph, with nodes representing functions. By construction, what the picture/video shows is that the two algorithms are mathematically isomorphic. The same number of comparisons are being done (nodes visited), and the same information cascade is occurring (edges travelled). The two names are only talking about a difference in order of operations, which topologically makes no difference.

    • @DanielPage 8 years ago +11

      +Thurston Sexton Yes, and how is that relevant to my comment? It still does not imply the algorithms are the same. You interpret the graph in two different ways to see what the algorithms do; that's the subtle difference here. It still does not mean the algorithms are the same algorithm. I mean, I can show you two isomorphic graphs which have representations that hold very different meanings. That doesn't imply the graphs are identical in the same sense; it ONLY means the graphs are isomorphic. That doesn't mean the two algorithms are the same. They're literally not. If they were, you'd see them do the same operations at every step.
      As an educator and computer scientist, I see the handiness in the diagrams, but they do not prove they are the same algorithmically (which is the sense that was promised at the start, the speaker should have been more precise). It will only confuse people as that claim is not true.

  • @LudwigvanBeethoven2 6 years ago +1

    That was a nice secret.
    For those who blame him for differences in two algorithms, of course "algorithms" are different.
    He clearly said the difference is by looking at rows or columns.
    By interpreting the picture differently (row/column wise) you get the two algorithms.

  • @Xclann 8 years ago +35

    Although this is an interesting observation, saying the two sorting algorithms are the same just because of this picture is stretching it a bit far.
    Otherwise, I can say that quick sort with a pivot being the first element is the same as insertion sort, which is the same as selection sort... but such a statement is extremely misleading.

    • @Kram1032 8 years ago +16

      that depends entirely on what you mean by "the same" which isn't at all a trivial thing.
      For instance, if all you care about is that your algorithm sorts numbers correctly, then _any_ correct sorting algorithm is "the same".
      But of course, if you care about efficiency, implementation details will matter.

    • @MrNacknime
      @MrNacknime 8 років тому +2

      They are literally exactly the same if you execute them in parallel

    • @eduardogomes4865
      @eduardogomes4865 8 років тому

XClann Of course not. They run in the same time and memory. They even make the SAME COMPARISONS.
Not misleading at all; that's actually mathematically accurate.

    • @Xclann
      @Xclann 8 років тому +2

      +Eduardo Gomes Nope. Quicksort runs in n log n in best cases. The best case for insertion is n and selection is n^2.

    • @Kram1032
      @Kram1032 8 років тому +2

      XClann I'm pretty sure Eduardo Gomes was talking explicitly about Insertion- and Selection-sort in that comment.

  • @wariolandgoldpiramid
    @wariolandgoldpiramid 8 років тому

    the title made me think of Harry Potter / Pottermore sorting.
    you guys uploaded this one day after the release of Fantastic Beasts, so of course Harry Potter is on everyone's minds

  • @ring_raitch
    @ring_raitch 8 років тому +1

    there's quite a bit of disagreement here in the comments, but the fact is that the two algorithms, drawn as they are here (a directed graph), are isomorphic. This is a strong version of "sameness", tantamount to saying that any proofs about the behavior of one will apply to the other, purely via topology.

  • @MrNacknime
    @MrNacknime 8 років тому +63

    Actually if you parallelise them, Insertion, Selection, and EVEN Bubblesort are all the same!

    • @ShobhitVashistha
      @ShobhitVashistha 8 років тому +5

I was thinking the same thing, and if you use sorting networks, all these algorithms would be O(n); of course, the bigger the input, the bigger the network needed

    • @MrNacknime
      @MrNacknime 8 років тому +1

      Shobhit Vashistha
      I'd love them to make a video on bitonic merge sort

    • @Unique12darkobit
      @Unique12darkobit 8 років тому

      +Shobhit Vashistha what is sorting network ?

    • @MrNacknime
      @MrNacknime 8 років тому +1

      Gladius Sapientia
Basically just a big number of comparators (the 2-input, 2-output boxes from the video) that work in parallel. If you have n/2 comparators, sorting n numbers works in just O(n) time. Bitonic merge sort is an algorithm that needs O(n*log^2(n)) comparisons in total, and works in O(log^2(n)) time.
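A serial simulation can make the parallel-comparator idea concrete. The sketch below is my own illustration (not from the video): it uses the odd-even transposition network, which runs n rounds of up to n/2 independent compare-and-swap boxes, so on real parallel hardware each round is one step and the whole sort takes O(n) time.

```python
def comparator(a, b):
    """A 2-input, 2-output compare-and-swap box: the smaller value
    exits one side, the larger the other."""
    return (a, b) if a <= b else (b, a)

def odd_even_transposition_sort(xs):
    """Simulate the parallel network serially: n rounds, each round
    applying up to n/2 independent neighbouring comparators."""
    xs = list(xs)
    n = len(xs)
    for r in range(n):
        start = r % 2  # alternate between odd and even neighbour pairs
        for i in range(start, n - 1, 2):
            xs[i], xs[i + 1] = comparator(xs[i], xs[i + 1])
    return xs

print(odd_even_transposition_sort([5, 2, 3, 1, 4]))  # [1, 2, 3, 4, 5]
```

Run serially this is still O(n^2) comparisons; the O(n) claim only holds when each round's comparators fire simultaneously.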

    • @rmsgrey
      @rmsgrey 8 років тому +1

      Bubblesort has different dependencies than Insertion or Selection. For this particular sequence, Insertion/Selection sort never compares 1 and 5, but Bubblesort does.

  • @Fiyaaaahh
    @Fiyaaaahh 8 років тому

Surely this pictorial view holds for every O(n^2) sorting algorithm. It is basically an n^2 grid with binary comparison boxes on its vertices, and every different O(n^2) sorting algorithm has a different order of executing those vertices.

  • @amrsaber5457
    @amrsaber5457 8 років тому +4

it would have been really great if you first defined each one of them, then showed that they are the same!
also it would be awesome to explain some of the n log(n) sorts like merge or quick sort

    • @liesdamnlies3372
      @liesdamnlies3372 8 років тому

      They've done videos on those before.

    • @amrsaber5457
      @amrsaber5457 8 років тому

      lies damnlies
      where is that, what is it called ?

  • @vinzent1992
    @vinzent1992 3 роки тому

    This is absolutely brilliant.

  • @MagmaMusen
    @MagmaMusen 8 років тому +68

    What is this accent?

    • @Computerphile
      @Computerphile  8 років тому +26

+MagmaMusen Scottish I believe >Sean

    • @25NN25
      @25NN25 8 років тому +4

+MagmaMusen you here? really surprises me :)

    • @MagmaMusen
      @MagmaMusen 8 років тому +6

      Thanks, love it. Interesting video as always. :)

    • @jonlanghoff
      @jonlanghoff 8 років тому +2

      My thoughts were "Scotsman gone to the US"..?

    • @Computerphile
      @Computerphile  8 років тому +20

Cheers :) (what's always interesting is replying to comments through the YouTube Studio app as it doesn't actually show you what video the comment is about, though I had a fair idea having just put that one live... Now confirmed on desktop machine!) >Sean

  • @ThePyrosirys
    @ThePyrosirys 8 років тому +2

Just because this drawing is both types at the same time, it doesn't mean that all insertion and selection sorts are the same. Of course this drawing is going to look like both, since the numbers have to get to a specific end point, but on the sheet of paper you don't see the process used to get to that end point.

  • @JahMusicTube
    @JahMusicTube 8 років тому

Beautiful explanation! Great video! :)

  • @DustinRodriguez1_0
    @DustinRodriguez1_0 8 років тому

    While they might look the same, they are quite different when actually run. Inserting an element into the middle of an array is far more expensive than appending to the end.

  • @wkoell
    @wkoell 8 років тому

Would like to thumb up this video a million times :) Really, what a charming pearl!

  • @gavintillman1884
    @gavintillman1884 3 роки тому

    That's elegant! Not made that connection before

  • @PanduPoluan
    @PanduPoluan 4 роки тому

    Wow, that's fascinating!

  • @amazingvipul8392
    @amazingvipul8392 4 роки тому

Interesting take on things. Is it inspired by functional thinking?

  • @tabaks
    @tabaks 8 років тому

Not the same; his pictorial view only proves that sorting is - sorting. The two differ insofar as the >procedure< for each is different, hence - they're different.

  • @gtsteel
    @gtsteel 7 років тому

    Although quicksort and merge sort are very different, with different asymptotic performance, quicksort is equivalent to unbalanced tree sort (in Haskell they are identical due to lazy evaluation).

  • @xitement7504
    @xitement7504 4 роки тому

    great way of portraying it.

  • @TokyoTransit
    @TokyoTransit 5 років тому

    Fantastic! Thank you so much for this video

  • @AndrewRock
    @AndrewRock 4 роки тому

Quicksort and mergesort do basically the same thing: split sequences in two and put them together again. The only difference is in which of the two steps you do the comparisons.

  • @cuttheskit7905
    @cuttheskit7905 8 років тому +1

    Are you guys working on a video on the Investigatory Powers Act and its implications for the privacy of UK citizens? If not then I think you should be looking into it. Maybe you could even get it ready for when it gets Royal assent.

  • @pyrokinetikrlz
    @pyrokinetikrlz 8 років тому

    MIND = BLOWN!

  • @PointB1ank
    @PointB1ank 8 років тому

    Great video as always!

  • @slartibartfast336
    @slartibartfast336 8 років тому +1

    I'm dubious that the first method described is actually Selection Sort. Typically, you'd select the smallest element, and then swap it with the first element in the list. In fact, the process as he's demonstrating is really Bubble Sort, not Selection -- he even hints at this with the "ripples to the bottom" bit.

    • @pihungliu35
      @pihungliu35 8 років тому

Agreed. The elements start as 5,2,3,1,4, and after one column it becomes 5,3,2,4. This is evidence that the sorting algorithm is actually Bubble, not Selection.
The distinction between Bubble and Selection is very subtle if you read the program:
      Bubble: for(i) for(j) compareAndSwap(j, j+1)
      Selection: for(i) for(j) compareAndSwap(i, j)
      The sorting network presented is doing neighboring comparison, so it is Bubble Sort.
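The pseudocode in the comment above can be turned into a runnable sketch. This is my own interpretation: `compare_and_swap` is a hypothetical helper standing in for the comment's `compareAndSwap`, and the loop bounds are my guesses at the intended ranges.

```python
def compare_and_swap(xs, i, j):
    # Order the pair so that xs[i] <= xs[j] (callers pass i < j).
    if xs[i] > xs[j]:
        xs[i], xs[j] = xs[j], xs[i]

def bubble_sort(xs):
    # for(i) for(j) compareAndSwap(j, j+1): neighbouring comparisons only.
    xs = list(xs)
    n = len(xs)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            compare_and_swap(xs, j, j + 1)
    return xs

def selection_sort(xs):
    # for(i) for(j) compareAndSwap(i, j): position i is compared
    # against every later element, which may be arbitrarily far away.
    xs = list(xs)
    n = len(xs)
    for i in range(n - 1):
        for j in range(i + 1, n):
            compare_and_swap(xs, i, j)
    return xs
```

Both sort correctly; the difference the comment points at is visible in the indices handed to `compare_and_swap`.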

    • @pihungliu35
      @pihungliu35 8 років тому

      After some serious thought, I'd like to take back the previous comment.
      My opinion now is that this order is BOTH a Selection Sort and a Bubble Sort, just the array content is read differently.
      Let's take the situation around 4:37 just before 2 and 4 compared.
      The Bubble Sort content at that time is simple: read the numbers on the wire bottom up. In this case, it's [1,4,2,3,5], and we are about to compare 2 and 4. In this viewpoint, every comparison is a neighboring comparison, so it describes Bubble Sort. Therefore, the example is a Bubble Sort working on the initial array of [4,1,3,2,5], and working right to left.
The Selection Sort content is a little bit non-trivial to construct: first we list all numbers already sorted, in this case [1]; then we write down the number currently on the vertical wire, in this case [2]; then we write down all numbers on the horizontal wires, but **top-down**, in this case [5,3,4]. The content is thus [1,2,5,3,4], and still we are about to compare 2 and 4. In this viewpoint, the "vertical wire" carries the number currently being compared, while the horizontal numbers, read top-down, are the rest of the array. Going down the wire, that number is compared with each number on a horizontal wire and swapped if necessary. This is exactly how Selection Sort works. Therefore, the example is a Selection Sort working on the initial array of [5,2,3,1,4], and working left to right.
      I know this is some strange way to assemble the numbers on the wire to make the array, but in the spirit of this video (which is talking about sorting network representation of sorting algorithms), this should be the proper way.

  • @Pedritox0953
    @Pedritox0953 4 роки тому

Beautiful explanation!!

  • @brockobama257
    @brockobama257 3 роки тому

    Bloody brilliant mate

  • @kevnar
    @kevnar 5 років тому

    If you have to sort a series of numbers, and none are missing or repeated, the fastest sort of all is just:
    num[i] = i;

  • @kenhaley4
    @kenhaley4 8 років тому +1

I have to disagree that the two methods are the same. If you count the number of comparisons required, we see that this diagram of little sort boxes always requires n(n - 1)/2 comparisons, which is larger than what insertion sort typically needs. That's because, for example in the row analysis, the method keeps on comparing after the leftmost element has been inserted in the proper position, as it cascades the remaining elements forward by comparing.
There is a very interesting correspondence between this little sorting machine and the two sorting algorithms, but that doesn't make them "the same".

    • @simonmultiverse6349
      @simonmultiverse6349 3 роки тому

      There is ONE data-flow diagram in the example given, but the data-flow diagram gives no information about which piece of data is moved _when_ . The programmer can make choices about which piece of data is moved first, then which is moved second, etc. Different choices result in different algorithms.
      You could write a data-flow diagram for merge sort, and it would be a completely different diagram, allowing you to make different choices, resulting in a totally different set of possible algorithms.

  • @LudwigvanBeethoven2
    @LudwigvanBeethoven2 6 років тому

You just showed a sorting circuit. One can implement sorting at the hardware level and it can sort n items in O(1) time. Well, the circuit grows bigger and bigger for larger n, but, for example, sorting up to 64 items in constant time would be amazing. Remember the circuit sorts the whole thing in one clock cycle, so it will be practically O(1)

    • @simonmultiverse6349
      @simonmultiverse6349 3 роки тому

      I'm pretty sure that in one clock cycle, a number can propagate through only ONE processing unit. (This occurs because these processing units are designed to do their worst-case job in one clock cycle.) The numbers have to propagate through four processing units in the horizontal direction and in the vertical direction. That means you will need at least seven clock cycles. Alternatively, you can find the longest path through the network, which goes through seven boxes.

  • @Ontonator
    @Ontonator 8 років тому +9

    Correct me if I'm wrong, but is that not also bubble sort?

    • @kamatchinmay
      @kamatchinmay 8 років тому +1

Ontonator nope, bubble sort starts with the whole array and keeps switching the positions of elements that are next to each other. It requires n^2 comparison operations, but selection and insertion sort both require only n^2/2 comparisons.

    • @javiergimenez40
      @javiergimenez40 8 років тому +8

      bubble sort is also n²/2 if you stop comparing the "highest" numbers

    • @deepjoshi356
      @deepjoshi356 8 років тому +1

Ontonator bubble sort would work diagonally in the same diagram, maybe

    • @outputresistance
      @outputresistance 8 років тому

      Bubble sort is effectively the same as selection sort when you start the latter at the highest values. Swapping neighboring elements is just one way to get the highest one to the top. As pointed out, you can skip those that are already sorted.

    • @Ceelvain
      @Ceelvain 8 років тому +1

      You can define insertion and selection sort with "bubbles".
      You define a sorted part of the array (initially empty) and the rest is unsorted.
      For the insertion sort, you take the first element of the unsorted part and bubble it up to its position into the sorted part. And repeat until the unsorted part is empty.
      For the selection sort, you run a single bubble through the unsorted part so that the smallest reaches the sorted part.
      And for the bubble sort, you just run your bubble through the whole array/list every time.
      So you can see the insertion and selection sort as just optimizations that start or stop the bubble away from the boundaries of the array/list.
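The "bubble" framing above can be sketched in code. This is my own illustration under that comment's assumptions: one pass of neighbouring swaps is the shared primitive, and the two sorts just limit where the bubble starts or stops.

```python
def bubble_min_to_front(xs, lo, hi):
    """One 'bubble' running right-to-left through xs[lo..hi]:
    neighbouring swaps carry the smallest element of that range
    down to index lo."""
    for j in range(hi, lo, -1):
        if xs[j] < xs[j - 1]:
            xs[j], xs[j - 1] = xs[j - 1], xs[j]

def selection_sort(xs):
    # Start the bubble at the unsorted part only: each pass moves the
    # smallest remaining element across the sorted/unsorted boundary.
    xs = list(xs)
    for i in range(len(xs) - 1):
        bubble_min_to_front(xs, i, len(xs) - 1)
    return xs

def insertion_sort(xs):
    # Stop the bubble early: the first unsorted element sinks through
    # the sorted prefix and halts as soon as it stops moving.
    xs = list(xs)
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j] < xs[j - 1]:
            xs[j], xs[j - 1] = xs[j - 1], xs[j]
            j -= 1
    return xs
```

Plain bubble sort would be the version that never starts or stops the bubble away from the array boundaries.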

  • @goncalobaltazar
    @goncalobaltazar 8 років тому +2

    In the diagram both algorithms have the same number of comparisons. But in code you wouldn't write insertion sort like that, you stop comparing after inserting in the right place.

    • @goncalobaltazar
      @goncalobaltazar 8 років тому

      This version of insertion sort is as fast as the selection sort.

  • @arka181
    @arka181 8 років тому

Aren't they only the same based upon the graph the sorting algorithm generates? This doesn't factor in the time-complexity implications that distinguish Selection and Insertion sort. That is where the algorithms differ, and it is fundamentally more important than a simple appearance of being identical.

  • @lll-vb1sr
    @lll-vb1sr 5 років тому

    The sorting algorithm with the boxes performs bubble sort not selection or insertion sort

  • @soylentgreenb
    @soylentgreenb 8 років тому +9

The order in which you do things matters in computing; they are not the same.
Another common mistake is to think that the lowest computational complexity (e.g. O(n log n) instead of O(n^2)) is always fastest. For small n, insertion sort beats the everliving snot out of any O(n log n) sorting algorithm. Many real-world sorting algorithms fall back on an in-place insertion sort for small subsets and small sorting problems.

    • @outcastorange
      @outcastorange 8 років тому +1

Each comparator can only fire when both inputs have been provided. If you map out exactly which computations occur for both sequences, you'll find that in both cases each comparator fires exactly once. The order is quite arbitrary and does not affect the computation time.
There is one possible exception, however, if you take a close look at the insertion algorithm. If there were a way to "check" when the leftmost element results in the smaller number, you could skip further comparisons for that chain. Either this is the way things are already done, or the required overhead to track the leftmost element exceeds the computational savings of resolving the entire sequence. It seems to depend on the length of the sequence, and the size of the leftmost element relative to the full set.

  • @MasterHigure
    @MasterHigure 8 років тому +1

    How would the presentation of bubble sort (bubbling up the smallest number) differ from this presentation of selection sort? Because I really can't tell why you didn't call the first one bubble sort.

  • @OLApplin
    @OLApplin 8 років тому

    Fascinating ! Now I'll try to write that in python ...

  • @dscheme4427
    @dscheme4427 5 років тому

    "if you forget all your fancy computing" ha. It's interesting how you've done this. I sometimes have to draw diagrams to get my head around code, I'm more visual. if I can visualise it, I tend to understand it easier.

    • @Artaxerxes.
      @Artaxerxes. 4 роки тому

      It's the same for everyone

  • @Flankymanga
    @Flankymanga 8 років тому +1

    pretty fascinating!

  • @N69STHENS
    @N69STHENS 8 років тому

The results are the same (even step by step), but the steps differ between insertion sort and selection sort.
Insertion sort doesn't compare again two numbers that are already sorted.
For example, in the third step, once the "1" takes the first place, the rest of the numbers are not sorted again; they are simply shifted to the right by definition, which is way faster.

  • @kpunkt.klaviermusik
    @kpunkt.klaviermusik 4 роки тому

    Remember the times of ZX81 when sorting 5 numbers really was time critical :-)

  • @SimonClarkstone
    @SimonClarkstone 8 років тому

    I was hoping for a mention of sorting networks, or the reasons that insertion sort is O(n) on a sorted list but selection sort must always be O(n^2).

  • @tuftman6092
    @tuftman6092 6 років тому

    but they have different properties, e.g. insertion sort has a best case of O(n) comparisons (and runs more quickly on mostly sorted lists) whereas selection sort has a best case of O(n^2) comparisons
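A quick way to see that asymmetry is to count comparisons on an already-sorted input. This is my own sketch with explicit counters: insertion sort with the usual early exit does n - 1 comparisons, while selection sort always does n(n - 1)/2.

```python
def insertion_sort_comparisons(xs):
    """Insertion sort with early exit; returns the comparison count."""
    xs = list(xs)
    count = 0
    for i in range(1, len(xs)):
        j = i
        while j > 0:
            count += 1
            if xs[j] < xs[j - 1]:
                xs[j], xs[j - 1] = xs[j - 1], xs[j]
                j -= 1
            else:
                break  # already in place: stop scanning (the O(n) best case)
    return count

def selection_sort_comparisons(xs):
    """Selection sort; always scans the whole unsorted suffix."""
    xs = list(xs)
    count = 0
    for i in range(len(xs) - 1):
        m = i
        for j in range(i + 1, len(xs)):
            count += 1
            if xs[j] < xs[m]:
                m = j
        xs[i], xs[m] = xs[m], xs[i]
    return count

sorted_input = list(range(10))
print(insertion_sort_comparisons(sorted_input))  # 9  (= n - 1)
print(selection_sort_comparisons(sorted_input))  # 45 (= n(n - 1)/2)
```

The early `break` is exactly the optimisation the sorting-network picture cannot express, since the network's comparators are fixed in advance.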

  • @stevefrandsen7897
    @stevefrandsen7897 Рік тому

    Well presented. Thanks.

  • @squidwardfromua
    @squidwardfromua 2 роки тому

    I like how these algorithms are very similar, but insertion is ~2 times better

  • @marcushandley3017
    @marcushandley3017 8 років тому +5

    Would I be correct in saying that selection sort is O(n^2), slower than the optimal O(n log n)?

    • @VivekGawande1
      @VivekGawande1 8 років тому +3

      Marcus Handley Yes

    • @postvideo97
      @postvideo97 8 років тому

      I've always wondered where does the (n log n) come from...

    • @Conenion
      @Conenion 8 років тому +1

On average, selection sort and insertion sort are both O(n^2). Note, however, that insertion sort usually performs far fewer comparisons than selection sort, while selection sort always does O(n^2) comparisons.

    • @JerehmiaBoaz
      @JerehmiaBoaz 8 років тому +2

      +postvideo97 Doing a binary search for one number (in the sorted part) is O(log n), doing it for every number in the sequence you multiply by the length of the sequence so O(n log n).

    • @postvideo97
      @postvideo97 8 років тому +1

      Ok... I understand now, it is because we are using binary search that it is (log n)... I always thought (stupidly) that those algorithms just used iterative search... As that would yield O(n^2/2)

  • @skate2late
    @skate2late 8 років тому

So does this mean that both algorithms require the same computational resources? Is one quicker or more efficient than the other, or are they the same?

  • @GundamReviver
    @GundamReviver 8 років тому

This is really interesting!... but I would not call them the same, in the sense of timing:
on the one hand you get all the numbers at once after waiting a bit longer, and on the other your numbers "trickle in"
(in your example, that is; I'm not a computer programmer at all)

  • @danhorus
    @danhorus 8 років тому

    Gotta say, it was quite interesting indeed

  • @gabrielrangel956
    @gabrielrangel956 8 років тому

    Is there something formal written about this?
    I want to show it one of my professors and a formal mathematical proof of this would be quite nice.

    • @ring_raitch
      @ring_raitch 8 років тому +2

      Gabriel Rangel Topologically, the proof is in the picture. Drawn as a sorting network (a type of directed graph with functions for nodes), the two algorithms are isomorphic by construction. And isomorphic graphs are the same.
      The issue is that in coding the way one "travels" the graph, there are some tricks to optimise the row-way that can't necessarily apply to the column way, making the implementation of insertion sort sometimes faster. But mathematically/topologically, they are the same by construction.

    • @gabrielrangel956
      @gabrielrangel956 8 років тому

      Thank you

  • @kimsteinhaug
    @kimsteinhaug 8 років тому

So what I made on the C64 was insertion sort - nice. But he is right, it doesn't look that way when programming; this presentation shows both concepts are basically the same.

    • @kimsteinhaug
      @kimsteinhaug 8 років тому

He did not take CPU, memory, or the type of data being sorted into the mix; in the end, that is the alpha and omega when you are in the real world.

  • @LeSaboteur3981
    @LeSaboteur3981 3 роки тому

    wow! amazing! thanks a lot👍👍

  • @MatheusAugustoGames
    @MatheusAugustoGames 5 років тому

How come insertion sort is usually preferred over selection sort, if their sorting networks are the same? Does it have to do with how the processes are turned into algorithms?

  • @talideon
    @talideon 8 років тому +40

    They're not the same: one is the dual of the other.

    • @soylentgreenb
      @soylentgreenb 8 років тому +6

And computers really care about the order (cache coherency, load-store forwarding, and other peculiarities in the way memory is accessed)

    • @shamus030
      @shamus030 8 років тому +9

      The first one wasn't even selection sort; it was more of a bubble sort. And the second one is hardly insertion sort, since he continues to compare numbers even after the insertion.

    • @ring_raitch
      @ring_raitch 8 років тому +1

      A dual is a graph such that planar faces are replaced by a node, and edges attach nodes of the dual when the original faces are adjacent.
      As drawn in the video, the two algorithms are in fact isomorphic, the only measure of "sameness" that matters topologically. The dual of the directed graph shown would look very different.

    • @SKyrim190
      @SKyrim190 8 років тому +5

      He went through the whole array and picked the smallest. It sure looks like selection sort to me.
      Bubble sort is more of "tagging along" with one element until you put it in the end of the array.

    • @talideon
      @talideon 8 років тому +6

      Thurston Sexton, the word 'dual' in mathematics has multiple meanings. The concepts of isomorphisms and duality are related, however. Go read the paper I linked to as it explains what I'm referring to very well.

  • @Johan-st4rv
    @Johan-st4rv 8 років тому +3

    I made my own Merge Sort
    One of my greatest accomplishments

  • @ifell3
    @ifell3 7 років тому

I haven't seen that type of paper in years and years; it brought back memories of a Spectrum. Googled it: continuous-feed printer paper. But why was it coloured like that?? To make line reading easier??

  • @kroppyer
    @kroppyer 8 років тому

    It seems in insertion sort, some boxes/comparisons can be skipped. And indeed, when you sort a linkedlist rather than an array, you don't need to shift all other elements when inserting. This is something I don't think is possible with selection sort.

    • @andrasbiro3007
      @andrasbiro3007 8 років тому

In selection sort you don't have to insert elements; you can just swap two. Also, linked lists are not cache-friendly, and therefore slower than arrays, unless you have to do a lot of insertions.

    • @kroppyer
      @kroppyer 8 років тому

      In selection sort for every iteration you need to scan all unsorted elements to find the max (or min), and then do one swap. The number of unsorted elements decreases with every iteration, hence a "triangle" of comparisons.
      In insertion sort (the variation of the video), for every iteration you need to scan all sorted elements up to a point, and shift all other sorted elements. The number of sorted elements increases with every iteration, again resulting in a "triangle" of comparisons.
      So in this case, they're doing the same thing, but in a different order. But what I'm saying is that in insertion sort you don't need to shift if you use a linked list (I agree linked lists are not cache friendly). I don't see how you can do such a thing (skip blocks) in selection sort.
      This is why I think insertion sort is really different and superior to selection sort: in selection sort the elements that are scanned are in random order, the only information you have about them is a bound on their minimum (or maximum). In insertion sort the elements that are scanned/shifted are already sorted, meaning you can even use binary search (when using an array) to find the place to insert (still an O(n²) algorithm because of the shifting).
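That binary-search variant can be sketched with Python's standard `bisect` module. This is a minimal illustration of my own (it builds a new list rather than shifting within one array, but the point about comparisons is the same):

```python
import bisect

def binary_insertion_sort(xs):
    # Each insertion point is found by binary search, so only
    # O(log n) comparisons are needed per element.  The insertion
    # itself still shifts later elements, which is why the overall
    # running time remains O(n^2).
    out = []
    for x in xs:
        bisect.insort_right(out, x)  # binary search + O(n) shift
    return out

print(binary_insertion_sort([5, 2, 3, 1, 4]))  # [1, 2, 3, 4, 5]
```

No analogous trick applies to selection sort, since its unsorted region carries no ordering to search within.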

    • @andrasbiro3007
      @andrasbiro3007 8 років тому

      kroppyer
You are right, but what you gain on the search, you lose on the insert. And linked lists are not suitable for binary search, at least in their simple form. And quicksort is faster than both anyway, so it doesn't really matter.

    • @kroppyer
      @kroppyer 8 років тому

Of course quicksort and mergesort (which can be very IO-efficient, btw) are way better, but insertion sort can work very well on partially sorted lists, as long as you make _some_ improvement to the version used in the video. I see no use for selection sort.

    • @andrasbiro3007
      @andrasbiro3007 8 років тому

      kroppyer
      Selection sort is very easy to implement, not too inefficient, and has very little overhead. So it's probably the best for small lists.

  • @Tuchulu
    @Tuchulu 8 років тому +60

    I guess all sorting algorithms are the same at some level of abstraction.

    • @haruto9918
      @haruto9918 8 років тому +13

      what about Bogosort :D

    • @eksortso
      @eksortso 8 років тому +6

      Tuchulu They'd said at the end that they tried to write a paper to show that, but it didn't work out so easily. Indeed, think about how a quick sort works. You're comparing one number to everything else in a sequence. There isn't a simple way to use two-input two-output boxes to make that work. The only similarity is that you compare two different things and repeat that over and over again.

    • @ShobhitVashistha
      @ShobhitVashistha 8 років тому

      how would one go about implementing bogosort with abstract sorting machines? I wonder...

    • @DamianReloaded
      @DamianReloaded 8 років тому +9

      INPUT => ALGORITHM => SORTED = EVERY SORTING ALGORITHM

    • @phpn99
      @phpn99 8 років тому +2

      They're all, er... sorting ?

  • @schifoso
    @schifoso 7 років тому +1

    If this is how you teach, your students have it really easy... :-)