Excellent video, really well explained, with clear examples. Thank you.
I'm struggling to understand why m is limited to n, since it does not seem to make a difference in functionality or (as far as I understood) running time.
(Same number of partitions for m = n as for m > n, and the algorithm terminates after finding at most n hull points anyway.)
Could you provide an example of how m > n could bring drawbacks?
You are right: strictly speaking, limiting m to n is not necessary. For m > n the algorithm would do exactly the same as for m = n. But the algorithm is formulated as it is because there is also no point in using an m > n. I'd say the exact formulation boils down to a matter of taste. The original formulation by Chan bounds m by n; on the slides we follow this formulation.
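To make the point concrete, here is a tiny sketch (my own illustration, not from the slides) of the partitioning step with parameter m: for any m >= n the points end up in a single group, exactly as for m = n, so nothing changes:

def partition(points, m):
    # split the points into ceil(n/m) groups of size at most m
    return [points[i:i + m] for i in range(0, len(points), m)]

pts = list(range(5))          # stand-ins for n = 5 points
print(partition(pts, 2))      # [[0, 1], [2, 3], [4]]
print(partition(pts, 5))      # [[0, 1, 2, 3, 4]]  (m = n)
print(partition(pts, 100))    # [[0, 1, 2, 3, 4]]  (m > n: identical)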
@ Thank you for the confirmation :)
While I was looking at convex hull computation times using the monotone chain algorithm on lexicographically sorted integer points on an Intel Xeon 2.40 GHz × 32 machine, I observed that the times taken for 1000, 10000 and 100000 points are all very close. Is this OK?
This looks a bit surprising. For very small point sets it might be difficult to get reasonable measurements, and possibly some setup costs obscure the running times. But the difference between 10000 and 100000 should be noticeable. The running time will consist of two components: sorting + the main algorithm. Asymptotically, sorting dominates. But if you use the sorting algorithm provided with the programming language, it will be very fast. So you should be able to observe a close-to-linear running time. Maybe, if you try 100000, 200000, 300000, the linear increase in running time will be more clearly visible.
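If it helps, here is a minimal timing sketch (my own, not from the video) of Andrew's monotone chain on pre-sorted points; with the sort kept out of the measured section, the scan itself should grow roughly linearly in n:

import random
import time

def cross(o, a, b):
    # > 0 for a left turn, < 0 for a right turn, 0 if collinear
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def monotone_chain(sorted_points):
    # expects points already sorted lexicographically by (x, y)
    def half_hull(pts):
        hull = []
        for p in pts:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull
    lower = half_hull(sorted_points)
    upper = half_hull(reversed(sorted_points))
    return lower[:-1] + upper[:-1]

for n in (100_000, 200_000, 300_000):
    pts = sorted((random.randint(0, 10**6), random.randint(0, 10**6)) for _ in range(n))
    start = time.perf_counter()
    hull = monotone_chain(pts)
    print(n, len(hull), f"{time.perf_counter() - start:.3f} s")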
@@algorithmslab The input set itself was sorted, so I used the monotone chain algorithm with the sort procedure commented out.
@@jomsantony2550 The remaining part of the algorithm should simply be linear in the input size (and for a given input size, the deviation should be at most a factor of 2, depending on how many points are on the hull), so this is a bit surprising. As I said, I would check larger input sizes, since setup costs, small deviations etc. will matter less there, so it should be easier to see the linear-time behaviour.
@@algorithmslab Oh yes, you are right, there was an initial filtering step which I forgot to take into account. The actual number of input points to the algorithm depends on the filtering.
Given the CCW ordering of a simple polygon with N points, is it possible to compute its convex hull in O(N)?
Yes, this is possible, but it is a bit tricky. See here: en.wikipedia.org/wiki/Convex_hull_of_a_simple_polygon
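For reference, here is a rough sketch of one of the linear-time algorithms described there, Melkman's deque-based algorithm. This is my own illustration (vertices are assumed to be given in order along the boundary and in general position), not code from the course:

from collections import deque

def cross(o, a, b):
    # > 0 for a left turn, < 0 for a right turn, 0 if collinear
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def melkman(polygon):
    # polygon: vertices of a simple polygon in order along the boundary;
    # assumes general position (no three consecutive collinear vertices)
    if len(polygon) < 3:
        return list(polygon)
    p0, p1, p2 = polygon[0], polygon[1], polygon[2]
    d = deque([p2, p0, p1, p2] if cross(p0, p1, p2) > 0 else [p2, p1, p0, p2])
    for p in polygon[3:]:
        # skip vertices inside the hull built so far
        if cross(d[0], d[1], p) > 0 and cross(d[-2], d[-1], p) > 0:
            continue
        # restore convexity at the front of the deque
        while cross(d[0], d[1], p) <= 0:
            d.popleft()
        d.appendleft(p)
        # restore convexity at the back of the deque
        while cross(d[-2], d[-1], p) <= 0:
            d.pop()
        d.append(p)
    d.pop()              # front and back hold the same vertex
    return list(d)       # hull vertices in CCW order

print(melkman([(0, 0), (4, 0), (4, 4), (2, 3), (0, 4)]))   # the reflex vertex (2, 3) is gone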
@@algorithmslab Thank you
@@algorithmslab If the polygon is also given to be x-monotone, can we devise a simple algorithm to find the CH, like the following: proceeding in CCW order starting from, say, the bottom-most point, continue on left turns, and whenever a right turn is identified the previous vertex is concave, so delete it and continue. Hopefully these deletions won't get chained, since the polygon is monotone. Is this the right conclusion? And this is also O(N), right?
@@jomsantony2550 If the polygon is x-monotone, you can simply use the Graham scan as presented in the video. It takes O(n) time after sorting the points in x-order. You can save a little bit: between the left- and right-most point you have two chains, an upper and a lower chain (and both are x-monotone). You only need to consider the upper chain when computing the upper hull, and the lower chain when computing the lower hull. This is, I think, as simple as it gets. I am not sure what you mean by "deletions won't get chained"; the trick that makes Graham scan O(n) is that even if you have several deletions after each other, the total number of deletions is less than n.
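To illustrate the chain idea with a small sketch (a hypothetical helper of my own, not from the video): for a CCW-ordered x-monotone polygon, walking CCW from the leftmost to the rightmost vertex gives the lower chain, and the remaining part gives the upper chain; both are already sorted by x and can be fed to the single Graham scan pass:

def split_into_chains(ccw_vertices):
    n = len(ccw_vertices)
    lo = min(range(n), key=lambda i: ccw_vertices[i])   # leftmost vertex (lexicographic)
    hi = max(range(n), key=lambda i: ccw_vertices[i])   # rightmost vertex
    lower = [ccw_vertices[(lo + k) % n] for k in range((hi - lo) % n + 1)]
    upper = [ccw_vertices[(hi + k) % n] for k in range((lo - hi) % n + 1)]
    return lower, upper[::-1]   # both chains returned left to right

lower, upper = split_into_chains([(0, 0), (3, 1), (6, 0), (6, 4), (4, 2), (0, 4)])
print(lower)   # [(0, 0), (3, 1), (6, 0), (6, 4)]
print(upper)   # [(0, 0), (0, 4), (4, 2), (6, 4)]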
@@algorithmslab Thank you for your prompt responses. I am exploring a parallel filtering technique followed by convex hull computation in 2D using a GPU. In that process a close approximate polygon, consisting only of x-monotone and y-monotone chains and enclosing most of the points, is obtained (if I am not wrong!). So I thought of finding the CH of this polygon, removing all points inside it, and recursing. So I am just thinking of how to get the CH in this special case in O(p), where p is the number of vertices in the polygon... I am also looking at the scope of using a similar idea for the 3D convex hull. Can you give some guidance if possible?
Are there applications where the input point set consists of millions of points? Would it be worth exploring GPU-accelerated methods for finding the convex hull of such input sets?
Sorry, I must have missed your comment earlier. Sure. There are quite a few papers on parallel convex hull computation. There are things that one can do very easily. E.g., a heuristic in convex hull computations is to first compute the hull of a small subset and then to get rid of all points that lie in that convex hull before continuing. That check can be easily parallelized. Divide&Conquer algorithms, where we first compute convex hulls of small subsets, also make a good starting point for a parallel algorithm. A very nice recent paper on randomized incremental construction and parallelization is the following: ua-cam.com/video/8NZ-2MjjlV0/v-deo.html
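As a concrete illustration of that first heuristic, here is a small numpy/scipy sketch of my own (not from the lecture): compute the hull of a small random sample and discard every point strictly inside it; the per-point test is independent, which is what makes it easy to parallelize:

import numpy as np
from scipy.spatial import ConvexHull

def filter_interior(points, sample_size=32, seed=0):
    # points: (n, 2) array; returns only the points that can still be hull vertices
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=min(sample_size, len(points)), replace=False)
    hull = ConvexHull(points[idx])
    # hull.equations holds [normal, offset] per facet; a point p lies inside
    # the sample hull iff normal . p + offset <= 0 for every facet
    A, b = hull.equations[:, :-1], hull.equations[:, -1]
    inside = np.all(points @ A.T + b < -1e-12, axis=1)
    return points[~inside]

pts = np.random.default_rng(1).random((1_000_000, 2))
print(len(filter_interior(pts)))   # typically only a small fraction of the points survives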
@@algorithmslab Thank You
For Graham scan we need to reverse the order of P and then compute the lower hull too, right?
Yes, indeed. I am aware that we are very brief about this in the video (ua-cam.com/video/fTqPVjy0rzU/v-deo.html). But after reversing the order, it is exactly the same algorithm again. E.g. from our notebook (see nbviewer.org/github/Jerik79/geoalg-notebooks/blob/master/notebooks/01-ConvexHull.ipynb):
points.sort(key = lambda p: (p.x, p.y))
upper_hull = single_graham_scan(points)
points.reverse()
lower_hull = single_graham_scan(points)
return upper_hull + lower_hull[1:-1]
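The notebook defines single_graham_scan elsewhere; as a self-contained sketch, such a single scan could look roughly like this (my own guess, with a namedtuple standing in for the notebook's Point class):

from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])

def cross(o, a, b):
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x)

def single_graham_scan(points):
    # points must already be sorted; keeping only right turns yields the upper
    # hull, and running the same scan on the reversed list yields the lower hull
    hull = []
    for p in points:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return hull

def convex_hull(points):
    points = sorted(points, key=lambda p: (p.x, p.y))
    upper_hull = single_graham_scan(points)
    points.reverse()
    lower_hull = single_graham_scan(points)
    return upper_hull + lower_hull[1:-1]

print(convex_hull([Point(0, 0), Point(2, 1), Point(1, 3), Point(4, 0), Point(3, 3)]))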
Given the convex hull of a set of points, is it possible to come up with its Delaunay triangulation in less than n log n time?
No, the convex hull does not carry enough information. Imagine you have a point set for which you want to compute the Delaunay triangulation. Additionally, I give you three points that form a huge triangle covering the point set (as in the randomized incremental construction of the Delaunay triangulation). The convex hull of the combined point set is simply the huge triangle. This, however, contains no information about the Delaunay triangulation of the original point set.
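One can see this counterexample directly with a few lines of scipy (an illustration of my own, not part of the lecture):

import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(0)
pts = rng.random((50, 2))                               # the original point set
tri = np.array([[-100, -100], [100, -100], [0, 100]])   # huge triangle covering it
combined = np.vstack([pts, tri])

print(len(ConvexHull(combined).vertices))   # 3: the hull is just the huge triangle
print(len(Delaunay(pts).simplices))         # many triangles, none of them visible in that hull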
@@algorithmslab Thank you
Thanks a lot! 😇
You are welcome!
Thank you