@@Frexuz Adding the ID (or uuid) is not semantic; it only provides determinism, so it doesn't matter whether it's sequential. We're merely adding it to provide a stable sort.
Always implemented cursor pagination manually. Going backwards is a pain! But doable, and you can always "fake" the page number on the client side; it's purely cosmetic. Edit: skipping pages too, as mentioned later in the video. You can totally do it, I would just rather not haha. Always a relative jump with respect to the current cursor. And often you still need to fall back on offset/limit if the user provides no cursor, or only a page index.
I use both techniques, but most of the time I use limit/offset because of the need to paginate to a specific page. I use the cursor technique for paginating notifications, tasks, and similar things.
Thanks Aaron! That was an amazing video, as usual.
Totally makes sense to mix and match where appropriate! And thanks for the kind words 🤗
Your content is phenomenal. The explanations and visualizations are great. One of the best DB channels out there 👏
Thank you so much!
I don't even use SQL in my day to day (not in our tech stack atm), but you explain so many tangential/incidental concepts so well that they're still extremely useful to me. And very entertaining! @@PlanetScale
Cursor pagination seems super useful in scenarios where you have:
a) infinite scroll pagination
b) a table that updates in real time
c) both
I'm about to add pagination to an old report, and at first I was going with limit/offset, but after watching this video I know it makes a lot more sense to just use a cursor. Thanks for such an organized and well-thought-out video. 🎉🎉
I have been working with MySQL for the last 17 years and I never used cursors, but your video helped me understand them. Thank you.
I’m always actually learning stuff I thought I already knew with you guys. Great content!
You are the first person to explain this in a way I understand, while still being very technical and using SQL examples.
Paginate vs Paginate is the next great battle in tech. Great deep dive on the differences between the two paginate (paginate?) methods that gives a really good framework to help you decide when to reach for each one.
A++++ would buy again.
Ha I read each "paginate" in your comment differently 😂👏
A A Ron?
If anyone is wondering why the last id is encoded to base64, here is the reason: JavaScript can only handle numbers up to a certain limit, after which it won't work. The solution is to encode your id and send it as a string, then decode it back to a number on the client. If you have a cursor that won't grow beyond the maximum available number, you can use numbers directly instead of encoding them and adding overhead. This was started by Facebook (now Meta) for this very reason.
Yep, base64 is extremely portable! And it's great for wrapping up multiple values into a single string.
Also, there is a security catch here! Implementing this carelessly, without enough validation of the base64-encoded value, leaves you open to SQL injection.
@@DomatoZause Isn't there also a security issue with information leakage? Anyone can decode the base64 string and see at least some of your (probably not very interesting) column names. Might be worth encrypting things for bonus security points.
@@chrishwheeler My thought on this is that if you have taken the necessary security measures in your application, you don't need to worry here. Base64 is used for sending data, not for encrypting it or increasing security. It's just a wrapper that is convenient.
@@rampandey191 Exactly my thoughts.
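For anyone wanting to see the shape of this, here's a minimal Python sketch of an opaque base64 cursor (the JSON payload and key names are my own assumptions, not any particular API's format). The decode step validates the payload before it goes anywhere near a query, which is the validation the injection comment is asking for:

```python
import base64
import json

def encode_cursor(payload: dict) -> str:
    """Pack cursor fields (e.g. the last seen id) into an opaque string."""
    return base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()

def decode_cursor(token: str) -> dict:
    """Decode and validate the cursor before using it in a query."""
    payload = json.loads(base64.urlsafe_b64decode(token.encode()))
    # Only accept the expected key with the expected type; the value is
    # then bound as a query parameter, never spliced into SQL text.
    if not isinstance(payload.get("id"), int):
        raise ValueError("invalid cursor")
    return payload

token = encode_cursor({"id": 25995})
assert decode_cursor(token) == {"id": 25995}
```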
I've been using paginate, chunk and cursor in Laravel, but only LIMIT OFFSET in MySQL. This is a great video! Thanks!
Great video! I've only used offset pagination. As a web developer I can't see many use cases for just prev/next navigation unless you have 2 or 3 pages; otherwise you want the user to have control to navigate a bit closer or further, or even to the end, and this includes navigating to specific pages: left and right, start and end. For cursor pagination, infinite scrolling seems like a much more real-world option, and there it would make sense to use it.
this is an excellent description of the differences between the two methods, thank you for putting this together!
I've been looking at articles on cursor-based pagination, and this has been the most informative for actually implementing it. Thank you!
That was really informative. Keep up the great content. I feel like DB subjects are not really talked about, so this channel is a blessing.
Beautifully explained! I really appreciate it
I learned this some years ago when I had to process all items in a table with many millions of rows. I used LIMIT 1000 and OFFSET (Laravel chunks). In the beginning everything was fine and fast, but as time went on, the SELECT query became slower and slower, and the DB load higher and higher. So I cancelled the processing job to investigate the problem and changed the query to something like WHERE id >= i*1000 AND id < (i+1)*1000, and every query was fast again.
How do you implement that on the client side?
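On the server side, the keyset version of that chunked scan might look like this in Python with SQLite (table and column names invented). Tracking the last seen id, rather than computing fixed ranges, also tolerates holes in the id sequence:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"row {i}") for i in range(1, 5001)])

def process_in_chunks(conn, chunk_size=1000):
    """Walk the table by keyset instead of OFFSET: each query seeks
    straight to the start of the next chunk via the primary key."""
    last_id, seen = 0, 0
    while True:
        rows = conn.execute(
            "SELECT id, payload FROM items WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, chunk_size)).fetchall()
        if not rows:
            return seen
        seen += len(rows)
        last_id = rows[-1][0]  # the cursor for the next chunk

assert process_in_chunks(conn) == 5000
```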
Really clear and not sloppy. I like the way you demonstrate things to people; it's really helpful! Thanks for such great content!
A very easy to understand explanation! Thank you.
This video is insanely timely, great explanation, thanks!!
Thanks for this video. Really helpful. I was wondering:
1. At what point does offset pagination become unfeasible in terms of performance? 1M records, 2M records? Usually, though, you would surely first reduce the initial result set to the user's scope (but not for, e.g., an internal admin dashboard).
2. With cursors: this operates under the assumption that the ID is an incrementing integer, right? If an application uses UUIDs as primary keys, would that work too? Or do you need another unique auto-increment column to get that precision?
3. Paginating backwards with cursors: I would assume the previous-page token would need to be kept track of, or is there a way to reference only the first record in the set as the 'end' point of the previous cursor and limit it to the ten before that?
I hardly ever subscribe to channels, but your content is sooooo well created.
keep up the good work.
That means a lot, thank you!
Great video. Learning from YouTube is amazing, BUT it's all about finding the right video. This one has 58k views; it should be way higher.
You're raising the bar with each and every video, Aaron. This is not fair!
I find that limit/offset is only ever a requirement when the PO or PM or whoever is designing the UI blindly assumes that clicking a list of page numbers is the simplest solution, rather than specifying the requirements and leaving the implementation up to the developer.
If you tell them that a virtualised list/infinite scrolling is just as easy and works much better, they are usually more than happy to use it; they just assumed it would be complicated or didn't even think of it.
Most of the time when the list is very long what you really need is just searching and filtering, and then you can use the continuation token to page the result as the user scrolls. If there are still a lot of results then the user can tighten their search. No user ever wants to go straight to page 27.
You'd be surprised what users want to do!
I didn't get the part where first_name > 'Aaron'. Fantastic video, by the way!
I had a moment too until I wrapped my head around it. It's important that they are ordered by first_name and then by id. If you search for all users where first_name = 'Aaron' and id > 10, you will get all Aarons after Aaron #10, but no Borises. Therefore you have to include them too, by searching for first_name > 'Aaron'. Since first_name is sorted in ascending order, everything greater than 'Aaron' is just everything below it: Borises, Cedrics, Dorotheas. That's why you can use the greater-than operator on a string.
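That logic can be sketched against a toy table (the names, ids, and schema here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, first_name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [
    (5, "Aaron"), (10, "Aaron"), (12, "Aaron"),
    (7, "Boris"), (3, "Cedric"), (9, "Dorothea"),
])

# Cursor = (first_name, id) of the last row on the previous page,
# here Aaron #10. The OR keeps the remaining Aarons AND the Borises.
rows = conn.execute(
    """SELECT first_name, id FROM users
       WHERE first_name > ? OR (first_name = ? AND id > ?)
       ORDER BY first_name, id
       LIMIT 3""",
    ("Aaron", "Aaron", 10)).fetchall()

assert rows == [("Aaron", 12), ("Boris", 7), ("Cedric", 3)]
```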
I have no idea what MySQL is, but I found this video because I was also unsure about how to pronounce 'paginate' and 'pagination'. Thanks!
8:25 My question, which I figured out the answer to: why does a cursor need to include the extra/useless data of the name? The id is unique and sortable, so I would think all you need to know for the cursor is that it's after that id (if the id is in the ordering), since the rest of the ordering is handled by the rest of the SQL statement.
My answer: it's possible that there's someone named Bobby with id 20000, which is lower than Aaron's id of 25995. So you only need to care about the id when the name is the same; otherwise anything after the name "Aaron" is fair game.
I learnt a new thing today! Thank you for the excellent teaching!
very cool content and friendly explain, thank you
What about the case where the ids are not a serial type? What if it's a uuid, how would the cursor WHERE clause work?
As long as your order is deterministic, you're good! Ordering by UUID is deterministic, because they are unique.
Basically any column or combination of columns that is guaranteed to be unique is deterministic. The id was just used as an example because it's guaranteed to be unique... well, it better be 😅
This was a very nice and compact lesson, thanks mate.
great presentation, well done
I am not a DB guy, but this was soooo good. Many thanks for the video.
that is such a good video! thanks!
It's such a great show.
I enjoy it more than the educational content.
Good job PlanetScale.
Awesome! I haven’t heard anyone else explain SQL as well as you do. Do you have a course on this stuff?
Thank you! And boy do I. It's totally free, too: planetscale.com/mysql
Great video!!!
I recently found a way to greatly simplify the cursor pagination query for sorting by two or more columns.
The trick is to use tuples for comparison.
I'm curious if there are any trade-offs that I should be aware of.
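A sketch of that row-value (tuple) form, which SQLite also supports since 3.15 (toy data, my own example). One trade-off worth knowing: a single row-value comparison only expresses sorts where all columns run in the same direction, so mixed ASC/DESC ordering still needs the expanded OR form:

```python
import sqlite3  # row-value comparisons need SQLite 3.15+ (2016)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, first_name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [
    (5, "Aaron"), (12, "Aaron"), (7, "Boris"), (3, "Cedric"),
])

# (first_name, id) > ('Aaron', 5) is shorthand for
# first_name > 'Aaron' OR (first_name = 'Aaron' AND id > 5)
rows = conn.execute(
    """SELECT first_name, id FROM users
       WHERE (first_name, id) > (?, ?)
       ORDER BY first_name, id
       LIMIT 2""",
    ("Aaron", 5)).fetchall()

assert rows == [("Aaron", 12), ("Boris", 7)]
```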
pagination for dummies, thanks a lot ;-)
What are you using for the visualizations? Manim perhaps? They are stunning!
Don't tell anyone but it's Apple Keynote 🤐
Could you please explain what you mean by "scrolling and throwing away that offset data"? I believe you are talking about memory-level things. It would be so helpful if you could explain and show how pagination works at the disk level: how pagination queries select data from disk partitions stored in B-tree format.
I don't usually leave comments, but this is actually really good stuff. Loved your PHP vid and love this as well. Subbed.
I appreciate it! Thank you
Can you use offset/cursor pagination together with indexes? A good example would be geospatial queries.
Bro I like you more than PlanetScale 😂
Haha I appreciate that but I'll keep working to make it even 😂
Great video! Thank you!
Really great video !
Definitely not more than i wanted to know! Very helpful.
Awesome content, better than the theory I learned in college.
Well that's wonderful to hear, thank you!
Super helpful explanation and visuals! Thank you!
Great video, thanks!
With cursors you can still use pages; you just have to go a bit further to find which record will be first.
Hmm I'm not sure how directly addressable pages via cursor could work, except by manually paging through one by one via cursor. Which kind of defeats the purpose!
@@PlanetScale You can get the difference between the current page and the target page, then make a separate subquery for only the id and add it to your main query. Sure, it's kind of like going through all the pages in between at once, but with key-only selection it should be performant enough. And it's way better than forcing the user to scroll n pages manually :) For going backward, you can just reverse the order of your id query and the id condition.
That way you still get all the benefits of the cursor approach while also allowing fast travel between records. Downsides: pages will not be the same for everyone, and the implementation is more complicated.
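Roughly what that key-only skip could look like (schema and function names invented; `skip_pages >= 1` skips that many whole pages past the current one before fetching):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"row {i}") for i in range(1, 101)])

def jump_pages(conn, last_id, skip_pages, page_size=10):
    """Skip `skip_pages` whole pages past the current one (skip_pages >= 1)
    by scanning ids only, then fetch the target page with a plain cursor."""
    boundary = conn.execute(
        "SELECT id FROM items WHERE id > ? ORDER BY id LIMIT 1 OFFSET ?",
        (last_id, skip_pages * page_size - 1)).fetchone()[0]
    return conn.execute(
        "SELECT id, payload FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (boundary, page_size)).fetchall()

# From the end of page 1 (last_id=10), skip pages 2 and 3 entirely.
page4 = jump_pages(conn, last_id=10, skip_pages=2)
assert [r[0] for r in page4] == list(range(31, 41))
```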
Explain how cursor pagination would work when you have the application ordering by multiple different columns.
How about using UUID as the row's identifier?
That works! No difference at all. Adding the ID is purely to achieve a deterministic result. Since UUIDs are unique, that's totally fine
@@PlanetScale You used the arithmetic comparison operator (greater than); does that work with UUIDs too?
I wish we had a materialized CTE inside a function with an expiry time, so we could have more complex queries. Instead we pass a refresh key into the function to choose between fetching new records or retrieving from cache, sort of like session tokens.
Awesome explanation! Thanks!
AWESOME! thank you
I am so glad I've stumbled upon this video. Let's see what you got here! Subscribed!
We shall not let you down 🫡
I was trying to google how to pronounce paginate when I clicked on this LOL
Interesting explanation, thanks for this!
But what if there are no incremental ids in the DB? How would cursor-based pagination work if the primary key is a uuid and there's no serial number available?
UUIDs can be ordered as well, like the first name for example
@@codingbyte4529 True, but the explanation relied heavily on the sequential nature of the id column: 'id > 2500'. This behavior is not replicated with uuids.
You can order by anything you want, UUID included, as long as it's deterministic. No problem!
Yes, cursor pagination is great but it gets more complicated and controversial when we talk about UUID as primary key in the database table.
Works just the same with UUIDs. As long as you have deterministic ordering, it works the same 🤗
@@PlanetScale What about random strings? Is it the same as with uuids, can they be used with a cursor?
@@PlanetScale I'm using cuid as the primary key but have no other columns with deterministic ordering. Is my only choice to use offset/limit?
@@PTBKoo Use a timestamp column along with your PK then.
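A quick sketch showing why string keys work the same way: they compare lexicographically, so `id > ?` still walks the table in a stable order (schema invented, ids generated randomly):

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id TEXT PRIMARY KEY, title TEXT)")
ids = sorted(str(uuid.uuid4()) for _ in range(25))
conn.executemany("INSERT INTO tasks VALUES (?, ?)",
                 [(i, f"task {n}") for n, i in enumerate(ids)])

def page(conn, after_id="", size=10):
    """String keys compare lexicographically, so `id > ?` gives a
    deterministic walk even though the ids aren't sequential."""
    return conn.execute(
        "SELECT id FROM tasks WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, size)).fetchall()

first = page(conn)
second = page(conn, after_id=first[-1][0])
# Every page picks up exactly where the previous one left off.
assert [r[0] for r in first + second] == ids[:20]
```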
If you want to use cursors but also support jumping to pages, couldn't you conditionally use cursors or offsets? You could use a cursor by default, but if a user wants to jump to page x, then you can drop down to offsets. Then use an offset if the user navigates to the next page (or previous page, if you provide both a next and previous cursor).
There's a third way: maintain an 'index' table that references records in the original table. Unlike the original table, it must have no 'holes': its ids must be strictly sequential. This gives you the flexibility to quickly jump to the page of interest using BETWEEN index1 AND index2. The solution isn't perfect and acts much like the limit/offset solution: it's much faster to access while still letting you reach random pages, but it's harder and slower to update and maintain, and it's still prone to 'drifting'.
That's an interesting technique for sure. That reminds me of the deferred join technique where you paginate a subquery of IDs only, and then join those IDs back to the table to get the full rows. Kinda similar, but much more flexible!
I'm not sure what kind of benefit this has over just using an index and an offset for your cursor. id > page * itemsPerPage. As long as you know how your index is sorted it shouldn't be a problem, if i'm not missing something.
@@disinfect777 Regular ids usually have 'holes' because an item can be deleted, disabled, etc. For this reason you cannot rely on your surrogate id for direct pagination (id = pagenum × pagesize); you need an additional 'rank' instead, which guarantees it is always sequential and without holes. Once you have this guarantee, you can retrieve the required records using the BETWEEN operator, which is very fast: faster than offset and cursor (I think).
@@sergeibatiuk3468 OK, yeah, I see your point if you want to jump to specific pages. I do think it's better to just have next/prev or even infinite scroll. I can't see many people using that feature.
@@disinfect777 Not sure how many need this (I did not conduct an investigation), but I had this specific requirement and it worked for me.
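A toy version of that rank-table idea (schema and names invented). The side table assigns a gapless rank to each surviving row, so BETWEEN slices any page directly; the cost is that the rank table needs rebuilding or maintenance after deletes, which is the drawback discussed in this thread:

```python
import sqlite3  # window functions need SQLite 3.25+ (2018)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")
# Simulate 'holes': every third id is missing.
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"row {i}") for i in range(1, 101) if i % 3 != 0])

# Gapless rank for each surviving row.
conn.execute("""CREATE TABLE item_rank AS
                SELECT ROW_NUMBER() OVER (ORDER BY id) AS rnk, id AS item_id
                FROM items""")

def fetch_page(conn, page_num, page_size=10):
    lo = (page_num - 1) * page_size + 1
    return conn.execute(
        """SELECT items.id FROM item_rank
           JOIN items ON items.id = item_rank.item_id
           WHERE item_rank.rnk BETWEEN ? AND ?
           ORDER BY item_rank.rnk""",
        (lo, lo + page_size - 1)).fetchall()

# Page 3 is exactly ranks 21-30, regardless of the holes in items.id.
assert len(fetch_page(conn, 3)) == 10
```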
Could you do more content on DB transactions and LOCK FOR UPDATE & CO?
Good idea, I'll add it to my list!
What if an entry has been added before your page?
Depends on the method! With limit/offset you'll see a duplicate record on the next page. With cursor you'll be fine
this is such a good video
Great content!
Okay, I will say it: ORDER BY and LIMIT are a solution for noobs. The cleanest solution has been window functions, for 4 years now if you're using MySQL, 6 years with MariaDB, and for more than a decade with PostgreSQL, Oracle, and MS SQL Server.
00:38!
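The comment doesn't show a query, but one common window-function pagination shape is numbering the rows with ROW_NUMBER() over the sort order and slicing out a page (toy schema, my own example). Worth noting that the engine still has to produce the numbering up to the page, so this isn't automatically cheaper than keyset pagination:

```python
import sqlite3  # window functions need SQLite 3.25+

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, first_name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, f"name{i:03d}") for i in range(1, 26)])

# Number every row by the sort order, then slice out page 2 (rows 11-20).
rows = conn.execute(
    """SELECT id FROM (
           SELECT id, ROW_NUMBER() OVER (ORDER BY first_name, id) AS rn
           FROM users)
       WHERE rn BETWEEN 11 AND 20""").fetchall()

assert [r[0] for r in rows] == list(range(11, 21))
```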
Awesome explanation, thanks. But in the limit/offset method, you mention that all records before the offset are discarded, which makes it less performant. I'm not sure why the database can't just skip over those records in the first place, similar to how it skips records in cursor pagination. Could you please explain that part? Thanks!
I think it's able to search by conditions much faster because it can perform a binary search, which has a reduced complexity of log(n). When skipping through records this is not possible, because it needs to count how many it's skipping, which means it reads n records.
That makes a lot of sense, thanks @@MrJellekeulemans
/api/user?sort=age
In this scenario the table is ordered differently, so how does cursor-based pagination work?
If PlanetScale ever establishes a presence in South Africa I would surely migrate over
I use both but didn't know what they were called, and I don't know the performance difference. Still thinking about the best way to get the total count(): should I run another query, or something else?
Getting total count on a huge table is tough no matter what. But yes, you'd have to run an entirely separate query to get the total records and therefore total number of pages.
How does cursor pagination handle insertion of records before and after the cursor?
If you're paging forward and a record is inserted behind the cursor, it won't throw off further pages, but you will have missed that record. Paging backwards you will see it. No method is totally resistant to shifting records, but cursor is more resilient
I really love the animations in these videos. Do you mind sharing what software you use to make them? I really appreciate it. 👍
These were made in Keynote, if you can believe it
@@PlanetScale That's incredible 😮. Thanks for sharing. I really enjoy your content. 👍
Btw, cursor pagination is commonly called keyset pagination, which avoids confusion with DB cursors.
Awesome content, great!
How would you implement cursor pagination if your ids are uuids? My first thought is using a createdAt column instead of the id. Is that good practice, or is there a better solution?
You can just use the uuid anywhere I write id. Works the same way! Doesn't matter if the id is an int or string, as long as it's unique. Adding the additional id (or uuid) is only there to make the sort stable.
How would you go about reversing order when using a cursor-based approach? I.e., you've got to page 3 and then want to go back to page 2?
You would need to send out a token to the frontend that represents the first item in the page. If you look at Stripe's api for example, they usually have a next_page and prev_page token. Exact same idea as the video, just with the first record instead of the last!
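A sketch of that prev-page idea (hypothetical `posts` table): going backwards means flipping both the comparison and the sort direction, then re-reversing the rows in memory so the page reads in the normal order.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO posts VALUES (?)", [(i,) for i in range(1, 21)])

# Forward: page 3 (pages of 5) starts after id 10, i.e. ids 11-15.
page3 = conn.execute(
    "SELECT id FROM posts WHERE id > ? ORDER BY id LIMIT 5", (10,)).fetchall()
first_id = page3[0][0]  # the prev_page token: first row of the current page

# Backward: take the 5 rows just *before* that token, scanning in reverse...
prev_rows = conn.execute(
    "SELECT id FROM posts WHERE id < ? ORDER BY id DESC LIMIT 5",
    (first_id,)).fetchall()
# ...then flip them back into ascending order for display.
page2 = list(reversed(prev_rows))
```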
But the cursor only works if the ID is an integer, right? What about a UUID as primary key? And if you delete data, doesn't that make both offset and cursors unusable? What's the best practice for this use case?
The cursor still works even with a UUID. As long as ordering is deterministic, you can use whatever columns you want!
When using a cursor with multiple columns, I don't understand why you'd need to query both first_name and the id. Since the id guarantees deterministic ordering, looking for id > 25995 should be enough to get the same order you had before (assuming the same ORDER BY, of course); first_name = 'Aaron' or first_name > 'Aaron' seems redundant.
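For what it's worth, id alone isn't enough once you order by another column, because ids follow insertion order, not the name order. A small demonstration (hypothetical table and values; Python with sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, first_name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "Zoe"), (2, "Aaron"), (3, "Mia"), (4, "Aaron")])

# Name order is: Aaron(2), Aaron(4), Mia(3), Zoe(1)
page1 = conn.execute(
    "SELECT first_name, id FROM users ORDER BY first_name, id LIMIT 2"
).fetchall()
last_name, last_id = page1[-1]  # ('Aaron', 4)

# Wrong: filtering on id alone. Mia(3) and Zoe(1) have ids <= 4,
# so they're filtered out and page 2 comes back empty.
wrong = conn.execute(
    "SELECT first_name, id FROM users WHERE id > ? ORDER BY first_name, id",
    (last_id,)).fetchall()

# Right: compare on the full (first_name, id) cursor.
right = conn.execute(
    """SELECT first_name, id FROM users
       WHERE first_name > ? OR (first_name = ? AND id > ?)
       ORDER BY first_name, id""",
    (last_name, last_name, last_id)).fetchall()
```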
This is f-ing gold !!
Hi Aaron, what MySQL user interface do you use? I'm trying to install it.
Table Plus!
Dude, what about when you have a UUID as the primary key? Since it's not numeric, I think this isn't applicable?
Loved it
What about using a cursor with UUIDs as the id?
Works just the same! Anything you put in your `order by` statement needs to be in your cursor.
Well, I think by the time a table gets so big that offset performance is a major problem, it will also be a major pain not to be able to jump to page one gazillion in an instant?
Check out the deferred join technique, linked below the video! It's helpful for offset pagination with huge tables
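A sketch of the deferred join idea mentioned here (hypothetical `articles` table with a wide `body` column): instead of dragging every column through the whole offset scan, you paginate over the narrow indexed id first, then join back to fetch the wide columns for only the surviving rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO articles VALUES (?, ?)",
                 [(i, "x" * 100) for i in range(1, 101)])

# Naive offset: the wide `body` column rides along for the entire scan.
naive = conn.execute(
    "SELECT id, body FROM articles ORDER BY id LIMIT 10 OFFSET 50").fetchall()

# Deferred join: do the limit/offset over just the id in a subquery,
# then join back to pick up `body` for only those 10 rows.
deferred = conn.execute(
    """SELECT a.id, a.body
       FROM articles a
       JOIN (SELECT id FROM articles ORDER BY id LIMIT 10 OFFSET 50) page
         ON a.id = page.id
       ORDER BY a.id""").fetchall()

print(naive == deferred)  # identical results, cheaper scan on wide tables
```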
@@PlanetScale wow, great advice, thank you!
I have one question: what if my WHERE clause is super complex and it narrows my data set down a lot? The cursor query still has to apply that WHERE clause on top of the cursor condition, just like offset does. Isn't that just as complex, or more so?
What about uuid? Can't do greater_than
UUIDs are ordered deterministically, because they are unique. So you can do greater than
@@PlanetScale well, deterministically, but not sequentially
@@Frexuz adding the ID (or UUID) is not semantic; it only provides determinism, so it doesn't matter whether it's sequential. We're merely adding it to provide a stable sort.
One more sub. Have a nice evening :)
Does anyone know what software or technique he uses to produce animation like this?
It's Apple Keynote 🤐 Don't tell anyone
Good stuff as usual 🎉😊
This is awesome
You will never get me to stop saying "page"-inate.
Although cursor sounds great after watching this video, I can't come up with a suitable UI/UX to use it with.
I think GitHub uses it to page through commits, and I think Stripe uses it to page their API. Anything that only requires next/prev works!
Always implemented cursor pagination manually. Going backwards is a pain! But doable, and you can always "fake" the page number on the client side. It's purely cosmetic.
Edit: skipping pages too, as mentioned later in the video. You can totally do it, I would just rather not, haha. It's always a relative jump with respect to the current cursor. And oftentimes you still need to fall back on offset/limit if the user provides no cursor, or only a page index.
TL;DR offset is more suitable for pagination and cursor for infinite scrolling
Yes but we need that sweet, sweet average watch time so everyone please watch the video
Cursor/keyset is used for API pagination. If you run a query on YouTube's API, you get a next and previous page token instead of page numbers.
I use limit and offset because I'm not deleting records in the database. never did lol
It's not only about deleting, but also updating. But use whatever works!
wait. so I've pronounced paginate wrong the whole time?
you and me both
I am always having to lookup both pronunciation and spelling.
There is a security catch here (in the base64-encoded cursor method)! Implementing it without enough validation of the base64-encoded value leaves a way open to SQL injection.
Yes, whenever you put user input into your query you need to use parameter binding, a query builder, an ORM, or similar methods.
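A minimal sketch of what that validation plus binding might look like (hypothetical `decode_cursor` helper and `items` table; sqlite3 standing in for the real database): the opaque token is decoded, checked to be a plain integer, and then bound as a parameter instead of ever touching the SQL string.

```python
import base64
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO items VALUES (?)", [(i,) for i in range(1, 11)])

def decode_cursor(token: str) -> int:
    """Validate the opaque cursor: it must decode to a plain integer."""
    raw = base64.b64decode(token, validate=True).decode("utf-8")
    if not raw.isdigit():
        raise ValueError("malformed cursor")
    return int(raw)

token = base64.b64encode(b"5").decode("ascii")
last_id = decode_cursor(token)

# Never interpolate the decoded value into SQL; bind it as a parameter.
rows = conn.execute(
    "SELECT id FROM items WHERE id > ? ORDER BY id LIMIT 3",
    (last_id,)).fetchall()

# A tampered token like base64("5 OR 1=1") is rejected by validation
# instead of ever reaching the query text.
bad = base64.b64encode(b"5 OR 1=1").decode("ascii")
try:
    decode_cursor(bad)
except ValueError:
    pass  # rejected, as intended
```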
One more issue: the id is, most of the time, a UUID.
With limit/offset, is it better to re-run the query to fetch the current page's updated data after any modification or update?