Thanks for sharing this with us. I was thinking that the limit was the number of rows per page, but now I completely understand this feature. Thanks again!
At timestamp 10:05 you mention you will show how you are parsing out to get the skip token value, but you don’t show it. For the action ‘update skip token variable’, what expression are you using for the value field? I’ve tried a few things, but I don’t get correct results.
This is exactly the solution I need, as I looked all over YouTube and couldn't find the right video that highlights all the components you included in this video. Thanks for doing that. My only thing is that I am trying to get the data from SharePoint (over 100k rows), not Dataverse, and then pipe the data to Cosmos. I get that the concept would be the same, just a different data source. Thanks much.
Thank you Steve!! Pagination and Concurrency are saving me so much time :) Wonderful video.
Excellent. Glad you enjoyed it!
Just stumbled across this and subscribed to your channel. Great video on the topic; I have been searching for how to handle large dataset in Automate and this will do it for me! Thanks!!!
Best video I've come across
Thank you. Glad you enjoyed it!
Thanks a lot, this information was really useful to me. thx
Hi Steve
I have Power Apps for Dynamics 365 and Power Apps Guides
in my licence list.
Does it work with this licence?
Hello Steve, I am in the process of attempting to utilize Excel to create / update records in a SharePoint List. I currently have my flow functioning the way I want. However, I am stuck at that 5,000 limit. My data will be upwards of 200 times that. I know this will move slowly but will work for what I would need.
How do I implement one of these methods into it? I understand that I will have to turn my flow into a solution. Just not sure what that will look like with my needs.
Currently I am utilizing a Scheduled Flow ---> Get files (properties only) ---> Apply to each -> Update file properties -> List rows present in a table -> Apply to each -> Get items -> Conditions -> Delay -> Update file properties
The video I watched to do this is
Reza Dorrani - Add & Update Excel Data to SharePoint List using Power Automate | Excel Import using flow
My flow will be almost identical to his. The only difference is I did not need to do all the extra steps to select choices. My data is very clean and singular text.
Awesome tutorial.
Very clear video. More please!
Thank you!
Would you create a similar video showing how to use Pagination with Flow pulling from an API/Web source?
Concurrency actually defaults to 20, not 1, even with the control off. To do truly synchronous (1-by-1) processing, you need to turn concurrency control ON and slide the slider down to 1.
Hi Steve
Very well explained.
One question:
After the do until loop, with what value have you updated the skip token variable?
Thanks in advance!
Sorry I'm not sure I fully understand your question. I parse the OData response to get the next page URL. If that value is not empty, meaning there is a URL present, that means you can request the next "batch" of data. Hopefully that helps answer your question.
@@SteveWinward Thanks Steve👍
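For anyone still hunting for the skip token piece: this is not the exact expression from the video, just a rough sketch that assumes the next page URL is stored in a string variable named NextLink and that the URL carries a $skiptoken query parameter. In the Set variable action you could use something like:

first(split(last(split(variables('NextLink'), '$skiptoken=')), '&'))

That grabs everything after '$skiptoken=' up to the next '&'. Depending on how the URL is encoded, you may also need to wrap it in decodeUriComponent().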
Brilliant...just what I need! Thank you...😊
...I can't find the link to the documentation / example. Has that gone? Could you help me get to it somehow? Desperate to use it...😊
Here's the link on GitHub,
github.com/SteveWinward/PowerApps/blob/master/WriteUps/PowerAutomatePagination.md
You should have shown how you are parsing the skip token as well; it would be valuable.
Great work ! Thanks
Thank you, glad you liked it!
Hi! This is an awesome video! Thanks so much! Does this only work with DataVerse data, or can I use it with SharePoint data?
It will work against any API that uses OData. SharePoint APIs support OData, so it should work there too.
@@SteveWinward Awesome! Thanks again!
Hey Steve! I want to export data from Power BI to excel using Power automation, But it only updates 1000 rows after a run. Can this method also be applied when the source file is from Power BI?
Great question. If the Power BI API lets you do paging, then yes, you can use this same technique. I'm not aware off the top of my head whether the Power BI export API supports this, but the pattern in Power Automate supports it.
Can I use pagination to get pages of 100 records each? And if so, how do I configure that?
Hi
Good video. But how can we do this if we are using the SQL connector to extract more than 5,000 records?
Instead of Dataverse, will it work for a SharePoint list? Also, will it work for the seeded version of Power Automate?
I believe this same pattern will work with SharePoint. I haven't tested it myself, but my guess is it would. Also, if you are doing this against SharePoint only, you can use the seeded version of Power Automate.
Thank you Steve. Will this pagination apply to Excel as a data source?
I believe it should work with the Excel connector. I have not tried that one myself, but I imagine the API has the same pagination capabilities.
@Ahmad Syakir Shahruddin For the Excel for Business connector. On the action, click on the 3 dots > Settings. Enable pagination and set the threshold to higher than 5,000. This will automatically do the paging for you and return a larger dataset size.
Hi Steve,
Thanks for your video!
While your flow works on my data, there's still something I am not totally clear about: when discussing the "Do until" loop at 10m02s, you said "as long as the next URL property is not null", then we loop. Which makes sense for a "Do while" loop but not in this case since it's a "Do until". At least I would run the loop UNTIL the next link variable becomes null, meaning that the NextLink property does not exist anymore. Why does it work in that case?
Probably, if the NextLink property is not found, it comes back as null.
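To make that concrete, here is a hedged sketch of the Do until exit condition, assuming the next page URL is kept in a string variable named NextLink. In the loop's advanced/edit mode the condition could be written as:

@empty(variables('NextLink'))

The body runs, the condition is checked afterwards, and the loop keeps going while NextLink still holds a URL. It stops on the first check after the variable comes back empty or null, so "loop as long as the next URL is not null" and "loop until the next link is null" describe the same behavior.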
I want to import more than 1 lakh (100,000) records into Dataverse from a CSV file in SharePoint using a dataflow instead of Power Automate. Can you guide me on possible ways please?
Hi Steve. Great stuff. Wanted to know how this would work with SharePoint.
I have a flow that creates/updates SharePoint items sourced from an Excel file present on SharePoint. The data of course will be going beyond 5,000 rows in Excel, and evidently at some point I have to worry about paging. Could you please create a video or help so that creating/updating SharePoint list items for newly added rows/existing rows (no matter the number of rows to be listed in Automate) is possible? P.S. an O365 user.
Hi, but anything more than 5,000 is for premium users, right? I have the basic plan, which has a 5,000 pagination limit.
Can you provide the expression used in the Initialize variable value field?
Nice information
Thank you!
How do I import data from SharePoint or OneDrive into Dataverse? Or can we directly upload an Excel file from a canvas app, read the data, and insert it into Dataverse as a table? Please make a video on this.
Dataflows are how you can do what you want to do here.
docs.microsoft.com/en-us/power-query/dataflows/overview-dataflows-across-power-platform-dynamics-365
You should elaborate on the skip token more.
I'm outputting 25 fields from an entity, using FetchXML, parsing the paging cookie, and doing it very similarly... however, I'm trying to write these 25 fields over 10k rows (limit of 5k per run)... it's been running for 24 minutes now already (ugh, so long). It seems like there should be a much, much faster way to get 10k rows into Excel. I.e., one should not even have to apply to each; instead just get the rows and add them all to a table in one shot. If you have any ideas on how to do that. I guess I could maybe append to a CSV table over and over, and then write all the contents in one push???
Another idea is to use Azure Synapse and set up a data pipeline to convert Dataverse records to a CSV file.
@@SteveWinward Actually, just using an array variable to loop over and append to, then putting that into a Create CSV table action, then a Create file for SharePoint with its output as input took about 9 minutes. Much more acceptable for now.
@@klemetsrudj excellent. Thanks for sharing your solution.
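For anyone wanting to copy that approach, a rough outline with placeholder names (not the commenter's exact flow): initialize an array variable, say RowsArray; inside the loop, use Append to array variable to add each mapped record; after the loop, add a Create CSV table action with its From field set to:

variables('RowsArray')

and then a Create file action whose File content is the Create CSV table output, e.g. body('Create_CSV_table'). Writing the file once at the end is what avoids thousands of per-row file calls.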
Hi, thanks for this content, however, I experienced some issues.
1. Getting all data from contact entity gave this error: "HTTP request failed as there is an error: 'Cannot write more bytes to the buffer than the configured maximum buffer size: 104857600. "
2. I retrieved some fields in the select columns and error in No.1 above resolved but "NextLink" was empty.
3. I also used fetchXML (which has a limit of 5k rows) and NextLink was still empty.
Any help is appreciated. I have a 12k contact DB to work on.
If you are getting the maximum buffer size error, you probably need to create a flow like I do here (ua-cam.com/video/5NtzcfmSGes/v-deo.html) which is the custom pagination example. If you go to the settings of the action and enable pagination, that step still has a max limit that you are running into. If you loop over the records 5,000 records at a time, I don't think you will hit the max buffer size error. I did not test my example with a fetchXML query specified.
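For reference, here is a hedged sketch of how the next link can be read out of the Dataverse List rows output inside that loop; the action name 'List_rows' is just a placeholder for whatever your action is called. In a Set variable step you could use:

coalesce(outputs('List_rows')?['body']?['@odata.nextLink'], '')

The coalesce() turns a missing next link into an empty string, so the Do until condition (next link is empty) ends the loop once the API stops returning more pages.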
What about a SharePoint list?
No matter what I try.
Import keeps failing with a message saying "something went wrong try again later". I also tried to remove the folder "Workflows" and created a new ZIP with the 3 JSONs and without that subfolder.
But nothing works. Import of the zip is impossible. 🤷♂️ Any idea?
Are you following these steps to import a solution file?
docs.microsoft.com/en-us/powerapps/maker/data-platform/import-update-export-solutions
I added some more detailed steps to get the sample installed
github.com/SteveWinward/PowerApps/blob/master/WriteUps/PowerAutomatePagination.md
@@SteveWinward Well, many thanks for your quick response and additional info. I did not follow the correct procedure. However, even if I do, the import is not possible.
Anyway, don't bother. Your video was helpful enough to get me on the right track. I was struggling (for almost a week now) to get pagination solved for MS Planner (through MS Graph). There were several obstacles I had to overcome, and partly because of the limited information in the event of an error, it was very difficult to discover what and where exactly went wrong.
After getting the permissions right (Admin Consent on Microsoft Graph ApiConnectors), pagination (handling NextLink) remained. I now have a working model. Many thanks for your video and support. You made one of the most complete and intelligible explanations.
@@waltertiggeloven I'm glad you found this video helpful!
What is the length expression?
Will this work for over 100,000 records?
Hey Travis. I haven't tested this with 100,000 records. My intent was to show how to use paging if you have datasets right over the 5,000 or 10,000 mark that will get truncated by default. If you have records on the order of 100,000 items that you need to regularly work with, you may want to find another approach outside of Power Automate. Dataflows might be a better option, or even looking at something in Azure to process the records.
@@SteveWinward Thanks! I will look into Power BI Dataflows.
How to deal with 15000 records? thanks
Did you find a solution to this?
I have to do this for 2.5 million records
Hi Steve, thanks for sharing the useful information. I would like to ask: under my Apply to each action I have an Export to File for Paginated Reports action, and it is taking quite some time. I'm passing a few parameter values from Dataverse and then calling a Send email action. Kindly suggest how to optimize the flow's performance.