Excellent demonstration. This solution works perfectly for me. I made a small change to eliminate the repeated headers: I initialized another ARRAY variable before the loop and used a Compose action to convert the query output into a JSON array. Then, inside the loop, I joined in the array from each loop run with a Set variable action.
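In case it helps anyone reproduce that, here is a rough sketch of what the Set variable step might look like. The action and variable names (Run_a_query_against_a_dataset, AllRows) are placeholders for your own, and the output property should match the "First table rows" dynamic content of the query action. Also keep in mind that union() drops duplicate rows while merging:

Set variable
  Name: AllRows
  Value: union(variables('AllRows'), body('Run_a_query_against_a_dataset')?['firstTableRows'])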
What a coincidence! I watched your video on this only one or two days ago and commented about this. I spent an entire day trying different solutions, but I couldn't seem to figure it out. The timing of this video was absolute perfection! Thank you so much!
For nulls and keeping column order: use a SELECT action to map the column names to the data values - this takes care of both the nulls and the column order. I know, you have to map all the columns to the incoming Power BI values, but it worked for me with over 100 columns. @Curbal Thank you for this!!!!
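For anyone wondering what that mapping looks like, here is a minimal sketch. The action and column names are placeholders for your own, and coalesce() substitutes an empty string for nulls so every row keeps the same columns in the same order:

Select
  From: body('Run_a_query_against_a_dataset')?['firstTableRows']
  Map:
    Column1: coalesce(item()?['MyTable[Column1]'], '')
    Column2: coalesce(item()?['MyTable[Column2]'], '')
    (one entry per column, in the order you want them)

The output of the Select then goes into Create CSV table, or into whatever array you are accumulating.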
Thanks! 😃
Where should I use the SELECT function?
I'm a bit late to the party, but you can take care of the repeating headers and the headers in the wrong order, and make this process even faster, by keeping the data in JSON array format until the end. The flow in the video converts the JSON array into a CSV string in each loop, removes the headers, and appends the string, but Power Automate is built to work with JSON rather than with strings. All you have to do is use "Append to array variable" instead of "Append to string variable" and skip the CSV conversion inside the loop; let the Create CSV table action take care of the headers once, at the end. If you work in JSON array format until the end, it takes care of all these problems and saves the workarounds everyone here is looking for.
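One way to read that, sketched with placeholder names (AllRows as the array variable, Apply_to_each as a loop over the query's rows); adapt it to your own flow, the point being that the CSV is built only once, after the loop:

Inside the loop:
  Append to array variable
    Name: AllRows
    Value: items('Apply_to_each')
After the loop:
  Create CSV table
    From: variables('AllRows')
    Columns: Automatic

Because the CSV is generated once, the header row appears only once, and as long as every row carries the same columns the order stays consistent.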
Nice! Thanks :)
Worked very well. Thank you.
I tried this solution. It works, but the column headers and their values stop lining up after a loop.
Hi, where can I find the previous video in which you created the Power Automate flow? This video only shows part of it. Also, if you could share the formula you used, that would help us. Thanks
Thanks for the demo.
How can this be done for multiple tables in one file simultaneously?
I shared an embed link with others, but when I make changes to the report and publish it, do I need to share a different link, or will the existing link show the updated report?
Thank you Ruth...
You are a star...
❤️
I am not combining files, but I need to remove the header of the CSV file that I am creating so that I only have the data, because I need to upload the file into a system that does not accept headers. Is there a formula that can be shared to accomplish this? Below is a list of the headers in my file.
ConcurProjectsTable[*ListName],ConcurProjectsTable[*ListCategoryName],ConcurProjectsTable[ProjectID],ConcurProjectsTable[*Level02Code],ConcurProjectsTable[*Level03Code],ConcurProjectsTable[*Level04Code],ConcurProjectsTable[*Level05Code],ConcurProjectsTable[*Level06Code],ConcurProjectsTable[*Level07Code],ConcurProjectsTable[*Level08Code],ConcurProjectsTable[*Level09Code],ConcurProjectsTable[*Level10Code],ConcurProjectsTable[ConcurValue],ConcurProjectsTable[*StartDate],ConcurProjectsTable[*EndDate],ConcurProjectsTable[*DeleteListItem]
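Not from the video, but one common way to drop the first (header) line from a CSV string in Power Automate, assuming the CSV comes from an action named Create_CSV_table and the lines are separated by line feeds (use '%0D%0A' instead if your file uses carriage return + line feed):

join(
  skip(split(body('Create_CSV_table'), decodeUriComponent('%0A')), 1),
  decodeUriComponent('%0A')
)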
Do you need Power BI capacity to run queries against a dataset? Is PPU enough?
It is :) and so is Pro
@CurbalEN Thanks!
Hello! After watching your video I am able to loop through datasets that have a proper index. However, how do we handle datasets with no index? E.g. I have a case number (e.g. 202301771011), and when I modified your DAX it just gave up and threw errors. How would we proceed with non-indexed data?
I need to do this video!
@CurbalEN I'd request you to do that video as well; it would help.
This does not work 😭 It assumes the columns are in the same order in every CSV file but *this assumption is false*.
In fact, in my case where my data is sparse (lots of null values), some columns are omitted entirely from some of the CSV files.
We have switched to using a Power BI Paginated Report to export the full dataset in one go.
For the solution in this video to work, the CSV creation would need to be deferred until the end, once all the result sets have been obtained. I can't figure out how to do this. I tried to append the result sets to an array variable, but each result set is already an array, and an array of arrays unfortunately doesn't work.
I tried iterating through each result set and appending each object to an array variable, but it was too slow to be a workable solution.
I have the same issue. The data is all misaligned due to blanks in my dataset and I cannot figure out how to fix it. Would love some advice from someone smarter than me.
@stevengernon1153 Same issue here... I am trying to find a way too.
You likely have null values in 1 or more of your fields.
Use a SELECT action to map the column names to the data values - this takes care of both the nulls and the column order. I know, you have to map all the columns to the incoming Power BI values, but it worked for me with over 100 columns.
@anistplenitudmampuru8596 Could you explain, please?