Brilliant! I wish there was something better than like and save in YouTube - I need flashing lights and pointing fingers - I know I need to come back to this video often! ➡️ ➡️ 📌📌📌⬅️ ⬅️❗
😁
There is a save button hidden in the 3 dots. You can create a playlist and save the video in it. This video is added to my Excel playlist :)
That's my whole weekend's troubleshooting of why my queries were not pulling correctly, explained in 9 minutes... Thank you, Sir
You’re welcome. Thanks for taking the time to leave a kind comment
Amazing! I spent three hours researching until past midnight, and here you are with the simplest straight solution to the scenario :)
Glad I could help 😀
Thank you! Until today I hadn't realised that one could bunny-hop references to earlier points in the applied steps list and effectively get multiple bites at the data. So useful.
I remember being happily surprised on learning that technique too 😀
Hey Wyn,
I used 1 line of code which emancipated me from recursive day/nightmares. Thank you again for sharing your knowledge and giving everyone a very good foundation.
Kind Regards,
Bhavik
Awesome. You’re welcome Bhavik
Grateful does not even begin to express how I feel! I've been seeking this solution for longer than I care to admit. THANK YOU! I receive multiple reports that have the same columns but in a different order and this was a perfect append solution!
Hi, glad I could help. If the columns have the same names then a normal consolidation-from-folder process should work fine. The order shouldn’t matter.
I found your YouTube channel after listening to your podcasts, so excellent that you’re sharing all this info in a great tutorial. I’m new to Power products, only started learning a month ago, but using everything in anger at work, replacing all my VBA macros!!
Great to hear. Thanks for letting me know you’re finding things useful here
I can't believe I didn't find your channel before. What a pity that I needed the Excel Global Summit to learn about the great Wyn Hopkins!
Hah, thanks Anthony, glad you found me!
I've been stuck with my assignment for weeks. This video is a life saver. Thank you so much.
No worries. Thanks for taking the time to leave a kind comment
@@AccessAnalytic my only problem was the pertinent values of most columns that's been combined were replaced with null. Do you have a video that can restore the missing values? Thanks again.
Hi, it won’t replace values with nulls. If the columns have slightly different names they will show up side by side, so you would need to scroll down to see the values (i.e. the new columns are offset to the right AND the data appears on NEW rows).
@@AccessAnalytic thank you very much!
Brilliant. Just what I need - not every day in the week, but pretty often. Mega thanks.
Glad to help, thanks for taking the time to leave a kind comment
Thank you. This was very helpful. Just to add, I found that I had to wrap this in a List.Buffer function as my query was taking too long to run based on the number of different columns I had. This solved the issue and it ran much faster.
Great, thanks for the "heads up"
Can you post the updated line of code here for me to refer others to, cheers!
For anyone who's interested, it's something like HEADINGS = List.Buffer(... existing code). It's much faster, but I don't know if there's any drawback.
Nope, all good. Thanks for posting.
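For anyone wanting to see that in context, here is a minimal sketch of the buffered heading list. The step and column names are assumed from the standard folder-combine query the UI generates, not copied from the video:

    // "Removed Other Columns1" and "Transform File" are the names the
    // folder-combine UI typically generates; adjust to match your own query.
    HEADINGS = List.Buffer(
        List.Union(
            List.Transform(#"Removed Other Columns1"[Transform File], each Table.ColumnNames(_))
        )
    ),
    Expanded = Table.ExpandTableColumn(#"Removed Other Columns1", "Transform File", HEADINGS)

List.Buffer stops the heading list being recalculated for every row of the expand, which is where the speed-up comes from.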
Gil Raviv is the mutt's nuts!!! His book and blog are a must for anyone using Power Query. Thank you for making the video!!!
No worries, check out my interview with Gil here ua-cam.com/video/07zOX5IYImI/v-deo.html
Pure gold dust. Love it! Please don't stop sharing your content. I appreciate your succinct and clean approach. Diolch from Newcastle 🤓
P.s. this method can also be used to dynamically rename column headers AND dynamically format datatypes.
Power Query is the gift that keeps on giving.
Diolch yn fawr Imran. Agreed, Power Query is a well of goodness
Could you expand on how you dynamically format the datatypes?
I’ve not seen a simple method to do that. You could try this datachant.com/2018/05/14/automatic-detection-of-column-types-in-powerquery/
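One blunt alternative (a sketch only, not real type detection) is to apply a single type to every current column dynamically, so newly arriving columns are typed on refresh:

    // Hypothetical step: set every current column to text in one go.
    Typed = Table.TransformColumnTypes(
        Source,
        List.Transform(Table.ColumnNames(Source), each {_, type text})
    )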
subscribed. Finally got me to stop adding a custom column to find tables within workbooks.
Great!
You know I’m making a list of all your YouTube videos rather than writing that down!
Wyn, I cannot thank you enough for sharing this fantastic video and your incredible knowledge of Microsoft Excel, Power Query, Power BI, and DAX! Your expertise and passion for these tools truly shine through in your content. I've had the pleasure of attending a couple of your workshops, and I must say, your insights have been game-changing for me. Your dedication to empowering others with these skills is genuinely inspiring, and I am incredibly grateful for the opportunity to learn from you. Keep up the amazing work, and I eagerly await your next masterpiece!
Thank you for the support Shagun. Greatly appreciated 🙏🏼
You save my day. You're a brilliant man who can share the simplest ways. I love you man!
Awesome 😊
Precious gem in my mini PQ formulas library! :)
Many thanks for this and for clear and simply explanations - great job!
Glad to help
Such a complicated issue solved with so much ease. Thanks a million !!
You’re welcome
Hi Wyn. Great new trick for the Power Query tool bag! Thanks for showing the steps and sharing the sample files to follow along. Much appreciated! I'll definitely bookmark this for future reference. Thanks for sharing and thumbs up!!
Thanks as always Wayne ⭐
After DAYS of looking for help, this is the first video that's gotten me some. Unfortunately, when I get to the end, everything's good, except for one last column that shows null. Then, when I load, it shows what the null columns SHOULD'VE been, but they're repeats. Still the closest I've gotten to what I need though, so I'm super grateful.
Null columns may mean slightly different spellings of the column names (spaces, uppercase etc.)
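One way to guard against that is to normalise the header spelling inside the Transform Sample File query before the files are combined. A sketch, assuming the headers only differ by case and stray spaces (the step name "Promoted Headers" is a typical generated one):

    // Trim spaces and standardise capitalisation so "measurement 1 "
    // and "Measurement 1" end up as the same column name.
    CleanNames = Table.TransformColumnNames(#"Promoted Headers", each Text.Proper(Text.Trim(_)))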
This just saved me many hours of work. Thank you!
You’re welcome. Thanks for taking the time to leave your comment
This seems like a good alternative to appending tables and deleting nulls.
Thanks for sharing. You got a subscriber.
Great
Thank you Wyn. This is another great technique to add to my ever-growing library of solutions!
No worries Shirley
Thanks Wyn, I have been trying to find this process for a long time. Legend mate
You're welcome Tony 😁
Amazing video as usual!
Today I was struggling with X folders and the files in them, getting a proper structure and so on... after seeing this, I will be smarter tomorrow with all the columns I need =)
Thank you. Glad to help, and thanks for letting me know.
Awesome....I have been looking for this solution for years. Thanks so very much
You’re welcome
Useful explanation, breakdown into steps is helpful and thanks for crediting Gil Raviv.
Thanks
Thank you soooo much, you saved my life with this technique! 😊
Glad to help.
Dear Wyn! Thank you for the video! Great stuff! Helped me a lot.
Thanks, great to know
Wow... this is so brilliant. I'd been searching for ages and only just came across this. This is so helpful.
Although the code isn't going to be easy to remember, I can always refer to this video.
Thank you 👍
You’re welcome. I hardly remember any code these days!😄
Beautiful. I've a query with this problem. Now I think I can make it work and check some 700,000 records with some formulas.
Thanks!
Thanks for taking the time to leave a kind comment
Yes! Amazing... Thank you so much for this. I was struggling doing some pricing comparison with source data in different formats and with different column names. This is awesome... Thank you! :)
You're welcome
Wyn thanks for this golden video, I think this is the best technique I have seen on this issue.
Very kind of you to take the time to say so. Cheers!
Sparkly. One line of code, many problems solved.
The easiest way I’ve seen. Thank you!
You're welcome Mariusz
Learning from Brazil...
Thank you!
You're welcome
This video deserves a Nobel Prize, Wyn. 😊 Thank you so much! Would you recommend the same method when a new column is added but not appearing in the query after refresh? I have a query connected to a SharePoint folder where there are 100 files with the same columns. Recently I had to add one new column starting with file 101, and I had a hard time finding that newly added column in the query; the refresh did not pick up the new column. This trick saved me.
Great to hear 😊. If it’s Excel files then yes; if CSVs then maybe (there can be an additional step to remove the hard-coded column count in the Source step)
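For CSVs, the generated Source step in the Transform Sample File helper usually fixes the number of columns at whatever the first file had; removing that setting lets new columns come through. A sketch (the delimiter, encoding and count of 12 are placeholder values):

    // The generated step often looks like:
    //   Csv.Document(Parameter1, [Delimiter=",", Columns=12, Encoding=1252, QuoteStyle=QuoteStyle.None])
    // Dropping the Columns field stops the width being locked at 12:
    Source = Csv.Document(Parameter1, [Delimiter = ",", Encoding = 1252, QuoteStyle = QuoteStyle.None])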
that was brilliant! thanks so much, you saved me tons of hours with my payroll worksheet!
Glad to help. Thanks for taking the time to let me know
You are awesome Sir
Thank you and lots of love from India 🥳🥳🎁
😀 Thanks for taking the time to leave a kind comment
You’re great! It’s a bit confusing at first but I'm learning. Thanks.
Cheers ☺️
Thank you so much! All my columns are now visible
That’s great, thanks for letting me know it helped
This worked out great. Thanks so much for the walkthrough.
I appreciate you taking the time to let me know you found it useful
Excellent. A great use of M functions 💯👍
Thank you
Really nice trick Wyn. Thanks for sharing!
When I first saw the name I immediately thought of an Unpivot Other Columns based on the first file … but this is really cool too 😜
Yep, unpivoting the transform file would be my first choice approach. A recent scenario needed the data loaded as columns as part of another process, so unpivoting wasn’t an option
This is just what I needed. You are a life saver. Just subscribed to your channel! Appreciate it!
Glad to help Gabby.
SUPER, thanks Wyn. I didn't know before.
Glad to pass on the tips Norbert
That's a life saver. Amazing content as usual.👍
Happy to help!
This is freaking brilliant! Thanks! Life's hard enough, this HELPS me a LOT! LOL! Thanks again!
Glad to help
Very useful! I will keep the formula for future reference
Honestly, for me the step before the trick (the column driven drill down) was the missing link.
The trick line itself wasn’t that bad.
Recently I had a similar problem that I solved, where I didn’t need List.Union but rather List.Combine.
Getting the column names, filtering them, replacing the headers with the actual columns, combining them into a single column.
Repeat for different filter values. Combining the results into columns of a table and move on from there…
This here is a great lesson, though, because it teaches us to leave the autopilot and start thinking for ourselves.
PS: I never use the Files From Folder input technique: it creates too many queries IMO. I prefer putting the path of the folder in a table, load that to PQ and work from there, doing every step myself. Any thoughts on that? Do you see any disadvantages for that compared to the std. interface approach? Thanks.
When consolidating from SharePoint I go via the transform option, make a master Folder query, reference that, then do the combine. This creates a sample file and consolidation both linked to the master folder query, which is simpler when changing folders in future. I’m a fan of the transform sample file element generated by the UI
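A sketch of that master folder pattern; the site URL and folder path here are invented placeholders:

    let
        // Hypothetical site; the sample-file and combine queries then reference this one query
        Source   = SharePoint.Files("https://contoso.sharepoint.com/sites/Finance", [ApiVersion = 15]),
        Filtered = Table.SelectRows(Source, each Text.Contains([Folder Path], "/Reports/"))
    in
        Filtered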
Instead of "combine and transform", I go to "transform data" => get table from binary => expand table. This gets all the unique columns from all the files.
True ! ❤️
That will not work here. Try it. Once you get table from binary, you will see that the Tables only have Sheet. No Table inside. EDIT: Once I added the Custom column TABLES beside BINARY, I expanded it. There I got another DATA column (with TABLES under it) that could be expanded again. I removed all other columns except this DATA column. Then CLOSE&APPLY. THAT WORKED BEAUTIFULLY! Are my steps correct? Is this exactly what you do?
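Spelled out as code, those steps look roughly like this (the folder path is an assumption):

    let
        Source = Folder.Files("C:\Reports"),   // hypothetical folder
        // Add a TABLES column of workbook contents beside the BINARY column
        Books  = Table.AddColumn(Source, "TABLES", each Excel.Workbook([Content], true)),
        Kept   = Table.SelectColumns(Books, {"TABLES"}),
        // Expand to reach each sheet's Data table, then stack them;
        // Table.Combine unions all the column names across files
        Sheets = Table.ExpandTableColumn(Kept, "TABLES", {"Data"}),
        Data   = Table.Combine(Sheets[Data])
    in
        Data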
Very Good. Cheers from Brazil
Thanks Felipe
Are you a wizard? Because this is MAGIC!! Thank YOU!
Cheers Darren, check out my podcast / YouTube series Power Query Magic 😄 ua-cam.com/users/PowerQueryMagic
Brilliant. Just in time. Thanks for sharing.
Thanks for commenting 😁
This is what I'm looking for... thanks for sharing.
You're welcome
Thank you for sharing this great method... that is so helpful
Glad to help Zuhair
You just saved me!!!! Thank you so much for this video ahhhh God bless you!
You're very welcome, thanks for letting me know it was useful
Thank you Wyn!!!! Just what I needed!
😄 Bill ? 😊
@@AccessAnalytic Sorry!!! 🤣🤣🤣
😆
Great that's what I am looking for. Thumbs up
Excellent
Thank you for the video. This solved most of my issues, but I had an issue with loading data because the Sheet Names were different. Found a workaround elsewhere by changing the formula of Transform Sample File>Navigation to =Source{0}[Data]. I get what it's doing, but now I think I'd like to know more about what the 'Helper Queries' are doing.
Hi, you might find this video useful : Combining Multiple Files from a folder using Power Query in Excel or Power BI
ua-cam.com/video/nPlrQUbEn4o/v-deo.html
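For reference, that workaround swaps the name-based navigation the UI generates for a position-based one. A sketch:

    // Generated (breaks when sheet names differ between files):
    //   Sheet1 = Source{[Item="Sheet1",Kind="Sheet"]}[Data]
    // Position-based (always takes the first object in each workbook):
    Navigation = Source{0}[Data]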
Great Video, really very helpful. Best Wishes
Thanks Vishal, very kind
This is a great video. Suppose next month some more headers are added or some are renamed; can this formula still handle the situation?
Absolutely
That's the man I was looking for.
Glad to help
Amazing. I didn't know that was possible. TVM.
You’re welcome
Thank you!! I learnt something new...
Great, glad to help
Super. That's what I needed.
Thank you
You’re welcome
Table.Combine does the same thing without writing this code, but this is good learning knowledge
Yep you do then lose the file names, but if you don't care about those then that's a good option.
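For comparison, a Table.Combine sketch over the helper column (the step name is assumed from the standard generated query):

    // Unions mismatched columns, filling the gaps with null, but the
    // Source.Name file column added by the folder UI is lost.
    Combined = Table.Combine(#"Removed Other Columns1"[Transform File])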
Beautiful indeed, thanks Wyn!
Cheers Fernando
I'll probably never do it the old way again. What's neat and simultaneously annoying is that this M model can be pasted directly into PowerBI & likely set to automatically refresh if it's a SharePoint folder. You bypass SSIS/ADF altogether and have PBI Services do the work for just the model you choose to refresh. Two decades of work say that I *have* to bring the data into SQL and do magic things, but you really don't at this point. That era of needing a database to do smart models for business users is really & truly over, but the nostalgia is eternal.
Yes the game has changed rapidly in the last 6 or 7 years. Still lots of room for SQL databases in this world though 😊
Thanks a lot, that's really brilliant!
I appreciate you taking the time to let me know you found it useful
Thank you! Great content! I was wondering, what are the drawbacks of using the "Append" function in the Combine section on the Home tab?
No downsides as such. Appended table column names need to match exactly (including upper/lower case) if you want the columns to stack on top of each other; otherwise new columns are created
@@AccessAnalytic Great! Thank you! So the data from same-name columns will be appended on top of each other. Any additional or differently named column will be added as a new column. These new columns will have null values corresponding to the data sets where they do not exist.
@@sulemanharoon - correct 😀
@@AccessAnalytic Thank you !
You and your videos are always a great help!
So does that mean with 3 Excel files, no matter what and how many columns are in them, they will all consolidate?
Sure does Mo Lo
This is great.
I'm trying to figure out how to add a step that skips the first few rows on each Excel sheet.
Any ideas?
Use the Transform Sample File query to remove the first rows, then that automatically applies to each file
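A minimal sketch of that, inside the Transform Sample File query (the row count of 3 is just an assumption):

    Skipped  = Table.Skip(Source, 3),   // drop the first 3 rows of each file
    Promoted = Table.PromoteHeaders(Skipped, [PromoteAllScalars = true])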
This worked. Thanks @@AccessAnalytic
Great, thanks for letting me know
I can't see Measurement6 from file C
Are you trying this out on your own computer Lasha? I don't know why Measurement6 wouldn't show
@@AccessAnalytic Ok, thank you anyway, it's a very useful video
Why didn't PQ pull in all the headers in the editor from the folder? Is there a limit on columns in the editor?
No limit, in this scenario (when consolidating files from a folder) it just pulls the headers from the first file when doing the expand.
Brilliant. Thank you!
You’re welcome 😀
Thank you Thank you Thank you Thank you Thank you Thank you Thank you Thank you Thank you Thank you
You’re welcome 😀
Is there a way to adjust this for cases when you need to remove some rows and promote the first row to headers?
You should be able to do that in the Transform Sample File query. ua-cam.com/video/nPlrQUbEn4o/v-deo.html
Great solution if starting a new query but not so great if the query has been built already since removing all subsequent steps and data reverts the report back to almost source-level. Secondarily, though I've found a work-around to ensure the most recent Test CSV is included in the query folder, the 2 new test fields I created within the test CSV still are not populating even after performing [transform data] steps. Ugh, so frustrating.
Yeah retrofitting is always hard, I normally start to create a new query, get it to the same starting point as the old query then copy the code across in the advanced editor
@@AccessAnalytic Not a bad idea and an approach I may ultimately take. Problem is that I have approximately 20 separate monthly data sources, at least 50% of which contain 50K or more rows of data, so updating each of those files with this parameter won't be an extremely fast process.
Since I went through a similar exercise recently, focusing on back-end processes with very little improvement to the visuals during this time, I'll have to be more deliberate since clients typically want to see front-end improvements and sometimes don't understand the time and focus spent on back-end optimization.
Thanks so much for the help here. Love your tutorials!
@@ChrisSmithFW No worries
Great trick, very useful. Thank you very much!!!!
Cheers Arnau
Thank you!!!! Brilliant stuff.
Cheers Chris
Hi Wyn, why are some of my headings missing?
I’ll need more info thanks
This is great! Thank you!
You're welcome
I had a doubt: if we add a new column in any of the tables, will it get added to the consolidated table?
Yes it will
Thanks! Just what I needed. Also got me thinking: if I knew that my files would only ever have different permutations of columns "Measurement1" to "Measurement6", could I create a dummy file in the folder with just these headings and no data then use it as the sample file? It wouldn't dynamically accommodate further columns but, with over 200 Excel files to combine and no new columns for the foreseeable future, it might work in my situation.
Yep that should work
This is an awesome trick . Would this work if the number of columns vary as well ?
Yep, also check out this video which has a simpler technique ua-cam.com/video/v7K4lnLXnhE/v-deo.html
@@AccessAnalytic thanks. I had skipped it because of the size but glad you reminded me.
All good. When you say size, do you mean the length of the video?
Does this work with CSV files? None of mine have tables and column names already.
Hi, yes it loves CSV files and converts them into Tables and columns
OMG, this is amazing, Wyn; thank you so much for this clever post (and of course, a huge thanks to Gil!)
One problem though - when I use the exercise files, everything works perfectly.
However, I have a problem right away when I use my own files where my column headings are dates and my rows contain names. The combined files are loaded into PQ with column headers as follows: Column1, Column2, etc., as if PQ didn't know that my dates were intended as column headers. Am I doing something wrong or is there a (hopefully, easy) solution to this?
Thank you so much.
Thanks. Yeah, if your headings are numerical/dates it won’t auto-promote headers, so you’ll need to click the 'Use First Row as Headers' button early in the Transform Sample File step
@@AccessAnalytic What I did was create a dummy file with 1 dummy row and 1 dummy col where the header is simply 'dummy', i.e. non-numeric. When I include that dummy file into the folder, then make that the "First file" for the combine, everything works perfectly! Somehow, the dummy file was ignored altogether saving me the effort to remove it myself.
What a wonderful service you've provided, Wyn. You are the real deal - THANK YOU SO MUCH.
Glad you got it working!
Brilliant, thanks a lot for the video. Just one simple question: what if I need the data of a column to be considered just one time (all columns considered, but each one only once)?
Hope to hear from you.
Thanks again
I don’t understand, sorry
@AccessAnalytic no problem
Here is an example:
In my work I receive an Excel file each week that contains some data for the current week and the four previous weeks (for example, this week I will receive a column named wk-32 and other columns for the four previous weeks: wk-31 wk-30 wk-29 wk-28).
Next week I will receive (wk-33 wk-32 wk-31 wk-30 wk-29).
I use the method you shared and it was very helpful, but I need the data of each column to be considered just one time. For example, the sum of column wk-29 is 5000 dollars; if the other Excel file also contains wk-29 I need to have only 5000 dollars, not 10000.
I hope I could explain it clearly.
Thanks a lot,
@@mehdiabid8324 - I would use the Transform Sample file step to Unpivot Other Columns so that the individual week columns are a single column rather than separate ones. You can then apply a remove duplicates.
ua-cam.com/video/ESap6ptV8fI/v-deo.htmlsi=QtL9nMz4FoGXfhHs
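Sketched out, with "Item" assumed as the key column and "Combined" as a hypothetical name for the combined query:

    // Inside Transform Sample File: turn the wk-28..wk-33 columns into rows
    Unpivoted = Table.UnpivotOtherColumns(Source, {"Item"}, "Week", "Amount"),
    // Then, on the combined query, keep each Item/Week pair only once
    Deduped   = Table.Distinct(Combined, {"Item", "Week"})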
Thanks a lot for your answer, that would really help.
Thanks a million! Amazing as usual
Glad to help. Thanks for taking the time to leave a kind comment
Hi Wyn, thanks for sharing your knowledge. In cases where the headers start in different positions (row 3, 4, 7) in each sheet and rows have to be removed before promoting headers, what would the process look like? Could you help me?
You'll need to apply some logic in the Transform Sample File step (that logic will be applied to each file). Maybe you can apply a filter to remove the unwanted rows, rather than specifying a specific count
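One hedged way to do that, assuming the header row can be recognised by a known first-column value such as "Product":

    // Find where the headers start in each file instead of skipping a fixed count
    HeaderPos = List.PositionOf(Table.Column(Source, "Column1"), "Product"),
    Trimmed   = Table.Skip(Source, HeaderPos),
    Promoted  = Table.PromoteHeaders(Trimmed, [PromoteAllScalars = true])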
Awesome video. 👏 Do we have a function to combine tables? Please assist 🙏
Table.Combine ?
That's fantastic.
Thank you sooo much.
How can I do the same thing if there are multiple tables in multiple tabs in the same workbook file?
So do you want to consolidate multiple tables from multiple sheets from multiple workbooks, or just multiple sheets from 1 workbook? If it's just one workbook then check this out
ua-cam.com/video/n8_sA6NMlkA/v-deo.htmlsi=H83zOfBiRPGWuBzU
Great video. Could you tell me how to merge 2 tables that have different column values but the same ID? (I want to keep all the rows in the first table even though the second table didn't have any value for them.) I tried using the Power Query merge feature but the output always excluded the null-value rows of the first table in the merged table. Thanks in advance.
When you do the merge there is a drop down box set to Left Outer. If you want all rows from both tables then change it to Full Outer
@@AccessAnalytic I have tried several times and the merged file is still the same (it excludes all rows with null values from the first table). I still don't understand it.
@@4141-i7o so here's an example: table A has 1, 5, 6, 9; table B has 5 and 9. With table A selected, go to Merge and merge in table B. Expand the columns. Table A now still has all its original records, plus the columns from table B for records 5 and 9.
What behaviour are you wanting, using this example?
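That example as a self-contained sketch (the Value column is invented for illustration):

    let
        TableA = Table.FromRecords({[ID = 1], [ID = 5], [ID = 6], [ID = 9]}),
        TableB = Table.FromRecords({[ID = 5, Value = "x"], [ID = 9, Value = "y"]}),
        // Left Outer keeps every TableA row; IDs 1 and 6 get null in Value
        Merged   = Table.NestedJoin(TableA, {"ID"}, TableB, {"ID"}, "TableB", JoinKind.LeftOuter),
        Expanded = Table.ExpandTableColumn(Merged, "TableB", {"Value"})
    in
        Expanded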
@@AccessAnalytic It worked now. Thank you so much.
You’re welcome
Could you tell us the steps, using Power Query, to remove the first 2 header rows, split into columns, and then merge multiple files into one single file
You do those transformations in the query called “Transform sample file”
I go into details of the helper queries in this video:
Combining Multiple Files from a folder using Power Query in Excel or Power BI ( ⚠️see description )
ua-cam.com/video/nPlrQUbEn4o/v-deo.html
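Roughly, those per-file transformations would go in the Transform Sample File query; a sketch (the column name "Data" and the comma delimiter are assumptions):

    Skipped  = Table.Skip(Source, 2),   // remove the first 2 header rows
    Promoted = Table.PromoteHeaders(Skipped, [PromoteAllScalars = true]),
    Split    = Table.SplitColumn(Promoted, "Data",   // hypothetical combined column
                   Splitter.SplitTextByDelimiter(","), {"Col1", "Col2"})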
How can I combine and reshuffle, though?
Not sure what you mean sorry
Hello, I am having a problem when I combine this method and the one that you have for speeding up your Excel connection to a SharePoint folder 😢
What’s the issue? And which video are you referring to? (I’ve done a couple of similar ones)
Thanks. The files I have contain some cells filled with a colour that I would like to retain in the combined file (not via conditional formatting). Is there a way?
Not using Power Query
Wow!
In my case, I would like to use this trick but my data has headers that need to be merged from two rows.
For example, in the case of Measurement 1, Measurement 2, etc., suppose the data has another label like "Total weight" in a row just above "Measurement..." for all files.
Any ideas on how this would work out?
Thank you very much
It’s difficult to answer here, but it sounds like a situation for demoting the headers, transposing, filling down, and then merging the 2 heading rows. Then transpose again
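As a rough sketch of that sequence (column names assumed; after the transpose the two heading rows become Column1 and Column2):

    Demoted    = Table.DemoteHeaders(Source),
    Transposed = Table.Transpose(Demoted),
    // Fill the merged-cell gaps in the top heading row (now Column1)
    Filled     = Table.FillDown(Transposed, {"Column1"}),
    MergedHdr  = Table.CombineColumns(Filled, {"Column1", "Column2"},
                     Combiner.CombineTextByDelimiter(" "), "Header"),
    Back       = Table.Transpose(MergedHdr),
    Promoted   = Table.PromoteHeaders(Back, [PromoteAllScalars = true])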