Wow, this came up in my feed and it's exactly what I've been struggling with all week!!! You're an absolute star, thank you! 🙌🏻☺️👌🏻
I am 70 and have no practical use for learning Excel/Power Query, yet I keep watching your videos because I like the way you solve problems. I have learnt so much from your teaching. Can I pay some token for whatever I have learnt?
I saved the entire syntax as follows:
Add Custom Column = Table.PromoteHeaders(Table.Skip([Data], each not List.ContainsAny(Record.ToList(_), {"Header1", "Header2"})))
Works wonders!!
Thanks a ton bro!!
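For anyone adapting that line, here's a minimal self-contained sketch of where the step sits, assuming a query whose [Data] column holds one table per sheet (FilesWithData, Header1 and Header2 are placeholder names):

let
    // Assumption: FilesWithData is an existing query with a [Data] column of tables
    Source = FilesWithData,
    // For each table, skip rows until one contains a known header, then promote it
    Cleaned = Table.AddColumn(
        Source,
        "Clean Data",
        each Table.PromoteHeaders(
            Table.Skip(
                [Data],
                each not List.ContainsAny(Record.ToList(_), {"Header1", "Header2"})
            )
        )
    )
in
    Cleaned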
Pretty awesome! Thanks a lot for this. Record, Table and List object manipulation in one video for one task without using "Remove Other Columns"
M-Masterpiece!
Packing a lot of slick tricks in one video.
Thank you Chandeep!
Your methods become so refined over time . Awesome job
Thank you, SO MUCH! Had about 200 files to combine with various junk rows up top and now I can do it :D
Woah... that's a lot of files.
I am glad I could help
This is wonderful @Goodly. I watch all your videos - from the logic behind the problem through to the actual solution. God continually bless you; you are a messiah!
Thanks, Chandeep! I knew you had posted this, and I had this problem today. I was fooling around with different other approaches that were a mess. This was perfect!
Wow... amazing. I've been struggling with dynamically removing the junk and with custom headers for a while now. This works like a charm. Thanks a mill.
It is a huge pleasure to watch your videos. Moving from advanced Excel user to Power Query person. Thanks a lot.
Thanks Chandeep! I was using Index for this one, but you make it so easy. Learning from your videos is amazing! Keep up the good work!
Dear, you are a genius. You make M language look so easy. I appreciate your videos, my respects to you.
You have no idea how much Power Query has helped me to automate my tasks. Also I have been struggling with this problem. A big THANK YOU ❤!
Happy to help! 😉
This is a lifesaving technique. Thank you for sharing with us.
Beautiful Power Query techniques!!
Super Awesome, Chandeep.
Very powerful formulas that you are teaching in a simple and easily understandable way!
Power Query and DAX have a lot of hidden treasures.
I am actually learning Power Query - it's excellent. I like the way you teach. Thank you so much for this video.
Amazing, I've done this with a fixed Skip value but this is on another level! Thanks
Yeah, that is pretty damn awesome, Chandeep. This is an everyday challenge.
Amazing. I have struggled through this exact issue, manually removing those junk rows. You're a lifesaver - I will be using this in the future.
Fantastic! This is definitely going into my daily routine.
Enjoy! 😉
Fantastic Chandeep, thank you!
Glad you liked it !
Marvelous work ji
Brilliant! Many thanks, Mr Goodly.
Awesome. Great logic. Thanks for the video.
Thanks again for such a great, clear video about the next step in Power Query. I am new to Power Query, but I am experimenting with DAX and you are giving a great explanation.
Oof!! Bravo! Terrific video Chandeep! Superb!! You hit that one for a six!
Thanks 😉
You hear our problems, it seems 😂
I was using the filter method - removing null values and a lot of other filter steps.
Thanks for making the work easier and cleaner ❤
Fantastic video and amazing explanation!
Many thanks :)
Very well explained. Thank you so much, brother.
Awesome video. I was struggling earlier and had to work around it using a macro. This is very cool. Thanks Goodly
Thanks for the video. Much better than skipping rows down to one hardcoded value; it makes sense to use this if your column order is not the same across data tables.
This was an awesome video, thanks. I liked the trick that you used to remove the blanks dynamically.
Great, excellent. Simple as that. Thanks.
Amazing!!!, Greetings from Mexico.
Awestruck... no words to express how fantabulous it is.
Thanks Rahul :)
This is great! What is the best way to do this when your source files are not formatted as Tables, but are simply Excel Worksheets?
Super Video Chandeep.
This is awesome; thank you! One question - I need to add a column to the combined file that shows the original source filename for each record... where in the flow, and how, is it best to do that, please?
Thanks @Chandeep - based on your trick I'm able to remove the top rows. I also wanted to do it for bottom rows, since there are a lot of junk bottom rows in my sheet. I applied the same trick, but added an index column in descending order, used the trick to remove the junk rows, then sorted the index back to ascending. Anyway, your tricks are fantastic.
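If it helps anyone doing the same, a rough sketch of that bottom-row cleanup using Table.ReverseRows instead of an index column (Table1 and the all-blank junk test are assumptions to adjust):

let
    Source = Excel.CurrentWorkbook(){[Name = "Table1"]}[Content],
    // Flip the table so the junk bottom rows come first
    Reversed = Table.ReverseRows(Source),
    // Skip while a row is entirely blank (placeholder junk test)
    Skipped = Table.Skip(
        Reversed,
        each List.MatchesAll(Record.ToList(_), (v) => v = null or v = "")
    ),
    // Flip back to restore the original order
    Result = Table.ReverseRows(Skipped)
in
    Result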
This is awesome! I have one challenge: one of the first rows contains the report date and I want it within the data columns. How to do that? :D
Magosh... it's just simply brilliant! 💎
Thanks a bunch for your priceless help! 🤗👦
Thanks :)
Genius 🔥 thank you my friend sooo helpful ❤️❤️❤️
Actually, I think you are the only YouTube instructor who is preparing in-depth & creative PQ examples.
Really fantastic 👏 - elegant & simple 😊
Really very helpful, thanks. It is a good idea.
Hi Goodly, wonderful and powerful trick... keep up the good work. What if you are importing from PDF files? Trying to convert the binary gives different results.
Another way we can do it is to add an additional column using List.PositionOf and from that calculate the position of the row that has Date and Profit.
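Roughly, as an untested sketch - assuming the header row contains both "Date" and "Profit", with Table1 as a placeholder source:

let
    Source = Excel.CurrentWorkbook(){[Name = "Table1"]}[Content],
    // For each row, test whether it holds both header values
    RowIsHeader = List.Transform(
        Table.ToRecords(Source),
        each List.ContainsAll(Record.ToList(_), {"Date", "Profit"})
    ),
    // Position of the first row that qualifies as the header
    HeaderPos = List.PositionOf(RowIsHeader, true),
    // Drop everything above it and promote the header row
    Result = Table.PromoteHeaders(Table.Skip(Source, HeaderPos))
in
    Result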
admiring the brilliance
Very sleek. I had this very same issue, but I used List.Generate to loop through each record, which I suppose would take slightly more processing time - though nothing you would ever notice.
Incredible. Thanks :)
Hi Goodly,
That was an amazing video. I have learned a lot from your videos for my daily tasks with Excel. It saves a lot of my time. God bless you.
I have a question, please, if you can answer it: when I convert PDF to Excel, most of the column values are not aligned in one column but land on either side.
E.g. dates that should be in column B will, on a few rows, be in A or C instead.
How can I align them into just column B?
Please advise.
Thank you for all your great videos. 🙏
It's very awesome. I had a similar issue and had to work around it, but this looks pretty good.
Fantastic 🎉.. Thanks 😊
Hello, trying to see if anyone else ran into this issue:
-Followed all the steps, and they worked as they should until ~7:00, where we're supposed to transform the content from Binary to a Table. My files are all CSV, so the formula I ended up using was =Table.TransformColumns(Source,{"Content",Csv.Document}) instead of the =Table.TransformColumns(Source,{"Content",Excel.Workbook}) used in the video. It converted to Tables fine.
-Then on the next step (7:40), where you're supposed to expand the tables and see all of the sheets, when I do this... the header options are just Column 1, Column 2, etc. instead of the actual headers... AND every row of all the files shows up instead of just the sheet name.
-When I follow the steps afterwards, I get the error: "DataFormat.Error: There were more columns in the result than expected"
Any idea what's going on?
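In case it's useful, a hedged sketch of that transform with explicit CSV options - the Delimiter and the Columns count (set it to the widest file) are assumptions to adjust, and fixing Columns is one common way around the "more columns than expected" error:

= Table.TransformColumns(
    Source,
    {"Content", each Csv.Document(
        _,
        // Assumptions: comma-delimited files; 10 = column count of the widest file
        [Delimiter = ",", Columns = 10, QuoteStyle = QuoteStyle.Csv]
    )}
)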
excellent🙂
JUST AMAZING, SUPERB
Really awesome!
Great video, Goodly! What if I have junk rows and also junk columns - is it possible to combine both? Thanks
Thanks Patrick.
Maybe this video will help: ua-cam.com/video/1fn8fXYw6M4/v-deo.html
Fantastic! But can we use "is blank" instead of "not contains any" as the condition?
Or promote headers if the record contains any of "Date", "Amount", etc.?
I would say simply amazing !!!!
Delighted - this is the problem of every hour.
Many times data comes with merged headers, which you have sorted out already.
Could we have similar logic for bottom rows?
Amazing❤! 🎉
Unbelievably crazy, as usual.
As always, very neat & clear stuff. 👍
I was wondering if one can't use Table.FindText?
Like: each Table.Skip(_, Table.PositionOf(_, Table.FindText(_, "Profit"){0}))
But only testing for 1 column header here.
Can this also be solved by using index number and custom function?
Amazing
Thanks a lot
Wow! Thank you
Hi Goodly, thanks for all your great videos. Isn't there a simpler way to do it here? In the example file you create a conditional column (If Column1=Date Then True). Then you fill the conditional column downwards. Now you have a True for all rows you need and a null value above the desired header row. So you can filter for True. Shouldn't that be dynamic as well?
No. That way, if you filter for True you will get the junk rows and all the data except the headers, and if you filter for False you will get only the headers.
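For reference, a sketch of what that conditional-column route might look like, assuming the header row has "Date" in Column1 and Table1 as a placeholder source:

let
    Source = Excel.CurrentWorkbook(){[Name = "Table1"]}[Content],
    // Flag the header row only, leaving null everywhere else
    Flagged = Table.AddColumn(Source, "IsData", each if [Column1] = "Date" then true else null),
    // Carry the flag down to every row below the header
    Filled = Table.FillDown(Flagged, {"IsData"}),
    // Rows above the header keep null and are dropped
    Kept = Table.SelectRows(Filled, each [IsData] = true),
    Result = Table.PromoteHeaders(Table.RemoveColumns(Kept, {"IsData"}))
in
    Result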
Pretty cool. Seems the only thing that would limit what one can do with PQ is one's imagination. Question, why List.ContainsAny instead of List.ContainsAll?
Thanks!
Thanks a lot Abhijit :)
excellent
Hi Goodly.
Thanks for all videos. They are just great.
I need to combine two of your tricks in just one.
I have many sheets with junk lines (same number of junk lines for all sheets) and these same sheets have inconsistent columns.
How do I do that?
Thanks in advance
Your code to remove the junk headers can probably be:
Table.Skip(Table, each List.MatchesAny(Record.ToList(_), each _ = null or _ = ""))
After this you can promote the headers and then follow the inconsistent header video.
Hi, it's a very smart solution. I'm looking for instructions on how to combine tables without losing the columns that exist in the previous steps - in this case, I would like the Name column to remain.
you'll find the answer in this video
m.ua-cam.com/video/oExuBdnHtrk/v-deo.html
@@GoodlyChandeep Thank you very much. This is what I am looking for. Can you say what you think about this solution: ua-cam.com/video/rCYn_onMP0I/v-deo.htmlsi=QYmkwRM2Cl1FuoCu
Thanks for sharing!
But it looks like there should be List.ContainsAll instead of List.ContainsAny.
Was thinking the same.
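For comparison, the stricter variant of the custom column would read like this (Header1/Header2 as placeholders) - it keeps skipping until a row holds every expected header, not just one of them:

Add Custom Column = Table.PromoteHeaders(Table.Skip([Data], each not List.ContainsAll(Record.ToList(_), {"Header1", "Header2"})))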
Brilliant
I have an interesting use case. I have a base file with headers, but the fifth file has one additional column with headers added. As a result, the process you have described breaks, and for the last table I get the error "DataFormat.Error: There were more columns in the result than expected" in the CSV data column.
I have been wrestling with this for a week and my research does not show any way to manage it, although appending tables would seem to handle it.
send me your sample data and description of the problem - goodly.wordpress@gmail.com
Awesome!!!
This may sound like a stupid question, and I'm sure it's something basic, but why do you get
Name = Sheet, Data = Table, while I always have Name = table, Name = Sheet and adjacent
Data = table, Data = table? Oh, and I loved the use of a condition for Skip, which I'd never thought of,
even though, now that I've looked, it does say count or condition.
Damn awesome is right 👍
Amazing
very nice
My question: if we need to bring the data that sits above the headers in as a new column before promoting the headers, how do we go about it?
Power query is magic ! You are a wonderful magician 🪄
Great!
Token of Gratitude!
Thank you so much Ankur
Not as sophisticated, but say you had a table with two columns, Col1 and Col2. There are junk rows at the top. The row with headers has the values "Date" and "Amount". This seems to work and is easy to implement.
let
    Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
    Custom1 = Table.Skip(Source, each [Col1] <> "Date" and [Col2] <> "Amount")
in
    Custom1
fnTableSkipDynamic(Source, "Col1", "Date", "Col2", "Amount")
Function to do this - input column names as text, header values as text:
(sourcetablename as table, col1name as text, header1value as text, col2name as text, header2value as text) =>
let
    return = Table.Skip(
        sourcetablename,
        each Record.Field(_, col1name) <> header1value
            and Record.Field(_, col2name) <> header2value
    )
in
    return
List.MatchesAll can look for both of the headers. 👍
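Roughly, as an untested sketch with placeholder header names:

Table.Skip([Data], each not List.MatchesAll({"Date", "Profit"}, (h) => List.Contains(Record.ToList(_), h)))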
I have a similar problem with a CSV file: there are title characters before the column headings. How could I remove those?
Could you possibly tell me what I have to do with CSV files in place of "Table.TransformColumns(Source, {"Content", Excel.Workbook})", as that doesn't work for CSV?
What if we want to add back the removed rows after promoting the headers?
Fabulous video, but it is difficult to see the "Applied Steps" at the end of the video. Question: what if the junk rows and "junk row data" are spread inconsistently through the spreadsheet? For example, the fund or department information may change, resulting in blank rows between datasets and header rows whenever a new fund source or department is identified in the report. The example shows how to remove junk rows at the top of the report for 3 reports. Do you have any videos where the junk rows are sporadically located throughout the report?
Mark, I'll have to take a look at the data to give you possible ways of solving it.
See if you can pick any tricks from this long video - ua-cam.com/video/_ZKT1raC4P0/v-deo.html
I've shared horizontal and vertical looping techniques in this video.
@@GoodlyChandeep - Hi Chandeep, I could share a file with you via your website, or I could upload it on the PQ training course site. Would that work? Thanks for the great insights!
@@GoodlyChandeep Perfect! Thank you so much Chandeep. Your help and guidance is greatly appreciated. Cheers!
Goodly is just too Godly.
Hi, I'm stuck on the transform-column step. My file is a .csv, not .xlsx, so when I use TransformColumns it shows an error in the Content column. Do you know how to fix it?
If the Name & Kind of the data extracted into PQ are inconsistent and I need to filter out all the sheets I don't need,
how can I work with that sheet?
How to do the same thing but for a CSV file?
What if I need the source file column?
Great
Hi. Can you give me a way of doing this if there is more than one worksheet in the Excel file and you only want to clean one worksheet?
If you're connecting to a single Excel file, you'll have to apply a filter before the Navigation step to restrict the sheet names to the ones you want.
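A minimal sketch of that pre-navigation filter, assuming Excel.Workbook as the source, with the file path and sheet name as placeholders:

let
    Source = Excel.Workbook(File.Contents("C:\Data\Book1.xlsx"), null, true),
    // Keep only the sheet you want before navigating into its [Data]
    Filtered = Table.SelectRows(Source, each [Name] = "Sheet1" and [Kind] = "Sheet")
in
    Filtered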
Amazing