Worked flawlessly. I went to a few other sources to see how this is done; this was by far the simplest way to name columns by position. Thank you so much for your content.
Thank you for saying so. We really appreciate your viewership.
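For anyone curious what renaming by position looks like in M, here is a minimal sketch. The step name "Source", the column index, and the new name "Amount" are all illustrative assumptions, not taken from the video:

```
let
    // "Source" is a hypothetical prior step holding the table.
    Source = Excel.CurrentWorkbook(){[Name = "SalesTable"]}[Content],
    // Rename the second column (index 1) to "Amount" without hard-coding
    // its current name, so the step survives upstream name changes.
    RenamedByPosition = Table.RenameColumns(
        Source,
        {{ Table.ColumnNames(Source){1}, "Amount" }}
    )
in
    RenamedByPosition
```

Looking up the name via Table.ColumnNames at refresh time is what makes the step position-based rather than name-based.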
Good knowledge. Kindly upload more videos on Power Query.
I definitely will.
Excellent as always. Well explained and crystal clear. Thank you.
Thank you for taking the time to watch and say such nice things.
Just what I was looking for. Thanks
Glad to see that it helped.
Would there be any reason why a query slows down significantly when using this same type of reference to remove a column instead of renaming? My dataset is about 16,000 rows.
Not that I'm aware of. I'd need to see more detail about your data and M code to run some tests.
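If it helps anyone testing this, removing a column by position uses the same name lookup as renaming — a minimal sketch, with "Source" standing in for whatever prior step holds your table:

```
// Remove the third column (index 2) by looking up its current name.
// On a table of roughly 16,000 rows this step alone shouldn't be slow;
// if the query drags, the cause is usually elsewhere (e.g. the source
// connection or a later step that defeats query folding).
RemovedByPosition = Table.RemoveColumns(
    Source,
    { Table.ColumnNames(Source){2} }
)
```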
Hello, good day to you, sir.
I just can't put into words my absolute gratitude for your content and flow. Perhaps your delivery simply suits my brain's input wavelength, datatype, or whatever, but I really find it fascinating. I don't usually comment as I watch other videos (though I do like and subscribe for support) 😊 so thank you so very much! In fact, I even watched your other content even though I knew from the titles it might not have anything useful or new for me. And I was wrong, as I myself suspected: there's always something in every one of your videos. Unique. Different. Of-another-angle. Rare. Unthinkable. Direct. Every bit of every step is complete and just enough for anybody to understand, or at least for someone like me with no data or computer background to learn from your lectures.
Anyway, correct me if I'm wrong: the data table I would end up with would always contain only the entries from the latest file that someone generated and dumped into the source folder, correct? The applied settings would replace the old entries with the new ones.
What if, besides the entries from the latest file, I would also like to retain some of the entries from the older files in that folder? And what if there are duplicate entries? For example, someone generates and dumps today's entries file into the folder, and then someone else does the same, so there will be some overlapping duplicate entries. I can think of using combine functions and then somehow using a filter to get a unique list and remove the duplicates. Would that be the best way to keep both accuracy and speed at their best? What are the alternatives?
Not sure if I presented my enquiries clearly. I'm just imagining some scenarios and trying to come up with the most efficient solutions.
WOW, Hanif!!! Your comments are probably the nicest thing anyone has ever said to me as a teacher. I really don’t know what to say except, “thank you, thank you, thank you”. You have made my day, week, month, and year. Although this channel is still in its infancy (less than 1 year old), knowing that the videos are helping you to this degree makes all the work worthwhile. Your kind words make me want to keep going and produce videos of the highest quality and value that I can. Thank you so much for your viewership and support.
Regarding your “data refresh” question: yes, the “old” data would be replaced with the “new” data. Power Query has many mechanisms for bringing in data in an additive way as opposed to a replacement way. How it's done depends on the data source and its structure. There is no single solution.
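One common additive pattern is to combine every file in the source folder rather than keeping only the latest one — a rough sketch only, assuming CSV files with headers and an illustrative folder path:

```
let
    // Pull in every file in the folder, not just the newest dump.
    Source   = Folder.Files("C:\Data\Dumps"),   // path is illustrative
    CsvsOnly = Table.SelectRows(Source,
                   each Text.EndsWith([Name], ".csv")),
    // Parse each file's binary content into a table.
    Parsed   = Table.AddColumn(CsvsOnly, "Data",
                   each Table.PromoteHeaders(Csv.Document([Content]))),
    // Append all the per-file tables into one.
    Combined = Table.Combine(Parsed[Data])
in
    Combined
```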
As far as the “duplicate entry” issue is concerned, there are also many ways to reject/filter duplicate records. It all boils down to the aforementioned data source and structure. It’s difficult to provide an exact point-by-point set of steps without seeing the data, but it can always be done.
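As one example among those many ways, once the files are appended you could drop duplicates on a key — a sketch only, where "Combined" is a hypothetical prior step holding the appended table and the column names are assumptions:

```
// Keep one row per OrderID/Date pair. Which duplicate survives
// depends on row order, so sort first if that matters.
Deduped = Table.Distinct(Combined, {"OrderID", "Date"})
```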
Thanks again for your very kind words. Cheers.
Great technique. Many thanks!
Thank you!
You're welcome!
Thank you
Thank YOU for taking the time to watch and comment!!!
Thank you.
Thank YOU for watching.
Great!
Glad you found it helpful. Cheers!
Nice. Please put a link to the Excel file in the description.
The sample files have been added as a download link in the video description. Thank you for taking the time to watch. (Did you see today's video? Cool stuff.)