FINALLY! Somebody has a solution to this! I've been thinking about this for YEARS!
Great news. I hope you can put it to good use.
I was in the same scenario, Thanks!!!!
Thanks for the video Mark. The need to add comments which are not lost on query refresh is a problem that crops up often. So glad you found such a simple/straight forward way to handle this.
Thank You. It takes a bit of care to ensure you don't lose information. But once that is sorted, it is a nice technique.
To keep from losing data should something change in the source file, you can use a full outer join, I believe - that way you won't have the equivalent of an orphan record. I got that idea from Mynda Treacy. Another way of doing this is to create your comment column (or columns) in the query as custom columns (use double quotes to create an empty string) that uses the table created from the first query. Then, like in this video, after you merge, expand, and load the data, just go back into the query editor and delete from your second query all the steps where you created the custom columns intended to contain your comment or other columns. Great video!
This is brilliant. I don't have immediate use for this but it's a great weapon to have in my arsenal!
This is in one word: “GREAT”. Thanks for sharing.
Thank you Mark for your usual support and information you provided to your followers 👍...
Wishing you all the best in the next Excel Summit event in London 🙏
Thank you. I'm glad you found it useful.
Not long until the Excel Summit now... exciting times. 😁
I would like to thank you again. I have tried it and it works. This will help me on a daily basis, and there's no need for me to track earlier manual inputs.
Thanks again
Great news. Just make sure you’ve got backups, just in case the worst happens.
I have been looking for a solution to this problem for a long time... thanks a lot... it will help me take my use of Power Query to the next level...
You are most welcome 👍
Brilliant! Thank you. Only issue I'm seeing is the second Comment column is coming back after I refresh the data. I deleted the Comment2 column after loading the query, but then it just reappears again. Any ideas would be great.
Thank you so much!! That is great, it really helps me finally to maintain several columns of comments. Great!
Wonderful; This is what I need in some of my working files.
Many thanks
You are welcome!
Thanks for this! Thought I’d have to use an access db to update data saved me quite a few hours!
Time saved - great news 😁
Thanks! This tutorial is definitely easier to understand. It helps me a lot. I hope MS will just give us a manual-input function in the future, so we don't need this hack anymore. Thanks for sharing!
Very useful, it helped me a lot. Thank you very much, Mark! I collect payments from clients and use two columns: one for the date and time of the call, the other for comments. It's normal to have multiple comments on different dates for the same table row. It would be great to have a separate query where the entire history of manual information is recorded and in the main query, in addition to the empty date and comment columns for new comments, have the last date and comment from the history query.
Wow! That's amazing! I've been looking for a solution like this for quite a while now. Mark, your solutions are brilliant! Thank you a million times!
Amazing! Glad I could help. 😁
Nice information. And, great tutorial with clear explanation. 😊
Glad it was helpful! 😁
As ever Mark, excellent. And rather cool too. Ask Dave :)
Thanks, I'm glad it was helpful.
Always ask Dave, he knows everything 😂
I’ve been using this technique since 2019 and can attest: this manual data finds itself in a bit of a closed loop, so it can easily be lost.
It takes extra steps to get it back in again. Backups are your friends.
That is the big caveat, as the manual info doesn't really exist anywhere. So if anything goes wrong, it can easily disappear.
This is genius! Thanks ❤
Glad you like it!
Great tutorial, and probably the easiest one to follow... thanks
Yay!!! Great news. 😁
This is an excellent example of the adage, "Just because something CAN be done doesn't mean that it SHOULD be done." This is a hacky workaround that is so error prone that it is worse than the problem itself.
What method would you use?
The only error-prone part is if your data doesn't have a unique ID to link on. And a unique ID is unlikely to change. But if you have a better method, please share.
This is so very satisfying! I've read and watched several tutorials, but this is by far the simplest and most understandable one. You are a very talented teacher and a credit to the business, Sir 🫡
Thank You, that is very kind of you to say. 😁
Excellent video! Thanks for posting! May I make a minor correction: at minute 3:22, when you add the Index column, you say "the index is from 0 to 3 because there's THREE rows in that first CSV file" - it should be FOUR. Nevertheless, it does not affect the solution provided or the concept.
I was hoping nobody would notice that. 🤭
I decided it made no difference so it wasn’t worth re-recording for. 😬
LOVE this video! I forgot how to do this and spent days trying to remember when you had this video all along. Just wondering though, how do we go about adding more columns later on after already making these queries and connections?
Excellent example - the content you have on your channel is very good.
Do your Power Query courses offer an advanced level?
Thanks
Our current Power Query course is for beginner / intermediate users.
We have got a more advanced course planned for later in the year.
Great. Thank you
You are welcome! 👍
That is brilliant! I've been using xlookups and manual additions to the bottom of my file in a very clunky manner. Your technique will make it all so much better. Many thanks.
You're very welcome! I hope you can put it to good use. 😁
Thank you for the great easy-to-follow video. It worked great! The problem I am having is that when I sort the column, or the list of rows changes because the data in the source (in my case Jira) changes, then the comments are no longer linked to the unique ID. Any tips you can provide to solve this problem?
Does the data out of Jira not give you a unique reference number for each line?
Fantastic. I am new to Power Query and this is exactly what I was looking for. One additional thing though, is it possible to have a hyperlink in one of my manually entered columns? When I refresh, the link seems to be removed. Thanks again for these great tutorials!
Hyperlinks are formatting and not part of the value. So, unfortunately not.
Hi Mark- great video and something I’ve been using in 2 solutions for the last year- I use power automate to back them up each night! - I have another file that could use a self referencing table to allow comments be entered on the fly- however the final output is a pivot table, is it possible to ingest a pivot table back into power query?
A PivotTable is not a data object, so it can’t easily be loaded back into PowerQuery.
You would need to play some advanced formula trickery with named ranges to ensure it included the full range of the PivotTable.
Thanks fir the video,what can we do if there is a formula in the manual columns?
Sorry, it won't work with a formula as it will hard code the result.
brilliant work Mark! Fan from Abu Dhabi!
You're welcome. Thanks for watching. 😁
Fantastic
Thank you so much 😀
Nice workaround. I have files where I can use keywords to assign a category but 5% has to be assigned manually. I created a separate table with Unique and manual categories to fill in that last 5%. I feel like that slows down the query though. I’ll try this. You always have such practical and creative ideas!
Having a separate Table will be more robust, but may not be as easy for users to update. I don't think there is a clear winner. Use what works.
Dude, I've been looking for this for a while. I've learned most of this by deduction, but always had to control the input order of files so I wouldn't lose my manual input order. Self-referencing logic didn't make much sense to me, but I got the gist of how to use it at least. I was wondering: is it possible to add some of the manual inputs automatically through a partial match with a dictionary table?
I'm not sure. I suspect it can be manual input or not manual input. I don't think it will work by mixing an automatic element into the cell.
@exceloffthegrid Thanks for the vid. I'm loading two external Excel tables and combining them in a new query. I load that one and add a comment column. I insert this table, with the comment column, as a new query and merge the combined query with this one.
Now the comment stays in place when adding a row in either of the two external tables. However, when I delete a row, the comment of the adjacent row below seems to disappear. Any options?
Super helpful!! Thank you! I'm having a problem with duplicate values appearing in my table every time I refresh - it duplicates names more and more every time 🤔 Do you have a solution for this??
If you've got duplicate values, then you haven't worked out the unique record element, which is critical for this method to work.
So the Commentary2 column appears only on the first refresh and not on every refresh?
If this gets uploaded to SharePoint, wouldn't it return errors about the current Excel file accessing itself (the table we are refreshing)?
The manual info is added using the Excel.CurrentWorkbook function, so will not be impacted by the file location.
The original data may be impacted by placing onto SharePoint (i.e. the data is not available locally), but you would need to resolve that issue no matter what.
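For anyone wondering what that looks like in M, a minimal self-referencing source step might be something like this (the Table name "ManualData" and the column names are placeholders - substitute your own loaded Table and columns):

```
let
    // Read the loaded output Table back into Power Query from this same workbook
    Source = Excel.CurrentWorkbook(){[Name = "ManualData"]}[Content],
    // Keep only the unique key and the manually entered column(s)
    ManualOnly = Table.SelectColumns(Source, {"UniqueID", "Commentary"})
in
    ManualOnly
```

Because Excel.CurrentWorkbook always reads the file it lives in, the query keeps working wherever the workbook is saved.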
finally an easy solution
It's as easy as I could make it, but it still takes a bit of skill and the right scenario to make it work. 👍
Sir, I thank you very much for this. I am reasonably fluent in PQ, but the possibility to add a 'comment' or a 'ToDo / manage later' bothered me for some time. I played along with your video with some IP-scan data (pretty unique in my home-network) but I found out that the procedure only works if the data is from an external source (e.g. csv) and not part of the same workbook, be it a range or an Excel Table.
Could you please comment on whether I'm correct about this? 🇳🇱
Lovely hack. How do I keep the old comments and the new ones in separate columns?
Before loading the data back into Excel, you would need to add a duplicate "Commentary" column. Let's call it "Previous Commentary".
The Table would then show 2 columns "Previous Commentary" and "Commentary" with the same text.
You could make changes to the "Commentary" column, but it's not until you click Data > Refresh All that it would then be loaded back into Power Query, and the two columns would be the same again.
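In M terms, that duplicate-column step might be a sketch like this (assuming a previous step named PreviousStep and a column named "Commentary"):

```
// Copy the existing comments into a second column before loading to Excel
= Table.DuplicateColumn(PreviousStep, "Commentary", "Previous Commentary")
```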
@@ExcelOffTheGrid thank you so much :)
Hi, I had this working where I had a formula in one of the manually added cells, I had to reconfigure it to add in a few more columns so basically started from scratch - now the formula deletes on refresh, but if I hardcode a comment it stays. any ideas?
I've tried this three times, and when I go to delete the manual entry, it comes back when I refresh. I can't identify the step I missed?
Hi! When I add a column to the table (step 2), it doesn't read as part of the table - what should I do?
But I always get a Commentary2 column whenever I refresh the data. Please help... I deleted the Commentary2 column, but when I refresh again, it creates another one.
Hi,
I have data in Power Query from SharePoint rather than CSV from folder. For me, it does not work. When I delete the original commentary in the table in Excel, the Power Query breaks, as there is no reference to the original commentary column in the main query. Any advice please?
Thank you.
1st comment with❤
Thank you. 😁
"Witchcraft! Burn the witch!"🤣
Presumably if the source files get resorted or removed from the files folder, then this would break the unique reference?
If the source files are changed then, yes, you could lose the commentary, or it could lead to errors.
For some reason, this does not work for me.
I added a 'Judgement' column to capture user input.
Added the table range again with the user input col -> Merged the queries together -> it created a Judgement2 col. which I have deleted.
Now I make changes to Judgement, but when I refresh, the 'Judgement2' col. reappears on the table. Also, my initial values in the Judgement col. do not show against the unique ref. ID against which they were entered, but appear in the Judgement2 col. correctly. As rows increase or decrease in the data set, the values in the Judgement col. do not stick with the row with its unique ID.
Just to add.. my source data is a single file on Sharepoint which will get replaced with a new file with the same structure but refreshed data. Some of the unique ref in my original file may reappear in the refreshed file while some may not.
Does anyone know what to do if the csv file changes and doesn’t have a unique reference? For changing data like reports
how would you look up for the SAME record manually in changed file?
I suppose there must be unique set of values in that row, like date+student+lesson+topic, therefore your unique compound key is this aggregated string
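As a sketch, a compound key like that could be built in Power Query with something like this (the column names are just the examples from the comment above, and "PreviousStep" is a placeholder):

```
// Concatenate several stable columns into one text key, "|"-delimited
= Table.AddColumn(PreviousStep, "UniqueID",
    each Text.Combine({Text.From([Date]), [Student], [Lesson], [Topic]}, "|"),
    type text)
```

The combination only works as a key if those columns together are genuinely unique and never change between refreshes.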
I just update the source data with the manual information, save it, open the query, and refresh the query. The data entered in the source data displays in the refreshed query.
From my lack of more-than-beginner query understanding, I feel stupid asking you this. But I implemented this in our company, and your hack has been working for 10 days now. Suddenly, our production manager's notes are gone.
We are using index + production order ID as the unique ID. My question is: even though the index changes from time to time, will that result in loss of comments? From my basic understanding, no matter the index (even if it changes from 5 to 7), the "new" unique ID is stored in that manual data query and is loaded back into my main query.
Is that correct, or will a change in index result in loss of comments?
However, the comments were there for days, but this morning all of his were gone.
Thank you in advance.
The reference is the most important thing.
If it changes, then you could lose comments or it could be placed against the wrong item.
Can we hide the reference number from appearing in the table which gets loaded into Excel?
You could hide it by hiding the column, or reducing the column width to 0. But you can't remove the reference from the Table.
This kind of works, but whenever I refresh it adds additional rows to my table and adds the columns I delete back to the sheet each time it refreshes.
This video is about adding columns for additional data.
This isn’t the solution you’re looking for. If you want to delete columns or rows, you need an entirely different approach.
It didn't work out when I removed 2 files from the folder and refreshed it.
Did your unique reference change? If so, then - No, it won’t.
It all hinges on unique references.
That won't work in general, because if your imported dataset gets longer, the index created in PBI becomes misaligned, etc. Indexes have to be created in the Excel front end.
Sir, I have a table whose data is coming from Power Query, and I am adding many columns with formulas to it. When I self-reference it, my formulas go away. Please provide me a solution for it.
All your added columns with formulas must be added using Power Query.
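A minimal sketch of adding a calculated column inside Power Query instead of as a worksheet formula (the step and column names here are hypothetical examples, not from the video):

```
// Calculated columns survive a refresh because Power Query rebuilds them each time
= Table.AddColumn(PreviousStep, "Margin", each [Sales] - [Cost], type number)
```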
When I "Create Connection only" on "Manual Info Query", I did not get "Commentary2" in the original Query Table.
Am I missing something? Thanks
Probably, but I'm not sure what. Go through the steps slowly again.
If i delete something from the source, it deletes the other commentary.
If the unique reference changes, then Yes it will.
I am a beginner, and I think I am doing something wrong. I am using an SQL database as the source. I did all the steps, including creating the unique ID, but any time I refresh the query, the comments change their position and the Comments2 column shows up again. What can be the issue?
If you’ve got an SQL Database, I doubt you need to create a unique ref. Doesn’t the data already have a unique ref column?
@ExcelOffTheGrid Unfortunately, I work with data that can appear more than once within the column, and it changes every day (values can change or rows can be deleted). For the ID, I used the columns with values which do not change.
You can create a new query as an anti-join to see which comments were lost during the last refresh.
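As a sketch, that anti-join check might look like this in M (the query names "ManualData" and "MainQuery" are placeholders for your own read-back query and main query):

```
let
    // Rows in the manual-info query whose key no longer exists in the main query,
    // i.e. comments that would be dropped on the next refresh
    Lost = Table.NestedJoin(ManualData, {"UniqueID"},
                            MainQuery, {"UniqueID"},
                            "Matches", JoinKind.LeftAnti)
in
    Lost
```

Loading this as a separate "orphaned comments" table gives you a chance to rescue notes before they vanish.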
Fantastic video. However, when doing exactly as you said, I'm getting millions of duplicates from my existing SQL table - to the extent that Excel tells me there is not enough space to store my data. How do I solve this? And why am I getting this?
Are you sure you're merging on a unique reference with a Left Outer join?
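For reference, a merge on a unique key with a Left Outer join might be sketched like this (the query and column names are placeholders):

```
// Keep every row of the main data; attach matching manual info where the key matches
= Table.NestedJoin(MainData, {"UniqueID"},
                   ManualData, {"UniqueID"},
                   "Manual", JoinKind.LeftOuter)
```

If the key is not unique on both sides, each match multiplies the rows, which is exactly the duplicate explosion described above.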
@@ExcelOffTheGrid Thank you. Found out I had duplicates in my unique ID. Solved - thank you! :)
I am encountering the same issue. I am getting duplicates. So many. I have tried with different files but nothing. I have followed the steps. I don't know what I am doing wrong.
Is there an issue with this situation... when you update the query, if you add new data from a previous period, Excel will update the row data but will not update the comments. The comments made in row 2 will remain there, but row 2 can now have different data from when the comment was made.
The unique reference is critical to this entire process.
If your data already has a unique reference number, then make sure you use that. If you update a file, it will still work.
If the data does not have a unique reference then file management is critical.
If you can’t guarantee that a transaction will remain in the same location, then this process will not work.
This doesn't work when you sort and refresh.
It all hinges on having a unique reference number. Either the number is available in the source data or you need to find a way to create it.
If your data already has a unique reference, sorting will be no problem. So, it's not the sorting which is the issue, it's whether sorting affects your ability to create the unique reference. If it does, it requires your skill to re-build the solution in a way where it doesn't.
@@ExcelOffTheGrid Sorting on its own is fine; it's AFTER sorting AND refreshing that the manual input starts to get misaligned. I have unique references, and I confirmed by checking for duplicates. I followed your videos a couple of times to check if I did it correctly, and every instance gave me the same outcome, unfortunately.
@@kellylee5813
It may be that you need to apply Table.Buffer to enforce the sort order (this is a known issue with sorting). Google it, you'll find some posts about it.
Or it may be that you're loading to a data model, in which case the data order is optimised for the data model, where sorting is irrelevant.
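As a sketch, wrapping the sort step in Table.Buffer to pin the row order looks like this (the step and column names are placeholders):

```
// Buffer the sorted table so later steps cannot re-order the rows
= Table.Buffer(Table.Sort(PreviousStep, {{"UniqueID", Order.Ascending}}))
```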
DIDN'T WORK. I followed your steps, but after loading new data, my comments still moved to other rows....
Then you haven’t got your unique references working correctly.
Does "other rows" mean the row order changed, or that a comment for the "April" file went to "May"?
DOES NOT WORKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA I CANT STAND THIS NO MORE, NOTHING WORKS
Do you have a unique reference? If not, it can't work. If you do, it will work.
Record your steps in a video, publish it to YouTube, then somebody could spot your problem.
Amazing