Thank you very much for taking the time to share your knowledge and create these videos. I have learned a huge amount from them, and they have been essential in getting a tricky Synapse solution over the line for a customer.
You should consider advertising and selling your professional services on the channel as well - you clearly know all of this inside out.
Thank you so much, Paul, for such a wonderful comment 😊
Excellent Annu, very well presented, thanks for investing your time and effort into this.
Thank you for watching the video 😊 Glad to know it helped
Superb!!! I was looking for these same scenarios. This video helped me a lot. Thanks. Keep posting 👍
Thanks for watching the video 😊 Glad to know it helped you
Gone through all your videos; they are very well explained and detailed, and helped me answer scenario-based interview questions. Thanks, Ma'am, for this content. Looking for more videos on this. 👏👏
Thank you so much for watching the videos and sharing feedback 😊 Glad to know it was helpful
Excellent work🙌
Thank you 😊
This topic is just what I needed. Thank you so much!!!
Glad to know you found it helpful 😇 Thanks for watching
Hi, I have a scenario where a single table has multiple unique keys or composite keys. How do we match all the keys with the target table using the key column setting?
@gotimukulshriya1864 Yes, store the keys as a comma-separated list (col1, col2, col3) in a SQL table, dynamically point to the column holding that key column list, and use the createArray function: @createArray(keycol)
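As a rough sketch of how that can look in the copy activity sink JSON (the lookup activity name LookupKeyCols, the column KeyColumns, and the use of split are placeholders of mine, not from the video; split turns the stored comma-separated string into the array that the keys property expects, while createArray fits when the lookup returns each key column as a separate value):

"sink": {
    "type": "AzureSqlSink",
    "writeBehavior": "upsert",
    "upsertSettings": {
        "useTempDB": true,
        "keys": {
            "value": "@split(activity('LookupKeyCols').output.firstRow.KeyColumns, ',')",
            "type": "Expression"
        }
    }
}

The important part is that the expression evaluates to an array of column names rather than a single string.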
Hi, I also have a scenario where I have a few values of type decimal(38,18) on the source, which I am copying into ADLS as a Parquet file. In this case the column is converted to string. While copying this data to the dedicated SQL pool I am facing an issue saying “cannot load string to decimal(38,18)”. Do you have a workaround for this?
Great video, thanks. Do you happen to know if the upsert supports a uniqueidentifier column as a key column?
Can we do deletes as well?
Also, can we use an on-premises SQL database as the source and Azure Synapse as the destination?
Well done. I have a question that you may have thought through. I have 800+ tables in one of my databases for a system dispersed geographically across various locations. The tables are organized around accounts, parties, events, etc. Of the 800+ tables, only several hundred have system-initiated create or update fields (timestampcreate and timestampchange). My Azure repository requires all tables to be present for reporting purposes. To derive virtual timestampcreate and timestampchange values (rather than copying several million records each night), I've linked the tables in views to the nearest tip of the spear (for example, accounts) that does contain a timestampchange field. Do you think this is a strategy that you would suggest? Long and short, when the tip changes, the rest of the tree also changes.
How to do an upsert in a data flow with 2 columns as the key?
Very good content, thank you.
I have a scenario where my source data won't have any updates; only new records arrive along with the old data (with no changes to existing records) on every run. In this case, if I do an upsert based on a key column, all existing records get updated (even though there is nothing to update) and new records get inserted (which is fine).
How do I handle this scenario? I want to avoid the updates: only new data should be inserted and the rest should be ignored.
Go for an incremental load.
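For example, a minimal sketch of an insert-only incremental copy (table, column, and variable names here are just placeholders): keep a watermark of the last value already loaded, filter the source query on it, and leave the sink as a plain insert so existing rows are never touched.

"source": {
    "type": "AzureSqlSource",
    "sqlReaderQuery": {
        "value": "@concat('SELECT * FROM dbo.SourceTable WHERE LoadDate > ''', variables('LastWatermark'), '''')",
        "type": "Expression"
    }
},
"sink": {
    "type": "AzureSqlSink",
    "writeBehavior": "insert"
}

After a successful run, the watermark would be advanced to the maximum LoadDate that was copied, typically with a lookup plus a script or stored procedure activity.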
Great ✌️
Thanks!
I noticed upsert still downloads all the rows, so there is no time saving. Can you confirm?
I had the exact same List`1 error: {
"errorCode": "2200",
"message": "ErrorCode=UserErrorUnexpectedObjectTypeInPayload,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The value of property 'keys' is in unexpected type 'List`1'.,Source=Microsoft.DataTransfer.DataContracts,'",
"failureType": "UserError",
"target": "Src to PreStg",
"details": []
}
But when using @array, the error stays the same. What could be the reason?
I also have the key columns in a table that a lookup looks through.
Hi Ma'am. Is there a way we can integrate incremental models in dbt with Azure Synapse?
Sorry, what is dbt?
Hi, could you please share the previous incremental load video? I am not able to follow the initial steps.
Please watch the 11th video of the playlist. Thank you for watching
Thanks. I have a few doubts. Do you have an email I can contact you on? To perform an upsert, do we require a unique key column or a primary key column?
@rathikavenkatesh2510 Key column - it can be a single column or a combination of columns that makes the row unique
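For example (column names are just placeholders), the copy activity sink accepts multiple key columns as an array, which is how a composite key is expressed:

"upsertSettings": {
    "useTempDB": true,
    "keys": [ "CustomerId", "OrderDate" ]
}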
I am getting the same error in Azure Data Factory; it states: SQL upsert key column '[column_name]' does not exist in the table '[dbo].[InterimTable_c...]'.
In my framework I am using the Temp DB feature; any assistance will be appreciated, thanks.
Hi, is there any way to update rows from an Azure SQL table into a PostgreSQL table using this concept?
Assume the source is a CSV file and the sink is Azure SQL, and the source data doesn't have any primary key column. In this case, how do we build the upsert logic?
Then you need to use the hashing concept: assign a hash value to each record and match it against the target dataset. If it matches, update; else, insert.
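One possible sketch, assuming the CSV has first been copied into a staging table (the activity, linked service, table, and column names here are all made up for illustration): hash the columns on both sides and insert only the rows whose hash does not already exist in the target, e.g. with a Script activity running a MERGE. Since the hash covers every column, a matching row has nothing to update, so matches are simply skipped; if a business key did exist, you would match on the key and update when the row hash differs.

{
    "name": "HashBasedUpsert",
    "type": "Script",
    "linkedServiceName": { "referenceName": "AzureSqlDatabase", "type": "LinkedServiceReference" },
    "typeProperties": {
        "scripts": [
            {
                "type": "NonQuery",
                "text": "MERGE dbo.TargetTable AS t USING (SELECT *, HASHBYTES('SHA2_256', CONCAT_WS('|', Col1, Col2, Col3)) AS RowHash FROM dbo.StagingTable) AS s ON HASHBYTES('SHA2_256', CONCAT_WS('|', t.Col1, t.Col2, t.Col3)) = s.RowHash WHEN NOT MATCHED THEN INSERT (Col1, Col2, Col3) VALUES (s.Col1, s.Col2, s.Col3);"
            }
        ]
    }
}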
I am trying to do an incremental insert and update. Please share the complete steps. Thanks in advance.