Great explanation... explained in a very easy way to understand the concept.
Good one Maheer. Along with this, could you add duplicate records from the source, make some columns SCD Type 1 and some SCD Type 2 for the same table, and also cover incremental load in a new session?
Good explanation.
Nice job. Please keep them coming. How about a video on SCD Type 4 implementations?
Good explanation. But I guess you forgot to add a check for whether any of the columns coming from the source file have changed. You should update the row in the target only if you find a difference between the source and the destination.
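The change check this comment describes could be sketched in plain Python. This is only an illustration of the idea (hashing the tracked columns so one comparison detects a change in any of them), not ADF expression syntax; the column names `name`, `city`, and `salary` are made up for the example.

```python
import hashlib

def row_hash(row, cols):
    """Concatenate the tracked attribute values and hash them,
    so a single comparison detects a change in any column."""
    joined = "|".join(str(row[c]) for c in cols)
    return hashlib.md5(joined.encode()).hexdigest()

tracked = ["name", "city", "salary"]
source = {"emp_id": 1001, "name": "Asha", "city": "Pune", "salary": 50000}
target = {"emp_id": 1001, "name": "Asha", "city": "Mumbai", "salary": 50000}

# Only rows where the hashes differ need the update/insert branch.
changed = row_hash(source, tracked) != row_hash(target, tracked)
print(changed)  # True, because the city differs
```

In a mapping data flow the same idea is usually done with a derived column that computes `md5()` over the attributes on both sides and a filter that keeps only rows where the two hashes differ.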
This is a really good video and helpful too. Just one suggestion: can you add record_create_date and record_expire_date and then upload? It would be great.
Hello. How about doing it in SQL Server and not in the query editor? Like doing the mapping in Azure Data Factory, but with the result or output visible in SQL Server. 😊
In the update branch, instead of lookup and filter, we could use an inner join.
Hi,
How do we implement incremental load using a primary key? Can you please explain?
Good explanation. What if I have duplicate rows in the source file? How do I filter them?
Good one, dude. Thanks for explaining.
@@WafaStudies Great explanation. How can I redirect the history (isActive=0) to a different table if I don't want to keep history in the same table? Do you have a video for this, or can you create one? Thank you!
Nice technique, great job! One small nitpick ... I'd prefer if you used true() instead of 1==1 for your Alter Row Update policy :)
Yup. We are good to use the true() function as well. The idea is that the condition should always return true, so that the update policy is applied to all rows 😊
@WafaStudies I am facing a problem implementing SCD2 using the Exists transformation instead of the lookup you used here, but I guess the problem is the same for both implementations. We need to make sure the update inside the table finishes first. If the new records are accidentally inserted into the table first, the lookup will fetch the newly inserted rows as matches too, and therefore all the rows get marked as inactive. But the order of execution of the parallel streams is not in our hands. How do we solve this? Any idea?
Great work Maheer,
How do we load a Parquet file from on-premises to an Azure SQL Database using Azure Data Factory?
Hi Maheer sir, in the case of SCD Type 2 we can use an inner join transformation. It will take only the common rows from the CSV file and the SQL table, based on the primary key column, so there is no need to apply the filter transformation. I mean to say that instead of the lookup transformation followed by the filter transformation, we can directly use an inner join transformation, to simplify.
Am I right? Can we do so?
Create a branch from the source, use Alter Row to update the records in the sink that are present in the source, and in the branch just use insert.
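The inner-join idea discussed above can be illustrated in plain Python: joining source and target on the business key yields exactly the matched rows that the lookup + filter pair would pass to the update branch, while the unmatched source keys feed the insert branch. The keys and names here are invented for the sketch, not taken from the video.

```python
# Source rows and existing target rows, keyed by the business key.
source = {1001: {"name": "Asha"}, 1002: {"name": "Ravi"}, 1003: {"name": "Meera"}}
target = {1001: {"name": "Asha"}, 1002: {"name": "Raj"}}

# An inner join on the key gives the rows for the update branch...
matched = {k: (source[k], target[k]) for k in source.keys() & target.keys()}
# ...and the anti-join (keys only in source) gives the insert branch.
new_keys = source.keys() - target.keys()

print(sorted(matched))   # [1001, 1002]
print(sorted(new_keys))  # [1003]
```

So yes, in principle an inner join can replace lookup + filter for the update side, but you still need the anti-join (or an Exists with negation) to feed the insert side.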
May I know how the surrogate key is generated in the dim table?
I was trying SCD Type 2 using data flows to make it dynamic, but on the first run it fails because I haven't chosen the Inspect schema option to make it work for any delta table. Any workaround for this? At the least it should be able to read the header even though the delta table is empty, but I am getting an error on the source side when the table is empty.
Could you please tell me how your pipeline behaves if you do not change anything? In my case, it is inserting a new row with isrecent=1 and changing the previous value to isrecent=0, but as I am not changing anything, it should not be inserted again.
I have exactly the same question.
If nothing changes, it is not supposed to be added again.
How can we fix this?
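The fix these comments are asking for is to compare the tracked attributes before expiring/inserting, and skip rows that are identical. A plain-Python sketch of that logic (field names `emp_id`, `city`, and the `isActive` flag are illustrative, mirroring the video's pattern rather than quoting it):

```python
def apply_scd2(target_rows, source_rows, key, tracked):
    """Expire the current row and insert a new version only when a
    tracked attribute actually changed; unchanged rows are skipped."""
    current = {r[key]: r for r in target_rows if r["isActive"] == 1}
    for src in source_rows:
        cur = current.get(src[key])
        if cur is not None and all(cur[c] == src[c] for c in tracked):
            continue  # nothing changed: no expire, no insert
        if cur is not None:
            cur["isActive"] = 0              # expire the old version
        target_rows.append({**src, "isActive": 1})
    return target_rows

target = [{"emp_id": 1, "city": "Pune", "isActive": 1}]
unchanged = [{"emp_id": 1, "city": "Pune"}]
print(len(apply_scd2(target, unchanged, "emp_id", ["city"])))  # still 1 row
```

In the data flow itself, the equivalent is a filter (or Exists) that drops source rows whose attribute hash matches the active target row, placed before both the update and insert branches.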
For SCD 1, instead of update we just have to delete that row, and further activities are not required in sink2, right?
Hi, suppose I was archiving rows in a database, copying from one database to another, and I wanted to delete whatever I'm archiving from the source. Is there a place where I could write a query that does that instead of using Alter Row etc.? The expression builder is just not what I need.
Hi Maheer, can we use an inner join instead of lookup and filter?
Can we use SCD2 on real-time data?
Can we do SCD Type 2 on a Delta file using a mapping data flow?
Can you make a video that includes a start date and end date, with the dates updated dynamically for Type 2 SCD? I see that is a necessity, and many people face this issue.
I see a surrogate key is initially inserted for the target record, but the source record has no surrogate key. Can you explain how the surrogate key is mapped for the newly inserted records?
You just don't insert it; the sink will add surrogate keys automatically since the column is an identity. If you add the surrogate key in the mapping, it will fail.
We did not check the MD5 values for attributes whose employee ID is already present in both source and target…
I am getting this error: "Cannot insert explicit value for identity column in table when IDENTITY_INSERT is set to OFF." Can anyone help with this?
Nice explanation, WafaStudies.
I have a doubt: how do we handle the rows which do not have any updates in the source? With this example, even the unaffected data will be updated in the destination unnecessarily. Looking forward to your reply, and thanks in advance.
Yes, same doubt here. Could you please respond, @WafaStudies, as many people have this doubt?
SCD Type 2 was explained properly, but one scenario was not covered: suppose we receive the same record from the source that is already present in the target. In that case too, this logic will create a new record and mark the old record as inactive.
+++ With this data flow, ADF cannot recognise the old data; it literally sets isActive=1 for every row. It would be better with staging, I think.
@WafaStudies, what if we get the same column values from the source as an incremental load? In that case the isActive=0 and isActive=1 rows will both be true duplicates.
In SSIS this is very easy to accomplish; why is it still so cumbersome in ADF?
I have implemented it as per your explanation, but I am facing an issue that the key column does not exist in the sink. Here is the screenshot.
Hi @WafaStudies, at 14:10 I am trying to add the sink as a source, but the table is blank, with no column names or data. Do you perhaps know why that is?
I am trying to add the sink as a source in order to implement Exists and lookup for SCD Type 2, but the source table (from the sink) is blank, with no column names or data. Does anyone perhaps know why that is? How can I resolve it?
Hello Sir,
I want to switch my career from SQL Server developer. Should I go through the playlists below?
1. Azure Basics
2. Azure Functions
3. Azure Data Factory
Please suggest the steps.
Sir, it is not working: the value still remains 1 for all rows, plus it does not recognise the old data; it literally inserts all the data again.
Good video but all the noise from the kids in the background was very distracting and loud.
Thank you, and sorry for that trouble. In old videos that might have happened. I am trying not to have any noise in the other, more recent videos 🙂
Great work Maheer, a couple of observations:
1. A Type 2 dimension needs EffectiveStartDate / EffectiveEndDate too. If we add these columns, updating all history rows will always reset these dates, which defeats the Type 2 idea. It is also bad for performance, as we are always updating all history rows, be it millions.
2. During the first execution, we should be able to verify that, although the source has an entry for EmpId=1001, it really is updated, because only in that case does it make sense to INSERT and UPDATE history rows; otherwise we are simply duplicating rows with no changes.
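Point 1 above can be sketched in plain Python: instead of rewriting every history row for a key, only the currently-open row (the one with no EffectiveEndDate) is closed, and one new row is appended. This is an illustrative sketch only; the column names EffectiveStartDate/EffectiveEndDate come from the comment, and `EmpId`/`City` are invented.

```python
from datetime import date

def expire_and_insert(history, new_row, key, today):
    """Close only the currently-open row (EffectiveEndDate is None)
    instead of rewriting every history row for that key."""
    for row in history:
        if row[key] == new_row[key] and row["EffectiveEndDate"] is None:
            row["EffectiveEndDate"] = today   # touch just the open row
    history.append({**new_row,
                    "EffectiveStartDate": today,
                    "EffectiveEndDate": None})

history = [{"EmpId": 1001, "City": "Pune",
            "EffectiveStartDate": date(2020, 1, 1), "EffectiveEndDate": None}]
expire_and_insert(history, {"EmpId": 1001, "City": "Delhi"},
                  "EmpId", date(2024, 6, 1))
print(history[0]["EffectiveEndDate"])  # 2024-06-01
```

In the data flow, the equivalent is making the Alter Row update condition match only the active row for the key (e.g. isActive=1 or a null end date), rather than a condition that is true for every row.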