Really nice explanation, Murugan. Without creating any pipeline activities manually in Data Factory, you can generate the scripts and activities automatically, which greatly reduces the time needed to finish the task of copying data from SQL Server to ADLS. Thank you so much for sharing this information.
Thanks, Ashok
Glad that you liked it :)
Feel free to share it across. Also feel free to subscribe and hit the bell icon to get notifications on new videos
A new video on Azure / Power Platform topics will be released each week
Super explanation, Murugan. Thank you for the video
Thanks :)
Feel free to like, share and subscribe.
Make sure you press the bell icon to get notifications on the new videos every week
Great video. Now to do this programmatically rather than through the user interface: generation rather than configuration, to be truly metadata driven, for when one wants to do this across multiple sources and hundreds of tables.
Glad that you liked it :)
Feel free to like, subscribe and share across
Nice explanation, Murugan. Did you create any video on building a Databricks Lakehouse solution using ADF?
Nope
Haven't worked much with Databricks yet!
Wonderful. Quick question: how does version control work for the control table? Does it auto-integrate with Azure Repos? If yes, how?
Sorry, can you be more specific? Version control of what?
Thanks for the in-depth video. It was very helpful. If I have different sources (SQL, Oracle, DB2), how would you recommend making entry into the control table for each of these different sources?
If you have different sources, you need multiple metadata-driven pipelines. Properties like the IR name, database type, and file format type cannot be parameterized as of today, so you would need a separate parameterized pipeline targeting each source. They can all use the same control table, though, and each will store its own metadata based on the tables included.
This is documented here as well
docs.microsoft.com/en-us/azure/data-factory/copy-data-tool-metadata-driven
Great demo. Thanks for sharing! Quick question: do you know if this preview feature will be able to support the SAP Table connector? We're trying to load data from SAP ECC. Thanks.
Sorry, not too sure on that. I've not worked with SAP yet!
But since it is supported as a source for the Copy activity, I would expect it to work fine with the metadata-driven copy activity too.
Nice video
Thanks
Feel free to like, share and subscribe
Click the bell icon to get notifications for new videos
Hi,
From where did you get the Sales table?
Sales is a schema under which different tables exist within the test database. It's taken from the AdventureWorks sample database.
Hi sir, need your help.
Is there any other approach for tables that have foreign key relationships?
I am getting a foreign key violation error message.
At which step are you getting the error?
How can we add a WHERE clause to the query?
A WHERE clause in your data extraction query or in the control table one?
@@dataplatformcentral3036 In the metadata-driven copy we can only select the table name; it did not give the option to apply filter criteria.
As an example: select * from emp where dept = 'HR'
I want to include that WHERE clause when copying the emp table. How can I achieve it?
@@ketanmehta3058 When you select the tables/views you want to copy, you are given the option to configure the individual tables/views or to use the same configuration for all. If you click "configure individual", there is an Advanced expander; when you click it, you will see the query in use, which is generally 'select * from table'. You can edit this query however you want. Hope this helps.
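To illustrate the edit described above (a sketch only: the emp table and dept column come from the earlier example in this thread, not from any actual schema):

```sql
-- Default query shown under the Advanced expander for a selected table:
SELECT * FROM emp;

-- Edited in place to include the desired filter criteria:
SELECT * FROM emp WHERE dept = 'HR';
```

The edited query is what the generated copy pipeline will run for that table, so only the filtered rows are copied.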