Azure Data Factory | Copy multiple tables in Bulk with Lookup & ForEach

  • Published 20 Jan 2025

COMMENTS • 364

  • @rapchak2
    @rapchak2 3 years ago +30

    Cannot thank you enough for your incredibly well laid out, thorough explanations. The world needs more folks like you :)

  • @satishutnal
    @satishutnal 4 years ago +13

    You are the example for how teaching should be. Just awesome 👍

  • @genniferlyon8577
    @genniferlyon8577 2 years ago +3

    Thank you Adam! I had been trying to follow some other written content to do exactly what you showed with no success. Your precise steps and explanation of the process were so helpful. I am successful now.

  • @priyankapatel9461
    @priyankapatel9461 3 years ago +1

    You have depth knowledge in every service. I learn from scratch using your channel. Keep posting Thanks you and God bless you.

  • @albertoarellano1494
    @albertoarellano1494 4 years ago +13

    You're the best Adam! Thanks for all the help, been watching your tutorials on ADF and they're very helpful. Keep them coming!

  • @quyenpn318
    @quyenpn318 3 years ago +4

    I really really like how you guide step by step like this, it is quite easy to understand. You are the best “trainner” I’ve seen, really appreciated for your time on creating those useful videos.

  • @shaileshsondawale2811
    @shaileshsondawale2811 2 years ago

    What wonderful content you have placed on social media. What a world-class personality you are. People certainly fall in love with your teaching.

  • @apogeeaor5531
    @apogeeaor5531 8 months ago

    Thank you, Adam. I rewatch this video at least twice a year, Thank you for all you do.

  • @pratibhaverma7857
    @pratibhaverma7857 3 years ago

    Your videos are great. This is the best channel on the YouTube platform to learn about ADF. THANKS 🙏😊

  • @sreejeshsreenivasan2257
    @sreejeshsreenivasan2257 5 months ago

    Super helpful. We were breaking our heads over how to migrate 32,000 Oracle tables into ADL. This was so simple and helpful.

  • @amoldesai4605
    @amoldesai4605 4 years ago

    I am a beginner in Azure Data Engineering and you made it simple to learn all the tactics.. thanks

  • @gunturulaxmi8037
    @gunturulaxmi8037 2 years ago

    Videos are very clear for people who would like to learn and practice. Thanks a lot, your hard work is appreciated.

  • @paullevingstone4834
    @paullevingstone4834 3 years ago +1

    Very professionally demonstrated and very clear to understand. Thank you very much

  • @jakirajam
    @jakirajam 2 years ago

    The way you explain is super Adam. Really nice

  • @maimemphahlele1102
    @maimemphahlele1102 3 years ago +1

    Hi Adam
    Your videos are just brilliant. This is a subscription I wouldn't mind paying for to show support. Your lessons are invaluable for learning.

  • @garciaoscar7611
    @garciaoscar7611 1 year ago

    This video was really helpful! you have leveled up my Azure skills, Thank you sir, you have gained another subscriber

  • @naseemmca
    @naseemmca 4 years ago +1

    Adam you are just awesome man! The way you are teaching is excellent. Keep it up.. you are the best...

  • @Ro5ho19
    @Ro5ho19 4 years ago +2

    Thank you! It's underappreciated how important it is to name things something other than "demoDataset", but it makes a big difference both for understanding concepts and for maintainability.

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago +2

      Glad it was helpful! You are of course correct, if it's not demo then take care of your naming conventions.

  • @Vick-vf8ug
    @Vick-vf8ug 3 years ago +1

    It is extremely hard to find information online about this topic. Thank you for making it easy!

  • @waseemmohammed1088
    @waseemmohammed1088 4 years ago +1

    Thank you so much for the clear and nice explanation, I am new to ADF and learning a lot from your channel

  • @markdransfield9520
    @markdransfield9520 1 year ago

    Brilliant teaching style Adam. Very watchable. I particularly like how you explain the background. I've subscribed and will watch more of your videos.

  • @anderschristoffersen2513
    @anderschristoffersen2513 4 years ago +2

    Great and simple walk through, good job Adam

  • @wouldyoudomeakindnes
    @wouldyoudomeakindnes 3 years ago +1

    your skills are in the tops thanks, love to see your channel grow

  • @ahmedroberts4883
    @ahmedroberts4883 2 years ago

    Excellent, Excellent video. This has truly cemented the concepts and processes you are explaining in my brain. You are awesome, Adam!

  • @CoolGuy
    @CoolGuy 2 years ago

    You are a legend. Next level editing and explanation

  • @frenamakenson9844
    @frenamakenson9844 5 months ago

    Hello Adam,
    thanks for this demo, Your Channel is A bless for new learner

  • @xiaobo1134
    @xiaobo1134 3 years ago +1

    Thanks Adam, your tutorials are very useful, hope to see more in the future

  • @amtwork5417
    @amtwork5417 3 years ago +1

    Great video, easy to follow and to the point, really helped me to quickly get up a running with data factory.

  • @sumanthdixit1203
    @sumanthdixit1203 4 years ago +1

    Fantastic clear-cut explanation. Nice job!

  • @ericsalesdeandrade9420
    @ericsalesdeandrade9420 6 months ago

    Amazing video. Complex topic perfectly explained. Thank you Adam

  • @anacarrizo2209
    @anacarrizo2209 1 year ago

    THANK YOU SO MUCH for this! The step-by-step really helped with what I needed to do.

  • @anilchenchu1017
    @anilchenchu1017 1 year ago

    Awesome Adam, there can't be a better way to explain this.

  • @RavinderApril
    @RavinderApril 9 months ago

    Incredibly simplified to learn. .. Great!!

  • @santanughosal9785
    @santanughosal9785 2 years ago

    I was looking for this video. Thanks for making this. It helps a lot. Thanks again.

  • @rajanarora6655
    @rajanarora6655 3 years ago

    Awesome explanation, the way you teach assuming in layman terms is pretty great, thanks!!

  • @radiomanzel8570
    @radiomanzel8570 2 years ago

    It was so perfect, I was able to follow and copy data on the first attempt. Thanks.

  • @amarnadhgunakala2901
    @amarnadhgunakala2901 4 years ago +2

    Thanks Adam, I've been waiting for a video like this on ADF. Please post regularly...

  • @geoj9716
    @geoj9716 3 years ago +2

    You are a very good teacher.

  • @eatingnetwork6474
    @eatingnetwork6474 4 years ago +1

    Thanks Adam, amazing workshop, very clear and easy to follow, thanks for helping, i am wiser now :)

  • @pdsqsql1493
    @pdsqsql1493 3 years ago

    Wow! What Great video, very easy way step by step tutorials and explanations. Well done!

  • @avicool08
    @avicool08 2 years ago +1

    very simple yet powerful explanation

  • @deoroopnarine6232
    @deoroopnarine6232 4 years ago

    Your videos are awesome man. Gave me a firm grasp and encouraged me to get an azure subscription and play around some more.

  • @dev.gaunau
    @dev.gaunau 3 years ago +1

    Thank you so much for sharing these valued knowledge. It's very helpful for me.

  • @wouldyoudomeakindnes
    @wouldyoudomeakindnes 4 years ago

    Thanks a lot for the videos. I'm really grateful to see all the dedication and attention to detail in each video; the explanation, supporting slides, code and demo really cover the material well.

  • @vicvic553
    @vicvic553 3 years ago

    Hey, one thing about English - please guys correct me if I am wrong, but I am pretty sure what I am talking about - you shouldn't say inside a sentence "how does it work", but "what it works". Despite that, the content is awesome!

    • @AdamMarczakYT
      @AdamMarczakYT  3 years ago

      You can if you ask a question. "How does it work" is a question structure, not a statement. it should be "how it works" if I'm stating a fact. You wrote "What it works" but I assume that's a typo. It's one of my common mistakes, my English teacher tries to fix it but it is still a common issue for me ;) Thanks for watching!

  • @eversilver99
    @eversilver99 3 years ago +1

    Excellent video and knowledge sharing. Great Job!

  • @shivangrana02
    @shivangrana02 4 years ago +1

    You are really best Adam! Your tutorial helped me a lot. Thanks

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago +1

      Happy to hear that!

    • @shivangrana02
      @shivangrana02 4 years ago

      @@AdamMarczakYT You are welcome. Please keep up the good work.

  • @rubensanchez6366
    @rubensanchez6366 3 years ago +1

    Very interesting video Adam. I found your idea of storing metadata quite enlightening. It could probably be maintained separately, tracking the last record loaded, so we could use it as an input for delta loads through queries instead of reloading the full table on each run.

    • @AdamMarczakYT
      @AdamMarczakYT  3 years ago

      You can use either the watermark or the change tracking pattern; check this out: docs.microsoft.com/en-us/azure/data-factory/tutorial-incremental-copy-overview?WT.mc_id=AZ-MVP-5003556
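
    A minimal T-SQL sketch of the watermark idea discussed in this thread, assuming a hypothetical control table dbo.WatermarkControl keyed by table name; the pipeline would read the stored value before the copy and advance it afterwards:

      CREATE TABLE dbo.WatermarkControl
      (
          TableName      sysname      NOT NULL PRIMARY KEY,
          WatermarkValue datetime2(3) NOT NULL
      );

      -- after a successful per-table copy, advance that table's watermark
      DECLARE @TableName    sysname      = N'SalesLT.Customer',   -- hypothetical table
              @NewWatermark datetime2(3) = SYSUTCDATETIME();

      UPDATE dbo.WatermarkControl
      SET    WatermarkValue = @NewWatermark
      WHERE  TableName = @TableName;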

  • @hollmanalu
    @hollmanalu 4 years ago +1

    Adam, thanks for all your great video's! I appreciate your work very much! Keep up your great work!

  • @AVADHKISHORE
    @AVADHKISHORE 4 years ago

    Thank you Adam!! These videos are really very helpful and builds the foundation to understand ADF.

  • @ElProgramadorOficial
    @ElProgramadorOficial 3 years ago +1

    Adam, You are the best!. Thanks man!

  • @agnorpettersen
    @agnorpettersen 1 year ago +1

    Very good explanation! I will try to read a list of tables, but not to export them; instead I want to mask certain columns. I guess I have to use a derived column inside the ForEach loop, maybe. Three parameters: schema_name, table_name and column_name. But how do I make something like "update <schema>.<table> set <column> = sh2(<column>) where Key in (select Key from othertable)" work in a derived column context?

    • @agnorpettersen
      @agnorpettersen 1 year ago

      I researched and tried one thing; it seems to work. Have the Lookup and ForEach like in this video, and inside the loop I put the function. There I build a dynamic SQL statement, picking up the variables from the table I have in the Lookup.
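
    A minimal sketch of the dynamic masking statement described in this thread; the schema, table and column names are hypothetical and would come from the ForEach item() in ADF, and HASHBYTES stands in for the "sh2()" hash mentioned above:

      DECLARE @schema_name sysname = N'dbo',        -- hypothetical values; in ADF these
              @table_name  sysname = N'Customer',   -- would be passed in from item()
              @column_name sysname = N'Email';

      DECLARE @sql nvarchar(max) =
          N'UPDATE ' + QUOTENAME(@schema_name) + N'.' + QUOTENAME(@table_name) +
          N' SET ' + QUOTENAME(@column_name) +
          N' = CONVERT(varchar(64), HASHBYTES(''SHA2_256'', CAST(' +
          QUOTENAME(@column_name) + N' AS varchar(max))), 2);';

      EXEC sp_executesql @sql;   -- replaces the column value with a SHA2-256 hex digest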

  • @RobHeim
    @RobHeim 1 year ago

    Thanks!

  • @veerboot81
    @veerboot81 4 years ago

    Hi Adam, very nice work. I built this for a client of mine and found out one important thing: within the ForEach, the activities are not executed as if they work together atomically. What I mean is that if you run two items in parallel using the ForEach block, and within the ForEach you have two activities - say A and B - connected using parameters (item()), then activity A running for item X will not necessarily be paired with item X in activity B, although they are connected!
    So I want to suggest one extra piece of advice: use at most one parameterized activity inside a ForEach block, or, if you need more than one, start a separate pipeline within the ForEach block that contains the multiple activities. These pipelines are started as separate children and do the work in the correct order.
    With kind regards,
    Jeroen

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago

      Hey, not sure I understood what you meant here. Using parameters is not making any connection between the actions.

    • @veerboot81
      @veerboot81 4 years ago

      @@AdamMarczakYT I'm using a ForEach loop to load tables with dynamic statements. If I need more than one activity (like a logging call to SQL Server, a copy activity to load the data, and a logging activity after loading is done), these activities can sit in the ForEach loop itself, but if you load multiple tables in parallel the activities will not follow each other sequentially per table; they will run interleaved, so the logging will not belong to the copy activity, for example. I will see if I can make an example if I find the time. To solve this I always start another pipeline within the ForEach and put the activities in that pipeline. This creates child pipelines in the ForEach loop, ensuring the right order of execution of the activities (logging start, copy, logging end).

  • @leonkriner3744
    @leonkriner3744 1 year ago

    Amazingly simple and informative!

  • @elisehunter3424
    @elisehunter3424 3 years ago

    Brilliant tutorial. Easy to follow and it all works like a charm. Thank you!!

  • @SealionPrime
    @SealionPrime 3 years ago +1

    These tutorials are so useful!

  • @NewMayapur
    @NewMayapur 4 years ago

    fantastic video Adam!! Really helpful to understand the parametrisation in ADF.

  • @e2ndcomingsoon655
    @e2ndcomingsoon655 3 years ago

    Thank you! I really appreciate all you share, it truly helps me

  • @Cheyenne9663
    @Cheyenne9663 2 years ago

    Wow this was explained so well. Thank you!!!

  • @jacobklinck8011
    @jacobklinck8011 4 years ago +1

    Great session!! Thanks Adam.

  • @nathalielink3869
    @nathalielink3869 3 years ago +1

    Awesome. Thank you so much Adam!

  • @mateen161
    @mateen161 3 years ago +1

    Very well explained. Thank you!

  • @TheSQLPro
    @TheSQLPro 3 years ago +1

    Great content, easy to follow!!

  • @RajivGuptaEverydayLearning
    @RajivGuptaEverydayLearning 4 years ago +1

    Very nice video with good explanation.

  • @southernfans1499
    @southernfans1499 2 years ago

    👍👍👍 very good explanation.. 👍👍.

  • @ashokveluguri1910
    @ashokveluguri1910 4 years ago

    You are awesome Adam. Thank you so much for detailed explanation.

  • @prasadsv3409
    @prasadsv3409 4 years ago

    Really great stuff, sir. This is what I am looking for on YouTube.

  • @szymonzabiello2622
    @szymonzabiello2622 4 years ago +1

    Hey Adam. Great video! Two questions regarding the pipeline itself. 1. How do we approach Source Version Control of the pipeline? In SSIS we could export a package and commit to Git or use TFS. How do we approach versioning in Azure? 2. What is the approach to deploy this pipeline in upper environment? Assuming that this pipeline was created in dev, how do I approach deployment in i.e. UAT?

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago +1

      I think this page describes and answers both of your questions. docs.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment?WT.mc_id=AZ-MVP-5003556 thanks for watching :)

  • @supriyanalage4467
    @supriyanalage4467 4 years ago +1

    Thanks Adam. I have one query: while we create .csv files, how can we add a trailer, i.e. a footer at the end of the file, which will have the count of rows? I.e., I want the end statement to be TRAILER|20 if the row count is 20.

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago

      Based on the video you should be able to do it by modifying the query to include a rowcount column for each table. Then use this column to construct the file name using the concat function.

    • @supriyanalage4467
      @supriyanalage4467 4 years ago

      @@AdamMarczakYT Thanks for your quick response Adam. But I don't want the rowcount in the filename; I want it in the footer of the file, i.e., inside the file. Can you please let me know if there is any way to do the same? Thank you in advance!!
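
    One hedged way to get the trailer inside the file itself is to build it in the source query rather than in the file name; a minimal sketch against a hypothetical SalesLT.Customer table (the helper trailer_sort column only forces the trailer to the last row and can be excluded in the copy activity mapping):

      SELECT  CAST(CustomerID AS varchar(30)) AS c1,
              FirstName                       AS c2,
              0                               AS trailer_sort
      FROM    SalesLT.Customer
      UNION ALL
      SELECT  'TRAILER',                      -- trailer record: TRAILER|<row count>
              CAST(COUNT(*) AS varchar(30)),
              1
      FROM    SalesLT.Customer
      ORDER BY trailer_sort;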

  • @GaneshNaik-lv6jh
    @GaneshNaik-lv6jh 10 months ago

    Thank You so much.... Very good explanation, Just Awesome

  • @verakso2715
    @verakso2715 3 years ago +1

    Thanks for your awesome video, it helped me out a great deal

  • @aks541
    @aks541 4 years ago

    Very well explained & succinct. One request - if possible create a video for loading ADW (Synapse) data-warehouse by ADF

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago

      Thanks! I'm waiting for the new Synapse workspace experience to be released to make a video about it ;)

  • @raghavendrasama
    @raghavendrasama 4 years ago +1

    How can we handle failures in bulk copy? Say we have to load data from 10 files and my pipeline fails after 4 files are loaded. If we have to restart the loads, do we need to start from the beginning, or is there any way we can start from the point of failure?

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago

      You can rerun pipeline from last failed activity :)
      azure.microsoft.com/en-us/blog/rerun-activities-inside-your-data-factory-pipelines/

  • @asasdasasdasdasdasdasdasd
    @asasdasasdasdasdasdasdasd 4 years ago +1

    Hi Adam, how would you add a system/custom column in a bulk copy? For example I want to add a pipeline name, date or the value '1' in a column that is shared on all tables.

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago +1

      There are many ways to do this. Simplest and most similar would be loading data into staging tables and calling stored procedure with merge in it. Then you can apply any additional logic you need.
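
    A minimal sketch of the staging-plus-stored-procedure approach suggested above, shown for a single hypothetical table (stg.Customer as staging, dbo.Customer as target); the extra audit columns are populated during the merge, and @PipelineName could be fed from the pipeline's system variables:

      CREATE OR ALTER PROCEDURE dbo.usp_MergeCustomer
          @PipelineName nvarchar(200)
      AS
      BEGIN
          SET NOCOUNT ON;

          MERGE dbo.Customer AS tgt
          USING stg.Customer AS src
              ON tgt.CustomerID = src.CustomerID
          WHEN MATCHED THEN
              UPDATE SET tgt.FirstName   = src.FirstName,
                         tgt.LoadedBy    = @PipelineName,     -- audit columns assumed
                         tgt.LoadedAtUtc = SYSUTCDATETIME()   -- to exist on the target
          WHEN NOT MATCHED THEN
              INSERT (CustomerID, FirstName, LoadedBy, LoadedAtUtc)
              VALUES (src.CustomerID, src.FirstName, @PipelineName, SYSUTCDATETIME());
      END;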

  • @mahammadshoyab9717
    @mahammadshoyab9717 4 years ago +1

    Have you made a similar dynamic-style video for copying data from Gen2 to SQL? If yes, please share the link.
    Thanks

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago

      Hi. I don't plan to make a video on that because it's too similar to this video. You just need to use Get Metadata over the blob instead of the Lookup and then reverse the sink with the source. But it's too similar an approach to make a separate video on. Thanks for watching!

  • @erickcaverolevano4718
    @erickcaverolevano4718 1 year ago

    I did that with MS SQL, but I have an issue when I add new columns in the source. Even though on the sink I put a pre-copy script to drop the table, the error says the new columns don't exist in the sink table.
    Please, could someone help me?

  • @madhankumar483
    @madhankumar483 4 years ago +1

    Instead of Lookup, can we use the Get Metadata activity to import the schema?

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago

      Some connectors support pulling the structure from the source docs.microsoft.com/en-us/azure/data-factory/control-flow-get-metadata-activity?WT.mc_id=AZ-MVP-5003556#supported-connectors but it would be pretty tricky to transform this into a dataset schema. But note that the dataset schema is automatically pulled from the database if you leave the schema empty.

  • @sabareetham.premnath2732
    @sabareetham.premnath2732 2 years ago

    Is it possible to validate different date formats from different source files in the copy activity before inserting into one sink table?

  • @topagarwal
    @topagarwal 4 years ago +1

    Thank you Adam! Any ideas on how we can get table names for Azure table storage?

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago

      You can use REST API to query for table metadata :) docs.microsoft.com/en-us/rest/api/storageservices/query-tables?WT.mc_id=AZ-MVP-5003556

    • @sunkara2009
      @sunkara2009 4 years ago

      @@AdamMarczakYT Crystal clear video!! Thanks Adam. I am trying to copy multiple Azure Table storage tables, but the Lookup activity query does not seem to support REST APIs to get the table list. Any solutions please?

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago +1

      I replied to your second comment, hope it helps :)

  • @feliperegis9989
    @feliperegis9989 4 years ago +1

    Hey Adam, awesome work and explanation! Do you have another video explaining how to deal with massive data copies from tables in bulk using ADF that may resolve issues with maximum data or row limits? Can you make a video with a demo explaining how to deal with the kind of scenarios you mentioned are a story for another day? Thanks a lot in advance!! =D

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago

      Thanks. Well, Lookup shouldn't be used for data but for a metadata-driven approach, so the 5000-row limit is fine here. It is rare that you will copy over 5000 tables/files with different structures, etc. If you do, there are different techniques, but in those cases I would probably shift the approach entirely. Will think about this.
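
    For reference, a minimal sketch of the kind of metadata query the Lookup activity runs in this pattern, returning one row per table for the ForEach to iterate over (schema and table names come from the source database's INFORMATION_SCHEMA views):

      SELECT TABLE_SCHEMA, TABLE_NAME
      FROM   INFORMATION_SCHEMA.TABLES
      WHERE  TABLE_TYPE = 'BASE TABLE';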

  • @gouravjoshi3050
    @gouravjoshi3050 3 years ago +1

    Good one adam sir .

  • @khana04
    @khana04 4 years ago +1

    What if each table has to be copied incrementally every day, so not copying the whole table every day but only new rows? Any suggestion for that? Each table has a column which I can use to identify new records.

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago

      There are many techniques for incremental updates; for example, here is one explained by Microsoft in the ADF docs: docs.microsoft.com/en-us/azure/data-factory/tutorial-incremental-copy-overview?WT.mc_id=AZ-MVP-5003556
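
    A minimal sketch of the per-table delta query for the scenario above, assuming each table has a ModifiedDate-style column and that the previous watermark is kept in a control table (names are hypothetical):

      DECLARE @LastWatermark    datetime2(3) = '2024-01-01',       -- previous watermark
              @CurrentWatermark datetime2(3) = SYSUTCDATETIME();   -- e.g. pipeline start time

      SELECT *
      FROM   SalesLT.Customer                -- hypothetical table; built per item() in ADF
      WHERE  ModifiedDate >  @LastWatermark
        AND  ModifiedDate <= @CurrentWatermark;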

  • @sotos47
    @sotos47 3 months ago

    Thank you for the content. Is there a way to identify which copy activity ran for which table, e.g. through the pipeline output perhaps?
    I don't see that info.

  • @naveenshindhe2893
    @naveenshindhe2893 3 years ago

    Hi, can we execute insert SQL like the below in a Lookup activity? Please let me know.
    INSERT INTO Table1 (Column1) SELECT 'test1';
    SELECT x;

  • @bonggamingtube
    @bonggamingtube 4 years ago +1

    Very informative video. Just need a suggestion: how do I move the CSV to SQL Server, where my CSV file is in a blob in subscription 1 and my SQL Server is in subscription 2, i.e. inter-subscription movement, using Data Factory?

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago +1

      Subscriptions don't matter. Just create appropriate linked services to source/target systems.

    • @bonggamingtube
      @bonggamingtube 4 years ago

      @@AdamMarczakYT Thanks for your comment. Just want to understand: how do I move all the Databricks and Data Factory components from tenant 1 to tenant 2, where both of them are on completely different Azure Active Directories and can't be linked with each other?

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago

      You want to move the data, or the Azure resources?

    • @bonggamingtube
      @bonggamingtube 4 years ago

      @@AdamMarczakYT Both data and Azure resources. Whatever resources are on tenant 1, lift and shift to tenant 2.

  • @priyankapatel9461
    @priyankapatel9461 3 years ago +1

    Could you make another video? I want to dynamically fetch the data from 2 Excel files in blob storage and load it into a SQL database.

    • @AdamMarczakYT
      @AdamMarczakYT  3 years ago

      I do actually plan to do a similar video with SharePoint and logic apps and ADF. :)

  • @mpramods
    @mpramods 3 years ago

    Awesome video Adam.
    I would like to understand the next step on how to loop through the files and load into tables. Do you have a video on that or could you point me to a link with that info?

    • @AdamMarczakYT
      @AdamMarczakYT  3 years ago

      No video on this, but it's very similar, just use GetMetadata activity instead of the lookup :)

  • @krzysztofrychlik9913
    @krzysztofrychlik9913 4 years ago +1

    Thanks! Very helpful videos!

  • @rajesh861000
    @rajesh861000 4 years ago +1

    @Adam, I have created the pipeline up to the Lookup and it's working fine. After that, in the ForEach loop, while debugging, the "Client with IP" address changes each and every time. I have already done the setting in the SQL Server firewall, i.e. added the client IP, but the IPv4 address still changes dynamically while running the ForEach loop, and it asks me to "run sp_set_firewall_rule on the master database to create a firewall rule for this IP address or address range". Please let me know how I can resolve this issue.

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago +2

      You need to select 'allow access for azure services' in the firewall settings on azure sql server. Client with IP is for your own HOME ip, not data factory public IPs.

    • @yemshivakumar
      @yemshivakumar 4 years ago

      @@AdamMarczakYT Great help Adam, was struggling every time updating IP. It's resolved with your support. Thanks a ton.
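
    The portal setting Adam mentions can also be set with T-SQL against the logical server's master database; a hedged sketch (the 0.0.0.0-0.0.0.0 rule is how "Allow Azure services and resources to access this server" is represented):

      EXEC sp_set_firewall_rule
           @name             = N'AllowAllWindowsAzureIps',
           @start_ip_address = '0.0.0.0',
           @end_ip_address   = '0.0.0.0';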

  • @virennegi5322
    @virennegi5322 4 years ago +1

    Sounds interesting, but can it be made to run indefinitely? Example: I would be constantly uploading files to Azure Blob Storage, and the pipeline should pick up each new file and just insert it into Azure SQL or Cosmos DB.

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago

      Loops are not designed for that. Instead, make a pipeline that takes 1 file as input and add a blob trigger. This will trigger a new pipeline run every time there is a new file.

  • @harishp9984
    @harishp9984 3 years ago

    Hi Adam, thanks for the video. If my staging environment is blob storage and I have 25 tables to load from blob storage to Azure SQL, what steps need to be followed? Could you please reply on how to do that? Thanks.

  • @pavankumars9313
    @pavankumars9313 2 years ago

    You are very good 👍 explained well thanks 😊

  • @valentinloghin4004
    @valentinloghin4004 4 years ago +1

    Very nice Adam!!! I haven't checked yet, but do you have the process for reading the CSV files from blob storage and transferring the data to the table? If you have it, can you please provide the link?

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago

      Hey, thank you. I didn't make video on reversed scenario as I thought it might be too similar to this one. Just use Get Metadata instead of lookup to get the list of files and upload them to a table. Note that for this to work files or paths should contain the name of the table so you know which file goes to which table. :)

    • @valentinloghin4004
      @valentinloghin4004 4 years ago

      @@AdamMarczakYT I was able to create the pipeline that reads the files from the blob and transfers the data to a SQL database. I would like to trigger that pipeline when a file arrives in the blob; Event Grid is activated and I created the trigger, but it didn't fire the pipeline. Any guidance? Thank you!

  • @shruthil7913
    @shruthil7913 4 years ago +1

    Hi Adam, is there a way to copy into different tables in SQL? (Example: car CSV to the cars SQL table and plane CSV to the plane SQL table, automatically, without manual intervention.)

    • @AdamMarczakYT
      @AdamMarczakYT  4 years ago

      Hi there. Well this video is all about it, just reverse the order and use GetMetadata for blob storage action instead of the Lookup :)

  • @sourabhsingh1315
    @sourabhsingh1315 1 year ago

    Hi Adam, how do I ingest an Excel file from a container into a Synapse table?

  • @raminroufeh8478
    @raminroufeh8478 3 years ago

    How can I add surrogate keys and an ingested datetime and copy it to a staging SQL database?

  • @niharhandoo84
    @niharhandoo84 6 months ago

    Hi Adam, thanks. I have a requirement where I am doing a history load of around 2k Parquet files, each around 250 MB, using a Copy Data pipeline. The loads are taking 6 hrs. Can you suggest a more effective approach? I am loading from ADLS to MS Fabric lakehouse tables. Appreciate your help; using F64 capacity in Fabric for now.

  • @preetijaiswal9089
    @preetijaiswal9089 3 years ago +1

    Hi Adam! This is really helpful, but I'm doing the reverse, CSV to SQL Server, using Get Metadata > ForEach > copy activity. When I run it as a dynamic migration, all the data types of the SQL Server table columns are nvarchar(max), and I want the source data types. How can I do that? Thanks

    • @AdamMarczakYT
      @AdamMarczakYT  3 years ago +1

      Because ADF treats CSV as if all columns are strings, there is no type detection. You could try using mapping data flows or simply define the SQL table schema before the import.

    • @preetijaiswal9089
      @preetijaiswal9089 3 years ago

      @@AdamMarczakYT So how do I create the table schema beforehand, and dynamically at that, for automation purposes?

    • @AdamMarczakYT
      @AdamMarczakYT  3 years ago +1

      @@preetijaiswal9089 In general you shouldn't define schema dynamically. You should predefine your schema and ETL workflows. But if you really need to do that try a tool like Azure Databricks and just write custom code. It allows type detection too. Mapping Data Flows in Data Factory might help too.

    • @preetijaiswal9089
      @preetijaiswal9089 3 years ago

      @@AdamMarczakYT Thanks Adam, I have gone through all of your ADF tutorials and they are amazing and so helpful. Keep up the amazing work so that we can learn more 🎉
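
    A minimal sketch of predefining the sink schema up front, as suggested in this thread, so the copy lands CSV data in properly typed columns instead of auto-created nvarchar(max) ones (table and column names are hypothetical):

      CREATE TABLE dbo.Cars
      (
          CarID    int           NOT NULL,
          Model    nvarchar(100) NOT NULL,
          Price    decimal(10,2) NULL,
          SoldDate date          NULL
      );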

  • @interestingvideos3894
    @interestingvideos3894 1 year ago

    It seems that Lookup and ForEach do not work together when the Lookup is bound to one linked service and dataset and the ForEach to another... (the Lookup is on Azure SQL Server and the ForEach is on another on-premises server).