9. How to do full load from On Premise SQL Server till ADLS using Azure Synapse Pipelines

  • Published 25 Aug 2024
  • This video shows how to do a full load from an on-premises SQL Server to ADLS in Parquet format using Azure Synapse Pipelines.
    We used an Azure SQL DB to store the metadata details and loaded the data from the on-premises SQL Server to the ADLS layer.
    To prepare the metadata, we used this query:
    select a.TABLE_SCHEMA as Table_Schema,
           a.TABLE_NAME as Table_Name,
           a.TABLE_CATALOG,
           'select ' + string_agg(b.COLUMN_NAME, ',') + ' from ' + a.TABLE_SCHEMA + '.' + a.TABLE_NAME as Query,
           'Y' as IncrementalFlag,
           1 as IsActive
    from INFORMATION_SCHEMA.TABLES a
    inner join INFORMATION_SCHEMA.COLUMNS b
        on a.TABLE_SCHEMA = b.TABLE_SCHEMA and a.TABLE_NAME = b.TABLE_NAME
    where a.TABLE_TYPE = 'BASE TABLE'
    group by a.TABLE_SCHEMA, a.TABLE_NAME, a.TABLE_CATALOG
    #azuredatafactory
    #azuresynapseanalytics
    #datafactory
    #azure
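
As a cross-check of what that query produces, here is a small Python sketch that mirrors its logic: group column rows per table, then aggregate the column names into one SELECT statement per table. The schema, table, and column names below are invented sample data, not from the video:

```python
# Python mirror of the T-SQL metadata query: per (schema, table),
# aggregate the column names into a SELECT statement and emit one
# metadata row per table. The sample rows below are invented.
from collections import defaultdict

def build_metadata(column_rows):
    """column_rows: iterable of (schema, table, column) tuples, shaped
    like INFORMATION_SCHEMA.COLUMNS. Returns one metadata dict per table."""
    grouped = defaultdict(list)
    for schema, table, column in column_rows:
        grouped[(schema, table)].append(column)
    return [
        {
            "Table_Schema": schema,
            "Table_Name": table,
            "Query": f"select {','.join(cols)} from {schema}.{table}",
            "IncrementalFlag": "Y",
            "IsActive": 1,
        }
        for (schema, table), cols in grouped.items()
    ]

rows = build_metadata([
    ("dbo", "Customers", "CustomerId"),
    ("dbo", "Customers", "Name"),
    ("sales", "Orders", "OrderId"),
])
# rows[0]["Query"] == "select CustomerId,Name from dbo.Customers"
```

A Lookup activity would read rows like these from the Azure SQL DB metadata table, and a ForEach would run one Copy activity per row using the `Query` column as the source query.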

COMMENTS • 41

  • @ravitejak3597
    @ravitejak3597 6 months ago +1

    Amazing work, love how you explained each step in detail.

  • @raghum4938
    @raghum4938 1 year ago +1

    Fully impressed with the flow of explanation and content on the subject. Well done Annu!! Thanks for the video!

  • @MadhurAcousticGuitarist
    @MadhurAcousticGuitarist 2 years ago +1

    Please keep creating videos like this, don't stop here!! Request!!
    It helps beginners a lot. :)

  • @user-kakatya
    @user-kakatya 11 months ago +1

    I am so happy I found your channel! Thank you very much, very well explained and only the necessary information!

  • @vildotube
    @vildotube 2 years ago +1

    Amazing concept and perfect video. I'm eagerly waiting for the incremental load part.

  • @agnesxavier290
    @agnesxavier290 2 years ago +1

    Thanks for the effort! I've learnt a good deal. Eagerly expecting the incremental load. Since I'm a newbie in Azure, this helps a lot in understanding the mistakes I made with my pipeline.

    • @azurecontentannu6399
      @azurecontentannu6399 2 years ago

      Thanks for watching 😊 Will surely create a video on incremental load soon. Stay tuned.

    • @lopu8000
      @lopu8000 2 years ago +1

      @@azurecontentannu6399 please post more videos

  • @sudippandit7051
    @sudippandit7051 1 year ago +1

    Super explanation, Annu!! Good job! 🙂

  • @sourav7413
    @sourav7413 1 year ago +1

    Love this video... really informative...

  • @jackvieiraoficial
    @jackvieiraoficial 6 months ago +1

    Thank you very much, excellent work!

  • @rajbavisi3284
    @rajbavisi3284 1 year ago +2

    @Azure Content: Annu Hello Annu, very nice explanation, thanks for all your efforts. I think you didn't show how to move the data from the ADLS Parquet file to the Dedicated SQL Pool. Could you please also make a video on how we can move data from ADLS to dedicated Azure SQL?

  • @user-zb9kn9pn9d
    @user-zb9kn9pn9d 9 months ago +1

    Hi Annu.. Can you please make a video on how we can connect to an on-premises MS SQL Server from Azure Synapse notebooks? I want to query on-premises tables from a notebook.. is this possible?

  • @user-kw7tq8ig9j
    @user-kw7tq8ig9j 1 year ago +1

    Can you please give the incremental load video link? Thank you so much. Your video is very informative.

  • @abdulkhan1310
    @abdulkhan1310 1 year ago +1

    Is this the preferred method when moving a 3 TB database from on-premises to a Synapse Dedicated Pool? Thank you very much for this informative video.

  • @zahidzafer4704
    @zahidzafer4704 1 year ago +1

    It's a very good video for the full load. Just one query: when do we perform the truncate operation after the first load?

    • @azurecontentannu6399
      @azurecontentannu6399 1 year ago

      Hi Zahid.. Thank you for watching the video. In this video, we just loaded the data to ADLS. In the next video (part 10) we load the same data to the dedicated SQL pool. We have a truncate table statement there before the full load. Check the part 10 video after the 23rd minute.
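
The truncate-then-full-load pattern the reply describes (typically a pre-copy script on the Copy activity's sink) can be sketched as a small statement builder; the schema and table names here are hypothetical examples, not from the video:

```python
# Hypothetical sketch of the truncate-before-full-load pattern: before
# each full load into the dedicated SQL pool, a pre-copy script empties
# the target table so the load is idempotent. Names are examples only.
def pre_copy_script(schema: str, table: str) -> str:
    """Build the TRUNCATE statement a Copy activity's pre-copy script runs."""
    return f"TRUNCATE TABLE [{schema}].[{table}];"

print(pre_copy_script("dbo", "Customers"))
# TRUNCATE TABLE [dbo].[Customers];
```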

  • @Kiran-eb3rm
    @Kiran-eb3rm 9 months ago

    Hey,
    can't we use notebooks to read the on-premises SQL tables into dataframes?

  • @DeepDeep-zd5jq
    @DeepDeep-zd5jq 2 years ago +1

    Nice video. Can we load data from Azure Synapse to on-premises SQL?

  • @harikrishnarevuri5480
    @harikrishnarevuri5480 1 year ago +1

    Hi Annu.. thanks for sharing your knowledge. Instead of querying from the source side, can't we do it log-based, using transaction logs?

    • @azurecontentannu6399
      @azurecontentannu6399 1 year ago

      Hi Hari.. Sorry, I didn't get your question. The metadata query?

    • @harikrishnarevuri5480
      @harikrishnarevuri5480 1 year ago

      @@azurecontentannu6399 Log-based CDC. When a new transaction comes into a database, it gets logged into a log file with no impact on the source system. You can pick up those changes and then move them from the log.

  • @sreenivasacharytatikonda2189
    @sreenivasacharytatikonda2189 2 years ago +1

    Excellent video. Nice explanation. I have one doubt; can you please clarify? If we move on-premises SQL DW databases into Azure Synapse for multiple users, assume each user has one DW database, so we have 8 DW databases on premises. How can we move all 8 DW databases' data into Azure Synapse? Can we differentiate each user's data with a different schema? Please guide me. Thanks for the support.

    • @azurecontentannu6399
      @azurecontentannu6399 2 years ago +1

      Hello Sreenivasa.
      Thanks for watching the video. Yes, you can create a different schema corresponding to each database from the source. No need to create multiple databases; distinguishing based on schema will do the job.
      Also, another approach you can rely on is to create an additional column in all the tables that holds the source DB name. This way you can merge tables (having the same name and table definition) from different source databases into a single table in the destination, with a column holding the source DB name.
      Hope it helps
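
The second approach in that reply (merging same-shaped tables from several source databases into one destination table, tagging each row with its source DB name) can be sketched like this; the database names and rows below are invented for illustration:

```python
# Illustrative sketch of the merge-with-source-db-column approach:
# rows from same-shaped tables in several source databases land in one
# destination table, each row tagged with the database it came from.
# The database names and rows are invented sample data.
def merge_with_source_db(tables: dict) -> list:
    """tables maps source db name -> list of row dicts (same columns).
    Returns one merged list with an extra SourceDb column per row."""
    merged = []
    for db_name, rows in tables.items():
        for row in rows:
            merged.append({**row, "SourceDb": db_name})
    return merged

merged = merge_with_source_db({
    "dw_user1": [{"Id": 1, "Amount": 10}],
    "dw_user2": [{"Id": 1, "Amount": 25}],
})
# Two rows with the same Id stay distinguishable via SourceDb.
```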

    • @sreenivasacharytatikonda2189
      @sreenivasacharytatikonda2189 2 years ago +1

      @@azurecontentannu6399 Thanks for the reply, madam. So we are separating users with different schemas. I think we need to create those new schemas in Azure Synapse before loading the user data.. am I right?

    • @azurecontentannu6399
      @azurecontentannu6399 2 years ago +1

      @@sreenivasacharytatikonda2189
      If you select auto create table, both the schema and the table will be created automatically for you at runtime.

    • @sreenivasacharytatikonda2189
      @sreenivasacharytatikonda2189 2 years ago

      @@azurecontentannu6399 Thank you Madam :)

  • @srinathyellasiri8422
    @srinathyellasiri8422 1 year ago +1

    Hi madam,
    many interviewers ask how to improve pipeline speed while moving data from on-premises SQL to ADLS. Can you please cover this?

  • @sriramch3128
    @sriramch3128 2 years ago +1

    Good job, but can you explain this by taking small data?

    • @azurecontentannu6399
      @azurecontentannu6399 2 years ago

      I have taken a small set of data. Only 3 tables with around 20 records each.

  • @srinathyellasiri8422
    @srinathyellasiri8422 1 year ago

    Hi madam,
    please do an incremental load with the same source and sink.

    • @azurecontentannu6399
      @azurecontentannu6399 1 year ago

      Hi sir
      Please check part 11

    • @prabhatgupta6415
      @prabhatgupta6415 9 months ago

      @@azurecontentannu6399 Is ADF the ideal way to load data incrementally, or do we go for a different approach in Databricks? I don't know, because there isn't much content on YouTube. Tell me the preferred way.

  • @galivenkataramana4142
    @galivenkataramana4142 2 years ago

    How to do a metadata-driven table load, using a pipeline dynamically?