Delta Live Tables: Building Reliable ETL Pipelines with Azure Databricks

  • Published 16 Jan 2025

COMMENTS • 55

  • @amadoumaliki · 11 months ago +2

    As usual, Mahit is wonderful!

  • @supriyasharma9517 · 5 months ago +1

    Great video and easy explanation. I hope you come up with a step-by-step series on Databricks for beginners like me who are finding it difficult / struggling to make the switch. Thanks for your efforts.

  • @samanthamccarthy9765 · 11 months ago +1

    Awesome, thanks so much. This is really useful for me as a Data Architect; much is expected from us with all the varying technology.

  • @priyankpant2262 · 9 months ago +1

    Great video! Can you share the GitHub location of the files used?

  • @Databricks · 1 year ago +3

    Nice video🤩

  • @MichaelEFerry · 1 year ago +2

    Great presentation.

    • @SQLBits · 1 year ago

      Thanks for watching :)

  • @PravinUser · 2 months ago

    Absolutely nailed it!!!

  • @ananyanayak7509 · 1 year ago +2

    Well explained with so much clarity. Thanks 😊

    • @SQLBits · 1 year ago

      Our pleasure 😊

    • @ADFTrainer · 1 year ago +1

      @SQLBits Can you provide the code? Thanks in advance.

  • @Rangapetluri · 7 months ago

    Wonderful session. Sensible questions asked. Cool

  • @menezesnatalia · 1 year ago +2

    Nice tutorial. Thanks for sharing. 👍

  • @pankajjagdale2005 · 1 year ago +2

    Crystal clear explanation, thank you so much! Can you provide that notebook?

  • @SAURABHKUMAR-uk5gg · 4 months ago +2

    @30:03, if you're defining the schema while creating the table, then why select map(inferSchema = True) again?

    • @Rafian1924 · 1 month ago

      Good observation. I also had this doubt.
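
    On the inferSchema point: declaring a schema and enabling inference at the same time is redundant, since Spark ignores inferSchema once an explicit schema is supplied. A minimal sketch of the distinction, assuming a hypothetical orders CSV source with made-up column names and path:

        import dlt
        from pyspark.sql.types import StructType, StructField, IntegerType, StringType

        # Hypothetical schema, for illustration only.
        orders_schema = StructType([
            StructField("order_id", IntegerType()),
            StructField("customer", StringType()),
        ])

        @dlt.table(name="orders_bronze")
        def orders_bronze():
            return (
                spark.read.format("csv")
                .option("header", "true")
                .schema(orders_schema)            # the explicit schema wins...
                # .option("inferSchema", "true")  # ...so inference adds nothing here
                .load("/mnt/raw/orders/")         # hypothetical path
            )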

  • @anantababa · 10 months ago +1

    Awesome training. Can you please share the data file? I want to try it.

  • @starmscloud · 1 year ago +1

    Learned a lot from this. Thank you for this video!

    • @SQLBits · 1 year ago

      Glad it was helpful!

  • @germanareta7267 · 1 year ago +3

    Great video, thanks.

  • @trgalan6685 · 1 year ago +1

    Great presentation. No example code. What's zero times zero?

  • @Rafian1924 · 1 month ago

    Awesome 🙏😍

  • @benim1917 · 4 months ago

    Excellent

  • @MohitSharma-vt8li · 1 year ago +2

    Can you please provide us the notebook as a DBC file or ipynb?
    By the way, great session.
    Thanks

    • @SQLBits · 1 year ago

      Hi Mohit, you can find all resources shared by the speaker here: events.sqlbits.com/2023/agenda
      Just find the session you're looking for and, if the speakers have supplied us with their notes etc., you will see them there once you click on it!

    • @MohitSharma-vt8li · 1 year ago

      @SQLBits thanks so much

  • @prashanthmally5765 · 8 months ago

    Thanks SQLBits. Question: can we create a "View" on the Gold layer instead of having a "Live Table"?
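
    A gold-layer view is possible in DLT; the trade-off is that a @dlt.view is recomputed on demand and is visible only within the pipeline, while a live table is materialized and published for downstream consumers such as BI tools. A minimal sketch, assuming a hypothetical sales_silver table:

        import dlt
        from pyspark.sql import functions as F

        # Gold aggregate as a view: no extra storage, recomputed when read,
        # and scoped to this pipeline rather than published to the metastore.
        @dlt.view(name="sales_by_region_gold")
        def sales_by_region_gold():
            return (
                dlt.read("sales_silver")          # hypothetical silver table
                .groupBy("region")
                .agg(F.sum("amount").alias("total_amount"))
            )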

  • @srinubathina7191 · 1 year ago +1

    Wow, super stuff. Thank you, sir!

    • @SQLBits · 1 year ago

      Glad you liked it!

  • @artus198 · 1 year ago +6

    I sometimes feel the good old ETL tools like SSIS and Informatica were easier to deal with! 😄
    (I am a seasoned on-premises SQL developer, slowly transitioning into the Azure world.)

    • @SAURABHKUMAR-uk5gg · 4 months ago

      That was only good if you were working on a legacy, so-called monolithic data architecture. With the amount of data generated growing each day, we need SaaS platforms like Databricks and Snowflake to perform all the data activities.

    • @artus198 · 4 months ago

      @SAURABHKUMAR-uk5gg The whole idea of a service principal ID and key, stored in a vault, was total rubbish architecture. Now they are slowly moving towards Managed Identity... Azure is totally not worth it!

  • @walter_ullon · 8 months ago

    Great stuff, thank you!

  • @Chandurkar-i2y · 1 year ago

    How can we leverage it for complex rule-based transformations?

  • @ashwenkumar · 11 months ago

    Do Delta Live Tables in all the layers have a filesystem linked to them, as in Hive or Databricks?

  • @lostfrequency89 · 10 months ago

    Can we create a dependency between two notebooks?
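
    For what it's worth, a single DLT pipeline can be backed by several notebooks, and dlt.read() creates the dependency edge regardless of which notebook defines the upstream table. A minimal sketch with hypothetical table names and paths:

        import dlt

        # Notebook A, attached to the pipeline:
        @dlt.table
        def customers_bronze():
            return spark.read.format("json").load("/mnt/raw/customers/")  # hypothetical path

        # Notebook B, attached to the same pipeline; dlt.read() wires the dependency:
        @dlt.table
        def customers_silver():
            return dlt.read("customers_bronze").dropDuplicates(["customer_id"])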

  • @Chandurkar-i2y · 1 year ago

    Is there any way to load new files sequentially if a bunch of files arrive at the same time?
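
    One option worth checking is Auto Loader's cloudFiles.maxFilesPerTrigger setting, which caps how many files each micro-batch picks up, so a burst of arrivals is worked through in smaller batches (strict arrival order is not guaranteed). A minimal sketch with a hypothetical landing path:

        import dlt

        @dlt.table(name="events_bronze")
        def events_bronze():
            return (
                spark.readStream.format("cloudFiles")
                .option("cloudFiles.format", "json")
                .option("cloudFiles.maxFilesPerTrigger", "1")  # one file per micro-batch
                .load("/mnt/landing/events/")                  # hypothetical path
            )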

  • @saikeerthanakattige617 · 4 months ago

    How do I initially get started with Databricks: creating clusters, data, and notebooks, and setting up the infrastructure? I am not able to move forward because of that! Please help.

    • @Vishnu-Kanth-01 · 4 months ago

      I guess this helps:
      ua-cam.com/video/EyJgykIcy_I/v-deo.html

  • @M0RZ3N · 2 months ago

    How is this different from using DataFrames in PySpark?
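
    Roughly: with plain PySpark DataFrames you orchestrate the reads and writes yourself, whereas DLT is declarative, so the pipeline works out dependency ordering, retries, and data-quality tracking for you. A minimal sketch of the contrast, with hypothetical names and paths:

        # Plain PySpark: imperative; you own the write, ordering, and monitoring.
        (spark.read.format("json").load("/mnt/raw/events/")
            .filter("event_type IS NOT NULL")
            .write.format("delta").mode("overwrite").saveAsTable("events_clean"))

        # DLT: declarative; the framework builds the graph and records expectations.
        import dlt

        @dlt.table
        @dlt.expect("valid_type", "event_type IS NOT NULL")
        def events_clean():
            return spark.read.format("json").load("/mnt/raw/events/")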

  • @guddu11000 · 11 months ago

    Should have shown us how to troubleshoot or debug.

  • @olegkazanskyi9752 · 1 year ago

    Is there a video on how data is pulled from the original source, like a remote SQL/NoSQL server or some API?
    I wonder how the data gets to the data lake.
    I assume this first extraction should be the bronze layer.

  • @thinkbeyond18 · 1 year ago

    I have a general doubt about Auto Loader: does it need to run in a job, or in a notebook triggered manually? Or, once we've written the code, is there no need to touch anything, so that whenever a file arrives it runs automatically and processes the files?

    • @Databricks · 1 year ago +1

      Trigger your notebook that contains your DLT + Auto Loader code with Databricks Workflows. You can trigger it using a schedule, a file arrival, or choose to run the job continuously. It doesn't matter how you trigger the job. Auto Loader will only process each file once.
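
    A minimal sketch of the pattern described in the reply above, assuming a hypothetical landing path. The trigger (schedule, file arrival, or continuous) lives in the Workflows/pipeline configuration rather than in the code; Auto Loader's checkpoint is what guarantees each file is ingested once:

        import dlt

        @dlt.table(name="invoices_bronze")
        def invoices_bronze():
            # Auto Loader records already-ingested files in its checkpoint,
            # so re-running the job never reprocesses a file.
            return (
                spark.readStream.format("cloudFiles")
                .option("cloudFiles.format", "csv")
                .option("header", "true")
                .load("/mnt/landing/invoices/")  # hypothetical path
            )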

  • @ADFTrainer · 1 year ago +1

    Please provide code links.

  • @supriyasharma9517 · 5 months ago

    Can you please provide the code for this?

  • @TheDataArchitect · 1 year ago

    I don't get the usage of VIEWS between Bronze and Silver tables.

    • @TheDataArchitect · 1 year ago

      Anyone?

    • @SQLBits · 1 year ago

      Hi Shzyincu, you can get in touch with the speakers who presented this video via LinkedIn and Twitter if you have any questions!

    • @richardslaughter4245 · 10 months ago +1

      My understanding (as an "also figuring out Databricks" newb):
      * View: Because the difference between bronze and silver in this instance is very small (no granularity changes, no joins, no heavy calculations, just one validation constraint), it doesn't really make sense to make another copy of the table when a view would be just as performant in this case.
      * "Live" view: I think this is required because the pipeline needs it to be a live view to properly calculate pipeline dependencies.
      Hopefully that understanding is correct, or others will correct me :)
      My follow-up question would be: that validation constraint really seems functionally identical to just applying a filter on the view. Is that correct? If so, is the reason to use the validation constraint rather than a filter mostly to keep code consistency between live tables and live views?

    • @anilkumarm2943 · 6 months ago

      You don't materialize a new table every time; sometimes we materialize it as a view, e.g. for minor transformations like changing the type of a field.
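
    On the expectation-vs-filter question above: the surviving rows can be identical, but an expectation also surfaces dropped-row counts in the pipeline's data-quality metrics and event log, while a plain filter drops rows silently. A minimal sketch with hypothetical table names:

        import dlt

        # Expectation: violating rows are dropped AND counted in quality metrics.
        @dlt.view(name="orders_silver_vw")
        @dlt.expect_or_drop("valid_amount", "amount > 0")
        def orders_silver_vw():
            return dlt.read("orders_bronze")      # hypothetical bronze table

        # A functionally similar filter, but with no metrics or event-log trail:
        #   dlt.read("orders_bronze").filter("amount > 0")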

  • @tratkotratkov126 · 9 months ago

    Hm… Where in these pipelines have you specified the nature of the created/maintained entity (bronze, silver, or gold), other than in the name of the object itself? Also, where exactly are these LIVE tables stored? From your demonstration they all appear to live in the same schema/database, while in real life the bronze, silver, and gold entities have designated catalogs and schemas.
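
    For what it's worth, in demos like this the medallion layer usually is only a naming convention: nothing in the table definition marks it bronze, silver, or gold, and the storage target (catalog/schema) comes from the pipeline configuration rather than the notebook. A minimal sketch of tagging a table with the commonly used "quality" property, with hypothetical names:

        import dlt

        # The pipeline settings (target catalog/schema) decide where this lands;
        # the "quality" property is an optional, purely informational tag.
        @dlt.table(
            name="customers_silver",
            comment="Silver: cleaned customer records",
            table_properties={"quality": "silver"},
        )
        def customers_silver():
            return dlt.read("customers_bronze")   # hypothetical bronze table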

  • @Ptelearn4free · 8 months ago

    Databricks has a pathetic UI...

  • @freetrainingvideos · 6 months ago

    Very well explained, thanks.