Wonderful as usual, Mahit!
Great video and easy explanation. I hope you come up with a step-by-step series on Databricks for beginners like me who are finding it difficult / struggling to make the switch. Thanks for your efforts!
Awesome, thanks so much. This is really useful for me as a Data Architect; a lot is expected of us with all the varying technologies.
Great video! Can you share the GitHub location of the files used?
Nice video🤩
🥳
Great presentation.
Thanks for watching :)
Absolutely nailed it !!!
Well explained with so much clarity. Thanks 😊
Our pleasure 😊
@SQLBits Can you provide the code? Thanks in advance.
Wonderful session. Sensible questions asked. Cool
Nice tutorial. Thanks for sharing. 👍
Crystal clear explanation, thank you so much. Can you provide that notebook?
@30:03, if you're defining the schema while creating the table, then why pass inferSchema = True in the options map again?
Good observation. I had this doubt as well.
Awesome training. Can you please share the data file? I want to try it.
Learned a lot from this. Thank you for this video!
Glad it was helpful!
Great video, thanks.
Great presentation. No example code. What's zero times zero?
Awesome 🙏😍
Excellent
Can you please provide the notebook as a DBC or ipynb file?
By the way, great session.
Thanks
Hi Mohit, you can find all resources shared by the speaker here: events.sqlbits.com/2023/agenda
You just need to find the session you're looking for and if they have supplied us with their notes etc, you will see it there once you click on it!
@SQLBits thanks so much!
Thanks SQLBits. Question: can we create a "View" on the Gold layer instead of having a "Live Table"?
Wow super stuff thank you sir
Glad you liked it!
I sometimes feel the good old ETL tools like SSIS and Informatica were easier to deal with! 😄
(I am a seasoned on-premises SQL developer, slowly transitioning into the Azure world.)
That was only good if you are working on a legacy, so-called monolithic data architecture. With the amount of data generated growing each day, we need SaaS platforms like Databricks and Snowflake to handle all the data activities.
@SAURABHKUMAR-uk5gg - the whole idea of a service principal ID and key, storing them in a vault, was total rubbish architecture... now they are slowly moving towards Managed Identity... Azure is totally not worth it!
Great stuff, thank you!
How can we leverage it for complex rule-based transformations?
Do Delta Live Tables in all the layers have a filesystem linked to them, as in Hive or Databricks?
Can we create a dependency between two notebooks?
Is there any way to load new files sequentially if a bunch of files arrive at the same time?
How do I initially get started with Databricks: creating clusters, data, notebooks, and setting up the infrastructure? I am not able to move forward because of that! Please help.
I guess this helps:
ua-cam.com/video/EyJgykIcy_I/v-deo.html
How is this different from using DataFrames in PySpark?
Should have shown us how to troubleshoot or debug.
Is there a video on how data is pulled from the original source, like a remote SQL/noSQL server, or some API?
I wonder how data gets to the data lake?
I assume this first extraction should be the bronze layer.
I have a general doubt about Auto Loader. Does Auto Loader need to run in a job or in a manually triggered notebook? Or, once we have written the code, is there no need to touch anything, so that when a file arrives it will run automatically and process the files?
Trigger your notebook that contains your DLT + Auto Loader code with Databricks Workflows. You can trigger it using a schedule, a file arrival, or choose to run the job continuously. It doesn't matter how you trigger the job. Auto Loader will only process each file once.
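For anyone wanting to see what that looks like in practice, here is a minimal sketch of such a notebook using the DLT Python API (the table name and landing path are made up for illustration; spark is the session Databricks provides in the notebook). The pipeline running this code can then be triggered by a Workflows schedule, a file-arrival trigger, or run continuously:

```python
import dlt

# Hypothetical bronze table fed by Auto Loader; the name and path are illustrative only.
@dlt.table(
    name="sales_bronze",
    comment="Raw files ingested incrementally with Auto Loader"
)
def sales_bronze():
    return (
        spark.readStream
             .format("cloudFiles")                 # Auto Loader source
             .option("cloudFiles.format", "json")  # format of the arriving files
             .load("/mnt/landing/sales/")          # landing folder to watch; each file is processed once
    )
```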
Please provide code links.
Can you please provide the code for this?
I don't get the usage of VIEWS between Bronze and Silver tables.
Anyone?
Hi Shzyincu, you can get in touch with the speakers who presented this session via LinkedIn and Twitter if you have any questions!
My understanding (as an "also figuring out Databricks" newb):
* View: Because the difference between bronze and silver in this instance is very small (no granularity changes, no joins, no heavy calculations, just one validation constraint), it doesn't really make sense to make another copy of the table when the view would be just as performant in this case.
* "Live" view: I think this is required because the pipeline needs it to be a live view to properly calculate pipeline dependencies.
Hopefully that understanding is correct, or others will correct me :)
My follow-up question would be: as I think about that validation constraint, it seems functionally identical in this case to just applying a filter on the view. Is that correct? If so, is the reason to use the validation constraint rather than a filter mostly to keep the code consistent between live tables and live views?
You don't materialize everything as new tables every time; we sometimes materialize it as views, e.g. for minor transformations like changing the type of a field.
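For anyone comparing the two approaches, here is a minimal sketch of that view-with-an-expectation pattern in the DLT Python API (the dataset names and the rule are made up, and it assumes a bronze table called sales_bronze exists in the same pipeline). One practical difference from a plain filter is that the expectation also reports data-quality metrics for the pipeline run, which a simple WHERE clause would not:

```python
import dlt
from pyspark.sql.functions import col

# Hypothetical silver-layer view; dataset names and the quality rule are illustrative only.
@dlt.view(
    name="sales_silver_vw",
    comment="Lightweight silver layer: no extra copy, just a type fix and a quality rule"
)
@dlt.expect_or_drop("valid_amount", "amount IS NOT NULL AND amount >= 0")  # failing rows are dropped and counted
def sales_silver_vw():
    # Read the bronze dataset from the same pipeline and apply a minor transformation
    return dlt.read("sales_bronze").withColumn("amount", col("amount").cast("double"))
```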
Hm … Where in these pipelines have you specified the nature of the created/maintained entity (bronze, silver or gold), other than in the name of the object itself? Also, where exactly are these LIVE tables stored? From your demonstration it appears they all live in the same schema/database, while in real life the bronze, silver and gold entities have designated catalogs and schemas.
Databricks has a pathetic UI...
Very well explained, thanks!