A one-stop solution to understand the basics of ETL in Databricks. Thanks, Mr. Raja, for such amazing tutorials on your channel. We really benefit from them. Many thanks again.
Thank you Maheshwar!
Thank you so much, sir. I only had support-type experience in Azure; I just followed all of your videos and now I got placed in 2 MNCs
Thanks Gopi for sharing your experience.
It is really amazing to know that you got placed in MNCs. All the very best!
If you find this channel helpful, please spread the word across data engineering communities so that others can benefit
Beautiful explanation and a very good example of ETL. Thanks a lot for this video. It helped a lot in gaining a clear picture of the ETL process in Databricks.
Thank you
Best video series.
Thank you
Great explanation and easily understandable 👏👏
Glad you liked it! Thanks for your comment
I read data from a huge table in Azure SQL DB and wrote it to ADLS. It created one file of 900 MB instead of partitions. Is there any parameter we can change to create the partitions?
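For anyone hitting the same issue: Spark writes one file per partition, so a single-partition JDBC read produces a single output file. Below is a minimal sketch of one way to get multiple files, assuming jdbcUrl, user and password are defined as in the video; the table, column and paths are placeholders.

```python
# Minimal sketch: read a large Azure SQL table in parallel and write
# multiple files to ADLS. jdbcUrl, user and password are assumed to be
# defined earlier, as in the video; table, column and paths are placeholders.
df = (spark.read
      .format("jdbc")
      .option("url", jdbcUrl)
      .option("dbtable", "SalesLT.SalesOrderDetail")
      .option("user", user)
      .option("password", password)
      .option("numPartitions", 8)                  # parallel JDBC reads
      .option("partitionColumn", "SalesOrderID")   # must be numeric or date
      .option("lowerBound", 1)
      .option("upperBound", 1000000)
      .load())

# Repartition before the write so ADLS receives several files, not one.
(df.repartition(8)
   .write.mode("overwrite")
   .parquet("abfss://container@storageaccount.dfs.core.windows.net/sales/"))
```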
Nice explanations, and this series is really awesome. Please create more videos on Databricks, solving some real-time ingest/export requirements using PySpark.
Sure, will do more videos in this series
Great video with nice and clear instructions. Keep it up. Thanks.
Glad it was helpful! Keep watching
Wow very nicely explained. Thanks a lot for your efforts.
You are most welcome
It's really very helpful. Please make a video on an end-to-end project with ADF and ADB. Thank you for the wonderful videos.
Sure, will do
Instead of ADLS, can we put that data in a Synapse dedicated SQL pool?
Yes, we can load into Azure Synapse and Azure SQL as well. Please watch video number 87 on this channel
@@rajasdataengineering7585 ok thanks
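For readers asking the same thing, here is a minimal sketch of writing a DataFrame to a Synapse dedicated SQL pool using the Databricks Synapse connector; the server, pool, table and staging path are placeholders, and the cluster needs access to the staging storage account.

```python
# Minimal sketch, assuming df is the transformed DataFrame from the video.
# All connection values below are placeholders; the connector stages data
# in ADLS (tempDir) before loading the dedicated SQL pool.
(df.write
   .format("com.databricks.spark.sqldw")
   .option("url", "jdbc:sqlserver://<server>.sql.azuresynapse.net:1433;database=<pool>")
   .option("forwardSparkAzureStorageCredentials", "true")
   .option("dbTable", "dbo.SalesOrderDetail")
   .option("tempDir", "abfss://staging@<account>.dfs.core.windows.net/tmp")
   .mode("overwrite")
   .save())
```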
In Azure SQL DB itself we can do the null handling, join and duplicate deletion. Why are we using a DataFrame? Is there any reason specific to that? Thanks in advance.
The requirement is that we need to move the data from Azure SQL to ADLS after performing these transformations.
Could you please explain how you would do these transformations in Azure SQL itself as part of this requirement?
@@rajasdataengineering7585 Yes sir, I got it. I thought of using an ISNULL operation on product, deleting the duplicates in fact and joining the 2 tables in Azure SQL DB itself, but that would take too much time. Thanks.
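For readers following along, a minimal sketch of the DataFrame version of the transformations discussed above; df_product, df_fact and the column names are assumed stand-ins for the two tables read earlier.

```python
# Minimal sketch, assuming df_product and df_fact were read from Azure SQL DB
# earlier in the notebook; the column names are illustrative.

# Null handling on the product table
df_product_clean = df_product.fillna({"Color": "NA"})

# Remove duplicate rows from the fact table
df_fact_clean = df_fact.dropDuplicates()

# Join the cleaned tables (a left join, as used in the video)
df_joined = df_fact_clean.join(df_product_clean, on="ProductID", how="left")
```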
@@rajasdataengineering7585 Hi Raja, could you please share a few requirements like this for us to think through logically, so that we can get a clear idea of what we will face in our projects?
Sure will do
@@rajasdataengineering7585 You can just share it here, or post it as a separate comment and pin it.
Do you have your code posted somewhere? It is very important for us to follow along
Getting a "driver not found" error at the second step. Please help, how do I solve this?
Where can I find the dataset for this table?
How did you get the tables in the Azure SQL database?
We can create tables using a CREATE TABLE statement. Otherwise you can use the ready-made AdventureWorks database by choosing it in the additional settings while creating the database
It would be really amazing if links to the topics you mention for reference were added in the description, as you are an amazing tutor
What if multiple tables (more than 10) need to be copied from Azure SQL DB to the data lake?
We can create multiple DataFrames reading multiple tables and load them into ADLS
@@rajasdataengineering7585 Yes, but can it be parameterized? If yes, then how?
Yes, parameters can be set up using widgets
@@rajasdataengineering7585 Okay, and BTW your videos are very informative. Keep making such great videos.
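A minimal sketch of the widget-based parameterization mentioned above, assuming jdbcUrl, user and password are defined as in the video; the default table list and output path are illustrative.

```python
# Minimal sketch: drive the table list from a widget and loop over it.
# jdbcUrl, user and password are assumed from the video; the default
# table list and the ADLS output path are placeholders.
dbutils.widgets.text("table_list", "SalesLT.Product,SalesLT.SalesOrderDetail")
tables = dbutils.widgets.get("table_list").split(",")

for table in tables:
    df = (spark.read
          .format("jdbc")
          .option("url", jdbcUrl)
          .option("dbtable", table)
          .option("user", user)
          .option("password", password)
          .load())
    out = f"abfss://container@storageaccount.dfs.core.windows.net/raw/{table}"
    df.write.mode("overwrite").parquet(out)
```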
Hi Raja, you have explained this in detail, thanks for that. But can you please provide the dataset, to do the hands-on activity?
Hi Raja, would you please tell us why you used a left outer join and not an inner join?
Since this is only sample logic he is demonstrating, I think for demonstration purposes it doesn't matter which join he uses.
No words to express my feelings. What a great tutorial, sir. Thanks for this video 👍. Also, could you make a video on how to clean leading spaces and unwanted characters in strings while cleansing data?
Thank you.
Sure, will post a video on your requirement
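Until that video is out, a minimal sketch of the kind of cleansing asked about, using ltrim for leading spaces and regexp_replace for unwanted characters; the column name is an assumption.

```python
# Minimal sketch: remove leading spaces, then strip any character that is
# not a letter, digit or space. The column name is an assumption.
from pyspark.sql.functions import ltrim, regexp_replace, col

df_clean = (df
    .withColumn("ProductName", ltrim(col("ProductName")))
    .withColumn("ProductName",
                regexp_replace(col("ProductName"), "[^A-Za-z0-9 ]", "")))
```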
Could you put all these commands in your GitHub repo and share the repo link so we can practice?
Great tutorial, but I have a question. Can we connect an Oracle database to Databricks?
Yes, we can connect an Oracle database too
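A minimal sketch of such a connection, assuming the Oracle JDBC driver JAR is installed on the cluster; host, service name, table and credentials are placeholders.

```python
# Minimal sketch: read an Oracle table over JDBC. Requires the Oracle JDBC
# driver JAR on the cluster; all connection values below are placeholders.
df_oracle = (spark.read
    .format("jdbc")
    .option("url", "jdbc:oracle:thin:@//<host>:1521/<service_name>")
    .option("driver", "oracle.jdbc.driver.OracleDriver")
    .option("dbtable", "HR.EMPLOYEES")
    .option("user", user)
    .option("password", password)
    .load())
```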
Great work!!!!!!!!!
Thank you! Cheers!
How can I reach you?
Thanks for sharing this info
My pleasure! Welcome
Great, thanks for sharing the video
Glad it helps!
This is a very informative video. Do you also have a video on connecting to SQL DB via managed identity?
Thank you for your comment! No, I don't have a video on managed identity
@@rajasdataengineering7585 I am struggling to find a resource for implementing it; do let me know if you know of any resources for guidance on this topic apart from the MS Learn site
Read from Azure SQL DB and write it back into Azure SQL DB; please make a video on it
Sure, will create a video on this requirement
Hi sir, it is a great, well-structured series with regular topics and interview questions. Can you also share the notebooks for reference and practice? Thanks a lot in advance.
Superb bro 👌 👏
Can you please provide us with some big data end-to-end projects involving all the components?
Hi, I have already created one video on an end-to-end project using multiple components.
Please refer to video number 87:
ua-cam.com/video/dxxXWe4gNTo/v-deo.html
When we learn Azure, are we also required to learn PySpark?
Not really. Azure has many data engineering services like ADF, Synapse Analytics, Databricks etc. PySpark is mainly needed for Databricks developers and the Spark pool inside Synapse.
If your project is on ADF, PySpark does not play any role
Thanks.
A nice video. Can you create another video to automate this pipeline using Airflow?
Sure Paarthi, will create a video on this requirement
Thank you for the content
Thanks Prathap! Glad it helps
I want the dataset that you used. How do I get it?
Hi Shalini, I have used the AdventureWorks sample database in this exercise. It is an open-source dataset
While creating the Azure SQL DB you'll see the option to include sample tables/databases. After creation, a few tables will be present by default
@@kartikeshsaurkar4353 Does that table have any data in it by default? Have you checked?
Yes Ajay, it has sample data as well to practice with
@@rajasdataengineering7585 Thanks Raja for your reply! Your videos are really helpful for preparing for the Azure Data Engineer role.
Hi sir, I am getting the below error when I try to connect via JDBC:
java.sql.SQLException: No suitable driver
Please help with this
.option("driver", jdbcDriver)
Add this line in the second step as well to resolve your issue
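For context, here is how that option fits into the JDBC read, as a sketch; the driver class shown is the standard Microsoft SQL Server JDBC driver, and jdbcUrl, user and password are assumed from the earlier steps.

```python
# Sketch of the full read with the driver option included; jdbcUrl, user
# and password are assumed to be defined in the earlier steps.
jdbcDriver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"

df = (spark.read
      .format("jdbc")
      .option("url", jdbcUrl)
      .option("dbtable", "SalesLT.Product")
      .option("user", user)
      .option("password", password)
      .option("driver", jdbcDriver)   # resolves "No suitable driver"
      .load())
```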
It would be very helpful if you could share the notebook in HTML format as well
Do you have a class where you can train me? Thanks for your video.
How do we apply multiple aggregation rules in a single statement, like SUM("UnitPrice"), SUM("TotalLine"), AVG("Price"), etc.?
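A minimal sketch of one way to do this with agg(), grouping by a key and applying several aggregations in one statement; the grouping key and column names follow the question and may differ in your data.

```python
# Minimal sketch: several aggregations in one agg() call; the grouping
# key and column names are taken from the question and may differ.
from pyspark.sql.functions import sum as sum_, avg

df_agg = (df.groupBy("ProductID")
            .agg(sum_("UnitPrice").alias("sum_unit_price"),
                 sum_("TotalLine").alias("sum_total_line"),
                 avg("Price").alias("avg_price")))
```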
Best example
Thanks
Nice anna
Hope it helps!
Thanks for the great explanation 🙏🙏. I couldn't see the code clearly, so if you don't mind, can you please share it? We can also try it by following your videos 🙏🙏
Explained very well 👌
Can someone help with how to set up JDBC without showing the password in the code?
Hi Anil, yes that is possible.
I have already explained that concept in the key vault integration video. Please go through it once
ua-cam.com/video/c2EmTS_s5zw/v-deo.html
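In brief, a minimal sketch of the idea: pull the password from a Key Vault-backed secret scope with dbutils.secrets.get so it never appears in the notebook. The scope and key names below are assumptions, and jdbcUrl and user are assumed from the video.

```python
# Minimal sketch: read the password from a secret scope instead of
# hard-coding it. Scope and key names are assumptions.
password = dbutils.secrets.get(scope="kv-scope", key="sqldb-password")

df = (spark.read
      .format("jdbc")
      .option("url", jdbcUrl)
      .option("dbtable", "SalesLT.Product")
      .option("user", user)
      .option("password", password)   # never shown in plain text
      .load())
```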
@raja, nice explanation. Can you please share the notebook?
Can you help me with one small assignment, sir, please?
👍👍
It's a nice use case for batch processing but you shouldn't call it Real Time ETL.
Real-time here does not mean streaming data. It means this is one of the real-world use cases for an ETL requirement.
How do we handle bad records in Azure Databricks?
Hi Sravan, I have already posted a video on how to handle bad records. You can refer to that
ua-cam.com/video/w_AWaXnaI94/v-deo.html
@@rajasdataengineering7585 thanks
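For quick reference, a minimal sketch of the badRecordsPath option available on Databricks file sources; the paths are placeholders, and malformed rows are written out under the bad-records path instead of failing the read.

```python
# Minimal sketch: capture bad records while reading a CSV from ADLS.
# badRecordsPath is a Databricks option; both paths are placeholders.
df = (spark.read
      .format("csv")
      .option("header", "true")
      .option("badRecordsPath",
              "abfss://container@account.dfs.core.windows.net/badrecords/")
      .load("abfss://container@account.dfs.core.windows.net/raw/sales.csv"))
```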
No intermediate steps are explained; a beginner will find it difficult to follow! Please make it beginner-friendly. Take this as feedback.
The intermediate steps are already explained in videos 17 and 18. Please watch them as prerequisites to this video
ua-cam.com/video/bZzh7kfBcx4/v-deo.html
ua-cam.com/video/xxN88Ca4ues/v-deo.html
Most companies are asking for it, that's why.