watching all episodes! thank you!
Tybul, your videos are gold for anyone preparing to clear DP-203. I am enjoying them!
Glad you enjoy it!
Yet another excellent video. I'd been wondering what the "scoped credential" actually means and what it does. This video has helped me understand it much better. Thank you.
Glad it was helpful!
Great!
25:27 Can't Synapse infer the existing schema from the Parquet file?
It can - take a look at docs: learn.microsoft.com/en-us/azure/synapse-analytics/sql/develop-tables-external-tables?tabs=hadoop#create-and-query-external-tables-from-a-file-in-azure-data-lake
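As a sketch of what that inference looks like in practice: Parquet files carry their own schema in their metadata, so Synapse serverless SQL can read them with `OPENROWSET` and no explicit column list. The Python helper below just assembles such a T-SQL query (the data lake URL is a made-up placeholder, not from the video); you would run the resulting SQL in Synapse Studio.

```python
# Sketch: building an OPENROWSET query for a Synapse serverless SQL pool.
# With FORMAT = 'PARQUET' and no WITH clause, Synapse infers column names
# and types from the Parquet file's own metadata.
# The storage URL below is a hypothetical placeholder.

def openrowset_parquet_query(storage_url, top=None):
    """Return a T-SQL query that reads Parquet via OPENROWSET,
    relying on schema inference (no WITH clause)."""
    select = f"SELECT TOP {top} *" if top else "SELECT *"
    return (
        f"{select}\n"
        f"FROM OPENROWSET(\n"
        f"    BULK '{storage_url}',\n"
        f"    FORMAT = 'PARQUET'\n"
        f") AS rows;"
    )

# Example: preview 10 rows from a (hypothetical) minifigs folder.
sql = openrowset_parquet_query(
    "https://mydatalake.dfs.core.windows.net/bronze/minifigs/*.parquet",
    top=10,
)
print(sql)
```

If you do want fixed, explicitly typed columns (for example, for an external table), you would add a `WITH (...)` column list instead of relying on inference; the docs linked above cover both variants.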
Hi Tybul, thanks for the great content. I have a question: why did you use ADF pipelines instead of Synapse pipelines? Is there any advantage to using ADF here, or any constraint if we create Synapse pipelines?
Hi, there is no advantage to using ADF instead of Synapse. Actually, it's the opposite, due to the tight integration of components within Synapse (pipelines and SQL Pools).
I simply used ADF pipelines because I was already using them during the ingestion phase.
@@TybulOnAzure Thanks Tybul
Thank you very much, Sir. I thoroughly enjoyed it, and what a learning experience; I appreciate the explanation. Just today I finished the entire playlist and am waiting for the next installment. :) Question: how is the Synapse Dedicated SQL Pool different from Databricks SQL Warehouse?
Dedicated SQL Pool is an example of traditional data warehousing that fits into the modern data warehouse architecture. On the other hand, Databricks SQL Warehouse fits into the lakehouse architecture.
I plan to cover Databricks SQL Warehouse once I complete DP-203 and then I'll probably compare those two services.
Hi Tybul, thanks for the great content. Are any new videos available to members this week?
Hi, there is already a new episode available for "Data engineer" members - it was released on Monday.
@@TybulOnAzure Is this different from the Join button? How do I become a "Data Engineer" member?
@@torpatty You just have to click the Join button (or use this direct link: ua-cam.com/channels/LnXq-Fr-6rAsCitq9nYiGg.htmljoin) and then select the membership level. Currently there are two levels available: "Junior Data Engineer" and "Data Engineer".
Hello Piotr!
I have been following your DP-203 episodes for a month. The use cases taught are really informative and help us learn from your vast experience in the data engineering field.
I have a request for you: since the 2024 Paris Olympics are right around the corner, I am thinking of a data streaming project in which we ingest Olympics data from APIs, transform it in the Azure cloud, and save it back to the data lake. This project would help us implement the topics learnt in your episodes. Please guide me by providing a high-level architecture of how this project could be implemented.
Thanks in Advance :)
Just curious: in the Spark Pools section, we actually read the Minifigs table from the dedicated SQL pool itself and wrote it back to the same table in the dedicated SQL pool, but in the diagram you showed the data being read from the data lake and loaded into the dedicated SQL pool. Am I missing something? Thanks
You are right - I read the data from the dedicated SQL Pool, so the diagram is misleading. Sorry for that.
@@TybulOnAzure thank you for the confirmation...appreciate it...awaiting your next series!!!
Will the changes to the DP-203 requirements after July 25 affect the course content?
No, because those changes are only cosmetic.
Hi Piotr, I just saw a comment where you mentioned you still need 9-10 videos to complete the DP-203 course. Could you please name the pending topics, so that I can study them online before my exam in case you haven't uploaded those videos yet?
Sure, I plan to record the following episodes:
1. Additional features in Dedicated SQL Pool.
2. Security in Dedicated SQL Pool.
3. Synapse Serverless SQL Pool.
4. 3rd milestone - serving data.
5. Introduction to streaming.
6. Azure Stream Analytics + Event Hubs/IoT Hub.
7. Microsoft Purview.
8. 4th milestone - streaming & governance.
9. Exam overview & course wrap up.
@@TybulOnAzure thank you
Thanks for the reply
HDInsight is not covered in the topics. Is HDFS needed for a data engineer if we know all the stuff that you teach us?
HDInsight is not listed in the official study guide.
How many videos are there before taking the Azure Data Engineer exam?
If you are asking about the number of episodes yet to be uploaded within the scope of my DP-203 playlist, then the answer is about 9-10.
@@TybulOnAzure Enjoying the crisp explanations. Any plans for other courses?
@@santoshkumargouda6033 Right now I want to focus on finishing DP-203 and then on doing some shorter series/standalone episodes.
@@TybulOnAzure Please make a project-based, end-to-end Azure solution for DE.
@@TybulOnAzure that's great
When do you usually upload new videos?
Every Tuesday
Yes, I usually upload a new episode every Tuesday.