30 minutes into the course and I am already happy with this presentation. Thank you
Glad you enjoyed! Thanks for watching!
Gifted , humble and generous giver. thanks !
Thanks for watching!
Amazing learning material for a data engineer / scientist. Highly recommended
This is absolutely gem of a session and helps to bring a lot of clarity about ASA, huge thanks for sharing it! 🙌
Glad it was helpful!
This is one of the best training videos that I have seen on YouTube.
2:03:33 I had the same issue, but instead of enabling all IPs in Synapse, I went to the SQL Database and, under the "Set server firewall" option, made sure "Allow Azure services and resources to access this server" was set to true. I'm sure you already knew this, but I also know that in the midst of a live demo we are all quick to do what comes to mind first. ;)
WoW! This video is heavily enriched with information. It took me 3 days to finish it up! :)
Such valuable information online, free of cost. Thank you!
Glad you enjoyed it!
Thanks Sir, your video gives a comprehensive overview of ASA.
You are most welcome!
Best video on synapse
Mitch, you are the best!!! Another great video, thank you!!
Excellent presentation. The guy is a born trainer.
Thank you!
What a great video to watch. I feel like I need to pay.
Thanks for watching!
Thank you so much for this video. This was very helpful for me to get an understanding of how to use ASA.
Glad you enjoyed! Thanks!
Love your camera smile at the Very beginning 😚
This was great, thank you so much Mitchell Pearson and Matt!
Glad you enjoyed it!
What a great course - thank you, it was very instructive and covered a lot of ASA topics !
Glad you enjoyed it!
The complete restructure of Data Lake using only one line of code was crazy!
Glad you enjoyed!
Mitchell is a great great trainer.🤗🤗
Thank you! Glad that you enjoyed! :)
Great presentation....really loved it
Brilliant workshop 👍 live debugging made it even more useful 👏
Glad it helped!
Amazing training session. Thank you
Excellent Workshop
Glad you enjoyed it!
Brilliant tutorial, comprehensive and well explained thanks you!
Glad you enjoyed it!
This was enjoyable to watch. Thanks!
Glad to hear it!
Huge thanks for this explanation; it really helped me understand the ASA concepts along with ADF.
What a great video!! Thanks for making it
Mitch is an amazing instructor!
2:03:00 Oh no! You practically disabled the firewall? That hurt my feelings very badly. As an Infrastructure Administrator Associate and an aspiring Solution Architect Expert, I am terribly broken-hearted now. I am crying now 😂
Another great video series, great work!
Learned so much, thanks; very well explained.
Awesome sir !! Keep up the great work 👍
This was so good🙌
Thanks for watching!
Great stuff, thank you! One question: how can we do a real-time sync of application operational data to external tables using event handlers like Azure Functions or webhooks? I want to push real-time data from my daily operational data stores to external tables.
Thanks for the video. A couple of humble suggestions. You kept saying "we are gonna talk about that…" and "I am gonna talk about that…" halfway through the videos. Set the agenda briefly at the beginning and then talk about what you need to talk about. Another one is that you assume your entire audience is skilled with the Azure product set; "You see, this is similar to that…" was said many times.
Anyways, good video
Amazing. Thanks for this.
Wonderful session, thanks for putting this together and presenting it so nicely!
Thank you! Glad you enjoyed it and thanks for watching! -Mitchell Pearson
It's very nice! Unfortunately the dataset isn't shared, so there's no way to actually practice. I checked on Microsoft's site and there's a similar dataset, but the thing is broken and doesn't contain a single Parquet file.
Did you end up finding the dataset? I just found this video; the explanation is pretty good, but I too wanted to work along while watching the video.
I don't comment usually but man, that was very helpful!
As a developer, I find the 'cost' is creeping in at all levels! It is such a shame that MS has now got us by our nuts! We are developers NOT accountants!
It almost seems easier (as you explained) to use Spark to transform/manipulate the DataFrames rather than creating a pipeline that would take several steps to complete.
Very useful..Thanks.
Glad it was helpful!
Great video.
Excellent event! Thank you for sharing.
Glad you enjoyed it!
I want to work along with him while watching the video. Where can I get the training data that you already had loaded in your Synapse Studio?
Super helpful - Thank you so much !! #StayBlessednHappy
Thank you! Glad you enjoyed!
When should Data Factory be used, since Synapse can do ETL too?
I already have an On-Demand subscription, but I don't see any course on Synapse there?
Great video. Is it possible to share the sample files (Taxi data & Internet Sales) ?
Hello, thanks for this amazing video. I'd love to implement this on my own end. Can I get a link to the files used for this? I'd appreciate your response, thanks.
Great session, Thank you!
Glad you enjoyed it!
This was great. Thank you for sharing this.
Glad it was helpful!
It's awesome! Thank you!
Good video, thanks.
At 1:57:00 you use a pipeline to copy data. It might be my personal preference, but I hate using GUIs when I could just write the COPY command in Synapse and execute it directly, so why is the pipeline approach preferred?
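For reference, a minimal sketch of the direct-statement approach this comment describes, with hypothetical storage paths and table names; the Copy activity produces an equivalent load, and the pipeline mainly adds scheduling, monitoring, and retries on top:

```sql
-- Hypothetical account/container/table names; assumes a dedicated SQL pool
-- whose managed identity has Storage Blob Data Reader on the account.
COPY INTO dbo.TaxiData
FROM 'https://mystorageaccount.blob.core.windows.net/taxi/month=*/*.parquet'
WITH (
    FILE_TYPE  = 'PARQUET',
    CREDENTIAL = (IDENTITY = 'Managed Identity')
);
```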
The actual Synapse demo/walkthrough starts at 24:00
I still didn't get what an Azure dedicated SQL pool is. Is it a database?
Hey, thank you for this course. Could you tell me if the files are available for download?
Does the Serverless Pool equate to the main feature of "auto-scaling" from Snowflake??
Do you have a DevOps course to prepare for a Microsoft certification?
Hi @pragmatic works, the session is great. I have one doubt: when you partition the data into separate months, does that mean you are also creating a copy of it? For example, if the data in training was 100 MB and you partitioned it by month into training_output, is the 100 MB duplicated so that you now have 200 MB of data?
Thanks, I got my answer later in the video.
Great video! Where can I find the files used in the demo?
Is Azure Synapse just a way to say Azure with all its services or is it some third party way of interacting with Azure services?
I'm new to azure. I find the video very helpful. I have a doubt regarding my project.
We have two dedicated SQL instances:
one that actually controls ADF, and one for the SQL objects.
Is there any way to connect the two dedicated SQL instances, like what we have in SQL Server: linked servers, replication, Always On, etc.?
Hi, the link to the bootcamp seems to be invalid
Great work !!!!
Great session! Where can we get the deck?
Very concise presentation. Quick question: when you use Spark on Synapse, what metastore does Spark use to create tables?
Thanks Mitchell for this brilliant session.
Will one be able to run 1000+ reports/queries at the same time against dedicated SQL pools? I ask because I believe there is a limitation on concurrency.
And is there such a limitation on the serverless pool?
Also, you mentioned that one doesn't have to provision a SQL pool if concurrency is an issue or the requirements are not aligned with Synapse. I am a bit confused about MPP and concurrency; I would have thought massively parallel processing means one can run many queries at a time. Could you please help me understand this better?
1:15:35 how much data has been read from the data lake? All of it?
Fantastic communication! Thank you for that. I will start following your videos. All the best.
Subscribed...
We are a Tableau shop. How does Power BI compare to Tableau? Can we use the combination of Synapse with Tableau?
Thanks
Good
Is it possible to live-sync Azure Synapse to a SQL DB for SSRS reporting? Or is there any alternative for it? There are 200+ SSRS reports running against the SQL DB data export from Dynamics 365. Can anyone help with a solution here?
Great video. Do companies also use the Synapse serverless pool for data warehousing, with external tables and such, or would you rather use a SQL database or dedicated pool? I work mostly with small companies that have at most 1.5 TB of data, mostly not unstructured. And how about incremental loads? In every example you did, it's a full load. Can you also do incremental loads to .parquet files, for example? If so, how? If you have the time, I'd love to hear your take.
Thanks
Thanks so much
is there any way to get the slides?
Excellent videos, sir. Please share some full Windows Server 2016/2019 videos.
Oh God, you don't share your training scripts anywhere? How can students really practice them?
Without that I am not going to watch further.
I'm new to azure. I find the video very helpful. I have a doubt regarding my project.
I want to do analysis on streaming data using PySpark and pandas (for SQL we use Stream Analytics). HDInsight is one of the options, but it costs per cluster. Is it possible to achieve this using either Databricks or Synapse Analytics?
If you're using Azure Synapse, do you need Azure Data Factory?
Where can I find the "holiday.snappy.parquet" file, please?
Do you offer live trainings?
Great stuff. And I have the following question: around 1:15:00 you create an external table so that there is no need to use the OPENROWSET syntax. If I want to still benefit from partitioning, would I need to create an external table per partition?
I guess yes, you still benefit from partitioning, since the `Select Top 100 * From TaxiData` query is simply syntactic sugar on top of the "real query", which uses OPENROWSET behind the scenes with partitioning applied. Remember he mentioned that the TaxiData external table does not store any "real data" besides metadata. So I can write `Select Top 100 * From TaxiData` and the engine will translate this into the real query that has OPENROWSET in it and only scan my month1 partition. I hope I got it right? 😊
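For what it's worth, my understanding from the Synapse docs (happy to be corrected): an external table defined over a wildcard path reads every matching file, so the usual way to get partition pruning in the serverless pool is a view over OPENROWSET that exposes `filepath()`. A sketch with hypothetical storage paths:

```sql
-- Hypothetical path; filepath(1) returns the value matched by the first
-- wildcard, so a filter on it prunes whole month folders from the scan.
CREATE VIEW dbo.TaxiDataPartitioned AS
SELECT r.*, r.filepath(1) AS [month]
FROM OPENROWSET(
        BULK 'https://mystorageaccount.dfs.core.windows.net/taxi/month=*/*.parquet',
        FORMAT = 'PARQUET'
     ) AS r;
GO

-- Only files under month=1 should be read.
SELECT TOP 100 * FROM dbo.TaxiDataPartitioned WHERE [month] = '1';
```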
Hi @pragmatic Works
The demo given here is great. I am new to Synapse; can you please help me understand how to import the schema from the CDM manifest files and combine it with the CSVs to copy the data? I am trying to import data from a Data Lake where the data has been exported from D365FO, but it contains headerless CSV files plus manifest .cdm.json files that contain the schema. I want to create a view or external table in the serverless SQL pool.
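In case it helps anyone with the same D365FO setup: the serverless pool can't read the .cdm.json manifest itself, so one workaround is to type out the column list from the manifest in a WITH clause over the headerless CSVs. A sketch with hypothetical lake paths, entity, and column names:

```sql
-- Hypothetical path and columns; the files have no header row, so
-- FIRSTROW = 1 and the WITH clause supplies the names/types that were
-- copied manually from the CDM manifest.
CREATE VIEW dbo.CustTable AS
SELECT *
FROM OPENROWSET(
        BULK 'https://mydatalake.dfs.core.windows.net/d365fo/Tables/CustTable/*.csv',
        FORMAT = 'CSV',
        PARSER_VERSION = '2.0',
        FIRSTROW = 1
     )
     WITH (
        ACCOUNTNUM      VARCHAR(20),
        CUSTGROUP       VARCHAR(10),
        CREATEDDATETIME DATETIME2
     ) AS r;
```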
that opening 5s! xD
Great
24:10 Azure Synapse Analytics demo
Where's the course on virtual networks mentioned at 31:02?
ua-cam.com/video/TkLT4HWd558/v-deo.html
A [Full Course] without CI/CD ??? 😕
If the world had more people like him, it would definitely be a better place... Thanks so much, sir, you're the best 💯