Azure Synapse Analytics - Automating Serverless SQL Views
- Published 5 Oct 2024
- We're seeing a common pattern emerge: people create views over OPENROWSET queries so they can query files in the lake directly from BI tools like Power BI, using Synapse Serverless SQL pools. It's a great pattern, but people are hand-cranking the view creation... and that's no good.
In this video Simon talks through how you would go about creating views programmatically, using Serverless SQL Stored Procs & Dynamic SQL to build up the functionality you need to automate this arduous task.
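The video doesn't ship its scripts, but the core idea it describes can be sketched as a serverless stored procedure that builds a CREATE VIEW statement with dynamic SQL. All names and the file path below are hypothetical, and in a serverless pool the BULK path typically needs to be a full https URL or be paired with a DATA_SOURCE:

```sql
-- Hypothetical sketch: create a view over an OPENROWSET query via dynamic SQL.
CREATE OR ALTER PROCEDURE dbo.CreateLakeView
    @ViewName sysname,
    @FilePath varchar(1024)   -- e.g. a full https URL to a folder of parquet files
AS
BEGIN
    DECLARE @sql nvarchar(max) =
        N'CREATE OR ALTER VIEW ' + QUOTENAME(@ViewName) + N' AS
          SELECT *
          FROM OPENROWSET(
              BULK ''' + @FilePath + N''',
              FORMAT = ''PARQUET''
          ) AS [result];';
    EXEC (@sql);  -- dynamic SQL is needed because DDL can't be parameterised directly
END
```

Dynamic SQL is the key trick here: view names and paths can't be passed as ordinary parameters to DDL, so the proc assembles the statement as a string and executes it.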
Details around Serverless Stored Procs can be found here: docs.microsoft...
If you're getting started with Azure Synapse Analytics, check out advancinganalytics.co.uk/training to see how we can help you on your journey!
Hmm, this makes me want to learn a lot more.
Thanks so much.
Joining the data world has me learning every minute of my life 😫
Superb video!!! Do you have a video on Delta Lake creation!?
Would like this too
I think you can run something like EXEC sp_describe_first_result_set @tsql (minus the create or alter view) and the serverless pool will at least give you the inferred types back. However, I think the inference is still not ideal.
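Expanding on that suggestion: `sp_describe_first_result_set` takes the query text (without the CREATE OR ALTER VIEW wrapper) and returns the inferred column metadata. The URL below is a hypothetical placeholder:

```sql
-- Ask the serverless pool what types it infers for an OPENROWSET query.
DECLARE @tsql nvarchar(max) =
    N'SELECT *
      FROM OPENROWSET(
          BULK ''https://mylake.dfs.core.windows.net/data/sales/*.parquet'',
          FORMAT = ''PARQUET''
      ) AS [r];';
EXEC sp_describe_first_result_set @tsql;
-- Result set has one row per column: name, system_type_name, max_length, is_nullable, etc.
```

The returned `system_type_name` values could then be fed back into a generated WITH clause, though as the commenter notes, the inference itself may still be looser than you'd like.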
How do we create views from an Azure DevOps pipeline instead of a Data Factory pipeline?
Excellent video! I developed a solution in which I built external tables on the Serverless pool, but I'm not sure whether I should expose my data using Views or External Tables. Do you have any recommendations for when I should use one over the other?
6 months later I have the same question.
I would create a view on top of the external table.
Can we create a view by joining external tables?
Create a view on each external table and join them afterwards.
@@valentinloghin4004 Oh okay, so it becomes a three-step process:
external table, then a view per table, then another view joining them. Is there a better approach than the three-step process?
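To make the suggestion above concrete: nothing stops a single view from joining external tables directly, which collapses the three steps into two. Table and column names here are hypothetical:

```sql
-- Hypothetical two-step version: external tables, then one joining view.
CREATE OR ALTER VIEW dbo.vw_SalesWithCustomer AS
SELECT
    s.OrderId,
    s.Amount,
    c.CustomerName
FROM dbo.ext_Sales     AS s
JOIN dbo.ext_Customers AS c
    ON c.CustomerId = s.CustomerId;
```

Per-table intermediate views are still worth it when you want to fix column names or types once and reuse them in several joins; otherwise joining the external tables directly keeps things simpler.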
Amazing! So with stored procedures running under serverless SQL you can save a lot of money on queries that users would never run themselves, where you'd otherwise be paying while waiting for them. What is the limitation of this? The scale of processing? A dedicated SQL pool lets me add more pools if I need them, but serverless (SQL on-demand) doesn't?
Will synapse take over azure databricks for azure customers?
Hi, really great idea!! Do you have those scripts?
Great! Is there any way we can manage the inferred schema, assigning proper data types from the data files and reflecting those in the SQL view, instead of accepting varchar(8000) for everything?
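One way to do this is the WITH clause of OPENROWSET, which lets you declare the column names and types explicitly instead of relying on inference. The path and column definitions below are hypothetical:

```sql
-- Explicit schema: override inference with a WITH clause.
SELECT *
FROM OPENROWSET(
    BULK 'https://mylake.dfs.core.windows.net/data/sales/*.parquet',
    FORMAT = 'PARQUET'
) WITH (
    OrderId   int,
    OrderDate date,
    Amount    decimal(18, 2),
    Region    varchar(50)      -- sized column instead of varchar(8000)
) AS [r];
```

Wrapping this query in a view gives downstream BI tools proper types, and it avoids the oversized string estimates mentioned elsewhere in the thread.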
Can I connect to Azure synapse (Dedicated Pool) using Data studio and run PySpark commands on the Synapse tables?
How can we pass multiple values to a single parameter through the pipeline? For example, I want to create 5 views and pass those 5 view names via the pipeline, so that the stored proc gets executed 5 times to create those views. How can we do that?
You can, for example, with a ForEach loop, then add a list of parameters, one per view.
He covers it at 25:34.
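As an alternative to looping in the pipeline, one sketch is to pass all the view names as a single delimited parameter and split it inside the procedure. All names are hypothetical, and since serverless SQL supports only a subset of T-SQL, this splitting pattern should be verified in your environment:

```sql
-- Hypothetical sketch: one call, many views, split on commas inside the proc.
CREATE OR ALTER PROCEDURE dbo.CreateViewsFromList
    @ViewNames nvarchar(max)   -- e.g. N'vw_Sales,vw_Customers,vw_Orders'
AS
BEGIN
    DECLARE @name sysname,
            @sql  nvarchar(max),
            @rest nvarchar(max) = @ViewNames + N',',
            @pos  int;
    SET @pos = CHARINDEX(N',', @rest);
    WHILE @pos > 0
    BEGIN
        SET @name = LTRIM(RTRIM(LEFT(@rest, @pos - 1)));
        IF LEN(@name) > 0
        BEGIN
            -- Placeholder body; a real version would build the OPENROWSET query here.
            SET @sql = N'CREATE OR ALTER VIEW ' + QUOTENAME(@name) +
                       N' AS SELECT 1 AS placeholder;';
            EXEC (@sql);
        END
        SET @rest = SUBSTRING(@rest, @pos + 1, LEN(@rest));
        SET @pos  = CHARINDEX(N',', @rest);
    END
END
```

The pipeline ForEach approach from the reply above keeps the orchestration visible in the pipeline; this variant trades that visibility for a single stored-proc invocation.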
The video quality is really bad.
Sooo I shared the link before HD had finished processing. Give it 20 mins and you should have better options available!
Simon
I was about to ask if it was only me getting 360p.
@@allruiz It usually means the video is really fresh and HD processing hasn't finished yet.
2 hours later, HD still processing... we'll get there one day folks!
Assuming everything is a varchar(max) isn't OK. The query estimate will prevent the view from working on wide, long tables. The answer is to provide the correct schema in the OPENROWSET. How could you get the correct schema out of the parquet files?