I have been searching for these sessions for a long time, and my search ends here. Thanks a lot, Maheer brother, you are just amazing. I will surely mention your channel on LinkedIn once I clear my DP-203. Thanks again!
I am really loving your sessions.
Even though this is my first time learning Synapse, everything fits conceptually in my mind.
I appreciate your attitude toward teaching.
Thank you 😊
Amazing videos, it feels like storytelling and I'm able to understand quickly. Thanks for your effort and time. Great job!
Wonderful. I'm enjoying the series so far.
Thank you again for the amazing explanation.
Thank you ☺️
Excellent work. Keep the spirit high.
Thank you ☺️
Please also include predictive analytics using AI and ML, and also Power BI reporting in Synapse.
Top series, just what I need as a starter, thanks!
Excellent Channel. Easy to Understand. Thank you very much.
Welcome 🤗
Great work my Azure Rockstar.. 👍
Thank you 🙂
Great work as usual..👍
Thank you 😊
Great work..Thank you very much.
Thank you 😊
Why am I getting a Storage Explorer error while navigating through the ADLS Gen2 storage in the Linked section of Azure Synapse Studio?
Great series. 👍
Thank you 😊
Very good. Great job! Thank you !
Hey! Can you please attach the presentation link in the description? It would be easier to revise afterwards! Thanks.
How can we analyse the data in storage using a KQL script?
Hi,
Small query, hoping for an early reply.
I have a blob storage in which we have parquet files named file1.parquet, file1_old.parquet, file2.parquet, file2_old.parquet, and so on. I want only file1.parquet (and the like) to be picked up in the Data Flow, and the _old parquet files to be ignored.
I have tried the wildcard file?.parquet in the source settings, but it doesn't pick up two-digit names like file10.parquet. So I tried file??.parquet; still no luck. Could you please help with this ASAP?
Hello Roja,
Maybe you can use a case statement: if the file name is like file*_old.parquet then ignore it, else if it is like file*.parquet then pick that file up. Hopefully it will help :)
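Not from the video, just a sketch: if a Synapse notebook is an option instead of the Data Flow wildcard, you could list the folder and filter the names yourself. The path below is a placeholder, and mssparkutils availability in your pool is assumed:
%%pyspark
from notebookutils import mssparkutils
# List the folder, keep *.parquet but drop anything ending in _old.parquet
folder = 'abfss://mycontainer@mystorageaccount.dfs.core.windows.net/input/'
paths = [f.path for f in mssparkutils.fs.ls(folder)
         if f.name.endswith('.parquet') and not f.name.endswith('_old.parquet')]
df = spark.read.parquet(*paths)
Inside the Data Flow itself, another (unverified) option is to keep the wildcard as file*.parquet and add a Filter transformation that drops rows whose captured file name ends with _old.parquet.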
Don't we need to create mounts for our ADLS Gen2 container to read the data in Synapse Analytics?
We need it if the storage has some access-related restrictions. If you proceed through the playlist, you will see how to achieve that.
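For reference, a minimal sketch of mounting ADLS Gen2 in a Synapse notebook with mssparkutils; the container, account, linked service, and mount names below are all placeholders:
%%pyspark
from notebookutils import mssparkutils
# Mount the container through an existing linked service (names are placeholders)
mssparkutils.fs.mount(
    'abfss://mycontainer@mystorageaccount.dfs.core.windows.net',
    '/mydata',
    {'linkedService': 'MyLinkedService'}
)
# Mounted files are then readable through the synfs scheme, e.g.:
df = spark.read.parquet(f'synfs:/{mssparkutils.env.getJobId()}/mydata/file1.parquet')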
Could you show how to load data with parameters from ADLS, for example based on date, so it only loads the folder for the current month and not other months?
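Not covered in the video, but a minimal PySpark sketch of the idea; the abfss path is a placeholder, and a year/month folder layout is an assumption:
%%pyspark
from datetime import date
# Build the path for the current month only, e.g. .../2024/05/ (layout is assumed)
today = date.today()
path = f'abfss://mycontainer@mystorageaccount.dfs.core.windows.net/data/{today:%Y}/{today:%m}/'
df = spark.read.parquet(path)
display(df.limit(10))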
Thanks bro
Welcome 😄
In the Database section, why is the database created in the Built-in pool not seen?
For some reason when I try to run this code in the pyspark notebook I get an error:
code:
%%pyspark
df = spark.read.load('[filepath]', format='parquet')
display(df.limit(10))
Error:
AVAILABLE_WORKSPACE_CAPACITY_EXCEEDED:
I set up the Spark nodes to be 3-3 and small, as you did in another video, but now I can't get through this exercise.
It ran perfectly fine in the serverless SQL pool, though, coincidentally.
OK, I ran it again on the first notebook created in an older video and it worked, so I don't know what the issue was; possibly a user error or typo.
It may be an intermittent issue.
@WafaStudies thank you for the response, sir. 🙏 Can I ask another question? Since Azure is all paid service, how would you suggest I create a Synapse practice project without spending too much money, for the purpose of building a portfolio?
Available compute capacity exceeded: the Livy session has failed. Your job requested 12 vCores; however, the pool has 0 vCores.
How can I solve this problem?
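Not an official fix, but two things that often help: stop any other running Spark sessions that are holding the pool's vCores, or request a smaller Livy session with the %%configure magic before the first cell runs. The numbers below are illustrative:
%%configure -f
{
    "numExecutors": 1,
    "executorCores": 2,
    "executorMemory": "4g",
    "driverCores": 2,
    "driverMemory": "4g"
}
You can also raise the pool's maximum node count or enable auto-scale in the Spark pool settings.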
How to load data using SQL?
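One hedged example from a notebook (the abfss path is a placeholder): Spark SQL can query parquet files in storage directly, e.g.
%%pyspark
# Query the files with SQL directly; the path is a placeholder
df = spark.sql(
    "SELECT * FROM parquet.`abfss://mycontainer@mystorageaccount.dfs.core.windows.net/data/`"
)
display(df.limit(10))
In the serverless SQL pool, the equivalent idea is an OPENROWSET query over the same path.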
Very informative video, Maheer. I have one question: can we connect multiple storage accounts to the Azure Synapse Analytics workspace, so that we can store our files safely? I saw in this demo that without giving the storage account name you were able to write the file to the attached storage account. Please share your opinion.
Suppose client data is ingested into storage account 1, and while creating the Synapse workspace we used storage account 2. What is our approach to get the data from storage account 1 into Synapse Analytics for processing and transformation?
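A common approach, as a sketch rather than anything from the video (the account names, containers, and role assignments below are assumptions): grant the workspace managed identity, or your own AAD login, the Storage Blob Data Reader role on storage account 1, then read it directly over abfss and write the results back to the primary storage:
%%pyspark
# Read client data from storage account 1 (requires Storage Blob Data Reader on it)
df = spark.read.parquet('abfss://clientdata@storageaccount1.dfs.core.windows.net/raw/')
# ...transformations here...
# Write the processed output to the workspace's primary storage (storage account 2)
df.write.mode('overwrite').parquet('abfss://processed@storageaccount2.dfs.core.windows.net/curated/')
Alternatively, create a linked service for storage account 1 and copy the data across with a pipeline.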
💥💥🌀💥💥
Thanks bro
Welcome 😁