Great video, please continue making more!
Excellent, great video! 🎉
Good explanation 🙏🙏.
Great one bro, keep uploading real-time scenarios.
Sure, thank you!
Thanks for the reply @DataVerse_Academy. How do we get the input file name in Fabric? I am trying the input_file_name() function, but it's writing blank values.
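For anyone hitting the same thing, a minimal sketch of the usual PySpark pattern, with a placeholder path; note that input_file_name() has to be attached during the read itself, since adding it after joins or aggregations can come back blank:

from pyspark.sql.functions import input_file_name

# Attach the source file name while reading (placeholder path)
df = (spark.read
          .option("header", True)
          .csv("Files/current/")
          .withColumn("source_file", input_file_name()))
df.select("source_file").distinct().show(truncate=False)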
This guy is a champion!! Thanks so much :) :)
Great video. Crisp and clear.
Nice video, it helped me understand the Microsoft Fabric flow.
Thank you! 😊
This is a great video. Thanks!
Very good explanation.
Thank you 🙏
Really helpful. Thanks.
Great video, thank you so much!!!
Hi, I seem to get this error when I run the query after creating the bronze view: "Max iterations (20000) reached for batch Resolution, please set 'spark.sql.analyzer.maxIterations' to a larger value." Any help?
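If the bronze view really is that deeply nested, the setting named in the error can be raised for the session; a minimal sketch, with 40000 as an arbitrary example value (simplifying or materializing the view is usually the better long-term fix):

# Raise the analyzer iteration limit for this Spark session
spark.conf.set("spark.sql.analyzer.maxIterations", "40000")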
Hey Vishnu, this was a great explanation of a real-time scenario. I have a scenario where I want to recursively read multiple files and create a combined DataFrame before the bronze load. I tried different Spark examples but haven't been successful so far. Please let me know if you come across a solution for this. Thanks.
Sure, I will have a look.
@DataVerse_Academy Btw, this problem applies to Excel as input with multiple sheets. For other file types such as JSON, Parquet, CSV, etc., we can use the solution below:
df = spark.read.load(
    'abfss path',
    format='csv',
    header=True,
    recursiveFileLookup=True,
)
display(df.limit(10))
Super helpful! Do you have a video that shows the silver layer with an example of joining related data from heterogeneous data sources, with data cleansing and deduplication? :D Still, you are my hero Vishnu! Thank you for this video!
Sir, when I have e-commerce data in different domains and different systems, how can I import it?
Nice. Quick question: is the presentation slide shown for the architecture PowerPoint or some other software?
It's PowerPoint.
Hi,
I have some complex scalar user-defined functions defined in MySQL that I have to migrate to Fabric, but as of now Fabric doesn't support creating scalar user-defined functions in the warehouse. In this scenario, please let me know what alternative options I can use.
Thanks
You can build that logic inside a stored procedure. I know you will not be able to return a value the way a function does, but you can implement whatever logic you are trying to build.
If you can give me the context, I will provide the code as well.
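One alternative, if the logic can move out of the warehouse into a notebook: re-implement the scalar function as a PySpark UDF. A minimal sketch with a made-up scalar function, since the original MySQL UDFs aren't shown here:

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

@udf(StringType())
def customer_code(region, number):
    # Hypothetical stand-in for the migrated scalar logic
    return f"{region[:2].upper()}-{number:05d}" if region else None

df = spark.createDataFrame([("east", 42), (None, 7)], ["region", "number"])
df.withColumn("code", customer_code("region", "number")).show()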
Nice video. My question is: everything you showcased here can be done in Azure Synapse, so why choose Fabric? Is there anything Synapse can't do here? What's the striking difference that should make a business consider Fabric the front-runner ahead of Synapse in the future?
In Microsoft Fabric, everything is available in one place; you don't need to create separate services. One of the most important things is OneLake, where everything is integrated. If you have multiple departments, you don't need to build pipelines to move data from one department to another; just by granting access you can get the data.
Can you elaborate a little more in comparison with Azure Synapse?
Thanks for the real-time scenarios, but at no point did you present Gold_Product, and the file isn't in the shared folder. Thanks in advance for your feedback.
For Gold_Product, check the video at 54:34, "Loading Product Dimension - Gold Layer".
For files, please check the link in the description. Download the whole folder.
Thanks for the video...
Gold_Product is still not included in the code zip file.
Can you please include it?
Not as important, but at the same time, can you include the Run_Load notebook?
Hello Sir,
After line no. 23 the video jumps directly to line no. 77; the middle part is skipped, so I'm not getting the code in between. Can you help with that?
Excellent!!! Do you have this type of video for SCD2?
I think for the dimension merges, just wrap the MERGE inside an INSERT INTO and change the UPDATE clause of the MERGE accordingly.
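For anyone curious, a sketch of the same idea using the Delta API in PySpark: close the changed current rows with a MERGE, then append the new versions (the "insert" part). All table and column names here are hypothetical:

from delta.tables import DeltaTable
from pyspark.sql import functions as F

updates = spark.read.table("silver_product")    # hypothetical staging table
dim = DeltaTable.forName(spark, "dim_product")  # hypothetical dimension

# Step 1: expire the current version of rows whose attributes changed
(dim.alias("t")
    .merge(updates.alias("s"),
           "t.product_id = s.product_id AND t.is_current = true")
    .whenMatchedUpdate(
        condition="t.row_hash <> s.row_hash",
        set={"is_current": "false", "valid_to": "current_timestamp()"})
    .execute())

# Step 2: append a fresh current version for new and changed keys
current = spark.read.table("dim_product").filter("is_current = true")
new_rows = (updates.join(current, "product_id", "left_anti")
                   .withColumn("is_current", F.lit(True))
                   .withColumn("valid_from", F.current_timestamp())
                   .withColumn("valid_to", F.lit(None).cast("timestamp")))
new_rows.write.format("delta").mode("append").saveAsTable("dim_product")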
Where can I get more such data sources?
Hi Vishnu, this was a great video. I am getting an error while using * after Sales:
FileNotFoundError: [Errno 2] No such file or directory:
Please help me with this.
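That FileNotFoundError is Python's local-file error, so it usually means a local API such as open() or pandas is reading the path; wildcards like Sales* only resolve inside Spark readers. A minimal sketch with a hypothetical pattern:

# Glob patterns work in Spark readers, not in local file APIs
df = spark.read.option("header", True).csv("Files/Sales*.csv")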
Sir, can't this project be done in the free version of Microsoft Fabric?
It was done entirely in the free version.
Sir,
How can we build a JDBC/pyodbc connection between a Fabric Data Warehouse and a Fabric notebook?
I have been searching for this for a long time, but unsuccessfully.
But why do you need it? What is the use case you are trying to implement?
1. Initially, we get data from multiple sources and sink it into one warehouse (raw data).
2. Now we want to extract data from this warehouse (raw data) into another warehouse (transformed data) through a notebook, where we will perform our transformation logic.
Hence, I want to build the connection between the warehouse and the notebook using only JDBC or pyodbc.
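One community pattern that may help, with the caveat that the endpoint, database, and token audience below are all assumptions: fetch a Microsoft Entra token in the notebook and pass it to pyodbc through the SQL_COPT_SS_ACCESS_TOKEN connection attribute. Fabric's native Spark connector, spark.read.synapsesql("<warehouse>.<schema>.<table>"), is another option that avoids pyodbc entirely.

import struct
import pyodbc
from notebookutils import mssparkutils  # available in Fabric notebooks

server = "<endpoint>.datawarehouse.fabric.microsoft.com"  # hypothetical SQL endpoint
database = "RawDataWarehouse"                             # hypothetical warehouse name

# Assumption: the Power BI audience token is accepted by the warehouse SQL endpoint
token = mssparkutils.credentials.getToken(
    "https://analysis.windows.net/powerbi/api").encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token)}s", len(token), token)

SQL_COPT_SS_ACCESS_TOKEN = 1256  # pyodbc connection attribute for access tokens
conn = pyodbc.connect(  # assumes the SQL Server ODBC driver is in the runtime
    f"DRIVER={{ODBC Driver 18 for SQL Server}};SERVER={server};DATABASE={database}",
    attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct})

cursor = conn.cursor()
for row in cursor.execute("SELECT TOP 10 * FROM dbo.raw_sales"):  # hypothetical table
    print(row)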
When I click on New Semantic Model, I am not able to see all the tables, so I can't select one table or all of them.
Because of that, I am not able to create the semantic model.
Could you please help me here?
Thanks
What's the error you are getting?
@DataVerse_Academy Thanks for your response.
I am not getting any error, but I am not able to select any table to create my semantic model.
Under Select All, it doesn't give me any table names to choose from.
Please try this once:
Settings -> Admin portal -> Tenant settings -> Information protection -> "Allow users to apply sensitivity labels for content" -> enable this.
Then you will be able to create the semantic model through the lakehouse.
@vinaypratapsingh5815 Hi, I am facing the same issue: only 3 tables are shown for selection, and the others aren't available for semantic model creation. Were you able to solve this problem?
Hello sir,
Thank you so much for providing these productive videos.
Today I faced a challenge whose solution I couldn't find elsewhere: how to extract data from SAP HANA Cloud to Microsoft Fabric (cloud-to-cloud connectivity). Could you please help me here?
The product script is missing from the data code file; please upload it.
Hi Vishnu, when creating a semantic model, Fabric gives an error saying "Unexpected error dispatching create semantic model to portal". Do you have any idea why? Thanks
Do you have all the required access to create a semantic model in the workspace?
@DataVerse_Academy Yes, I have a free trial account. I had two workspaces for PySpark training sessions and one for your project. I had created a semantic model before when I did the tutorials, but no luck creating one with your project. 😒
The solution for this is to open the SQL analytics endpoint instead of the lakehouse; then you will be able to create the model.
Microsoft has just recently changed some settings.
Another solution:
Settings -> Admin portal -> Tenant settings -> Information protection -> "Allow users to apply sensitivity labels for content" -> enable this.
Then you will be able to create the semantic model through the lakehouse.
I have a SQL Server stored procedure which updates, deletes, and merges data into a table. How do I convert the stored procedure to a PySpark job? Is it possible to update a table in Fabric using PySpark? Please make a video on this topic.
It's very easy to do the same thing in PySpark; we can do all the things you mentioned. I am on a break for a couple of months, but I am going to start creating videos again very soon.
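In the meantime, a minimal sketch of all three operations on a lakehouse Delta table via the Delta API; table and column names are hypothetical:

from delta.tables import DeltaTable

tbl = DeltaTable.forName(spark, "sales")  # hypothetical lakehouse table

# UPDATE rows in place
tbl.update(condition="status = 'open'", set={"status": "'closed'"})

# DELETE rows
tbl.delete("order_date < '2020-01-01'")

# MERGE (upsert) from a staging DataFrame
staging = spark.read.table("sales_staging")  # hypothetical staging table
(tbl.alias("t")
    .merge(staging.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())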
@DataVerse_Academy Please do create a video when you are back from your break. Thanks
Why did you create two folders, "current" and "archive", in Files?
To archive the processed files by moving them from the current folder to the archive folder.
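A minimal sketch of that archive step, assuming the Files/current and Files/archive folders from the video:

from notebookutils import mssparkutils  # built into Fabric notebooks

for f in mssparkutils.fs.ls("Files/current"):
    # the final True creates the destination folder if it doesn't exist yet
    mssparkutils.fs.mv(f.path, "Files/archive/" + f.name, True)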
Thank you for the answer.
@DataVerse_Academy I have one more question: when should we use spark.read.table() and when spark.sql()?
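Short version, for what it's worth: both return a DataFrame. spark.read.table() pulls a whole table; spark.sql() is for when the read itself needs SQL logic. A quick sketch with a hypothetical table name:

# Equivalent for a full-table read
df1 = spark.read.table("bronze_sales")
df2 = spark.sql("SELECT * FROM bronze_sales")

# spark.sql() earns its keep when you want SQL in the read itself
df3 = spark.sql(
    "SELECT region, SUM(amount) AS total FROM bronze_sales GROUP BY region")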
Where can I get the files to follow along?
It’s there in the description.
You can find the code in the same folder.
Hey, this is a great video, but I am not able to open the code files you have provided. I am missing the code you used.
Hello, thanks a lot for the tutorial!
But you forgot to upload the Gold_Product code in the zip file; can you upload it? Thanks
Sure
@DataVerse_Academy Please upload it, bro.
Is this a completely free course?
Yes