Excellent explanation with clear examples how to implement end to end data flow from source thru Pipeline to the PBI Visual. THANK YOU
You've just heightened my desire to adopt MS Fabric as the primary tool/platform on my new journey to Data Engineering. This tutorial was as smooth, unambiguous and interesting as anything should be. Thank you.
This work is worth more than rubies and gold. Keep it up, bro! Thank you.
I have not used Fabric yet.
I just want to know whether they charge, or whether I can do this on the free trial.
Excellent. Highly appreciated
Thank you. Many projects/videos tell us to schedule the data pipeline for the latest data but don't show how. You did. I'm glad. This is really useful for me. Also, you explain things really well.
Thank you for the kind words
Insightful. Thanks a lot for the work and the sharing!
Amazing! I will definitely try this project in Microsoft Fabric step by step... Thanks for sharing, very useful 😊😊. Keep sharing 🤟🤟.
Awesome, thanks for the end-to-end project. Need more like this. It really widened my knowledge horizon. Thank you very much. Looking forward to more like these.
I'm glad you found it helpful 👍
Great content, congratulations!
For future videos on Microsoft Fabric, it would be interesting to see a logical update process instead of append and also the formation of a star schema between layers.
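For anyone curious what a "logical update" (upsert) looks like compared to a plain append, here is a minimal pure-Python sketch of the idea; the `order_id` key and the row shapes are hypothetical. In a Fabric lakehouse this would typically be expressed as a Delta Lake `MERGE` against the table rather than hand-rolled code.

```python
def upsert(target, updates, key="order_id"):
    """Update rows whose key already exists in target; insert the rest."""
    by_key = {row[key]: row for row in target}
    for row in updates:
        by_key[row[key]] = row  # overwrite if the key exists, add if new
    return list(by_key.values())

gold = [{"order_id": 1, "status": "open"}]
incoming = [
    {"order_id": 1, "status": "shipped"},  # existing key -> update
    {"order_id": 2, "status": "open"},     # new key -> insert
]
gold = upsert(gold, incoming)
```

Unlike append, re-running the same batch leaves the table unchanged, which is what makes the load idempotent.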
very good content, you earned a new subscriber. Could you please just explain the part where you set a filter for the start date before appending to the gold layer? You say it's for avoiding duplication, but I'm not sure I understand what you mean by this.
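To illustrate the duplication point in the question above: if a scheduled run blindly appends everything it pulls, rows that are already in the gold table get written a second time. Filtering the pull down to rows newer than gold's current maximum date makes the append safe to re-run. A minimal pure-Python sketch (the table contents and column names are hypothetical, not taken from the video):

```python
from datetime import date

# Gold already holds data up to 2024-01-02.
gold = [
    {"day": date(2024, 1, 1), "total": 10},
    {"day": date(2024, 1, 2), "total": 12},
]

# A fresh pull from silver overlaps with what gold already has.
silver_pull = [
    {"day": date(2024, 1, 2), "total": 12},  # already in gold
    {"day": date(2024, 1, 3), "total": 15},  # genuinely new
]

def append_incremental(gold, pull):
    """Append only rows strictly newer than gold's current max date."""
    cutoff = max(row["day"] for row in gold)
    new_rows = [row for row in pull if row["day"] > cutoff]
    return gold + new_rows

gold = append_incremental(gold, silver_pull)
# Re-running with the same pull adds nothing, so no duplicates appear.
gold = append_incremental(gold, silver_pull)
```

Without the cutoff filter, the second run would append the overlapping 2024-01-02 row again.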
Fantastic intro video, thx!!!
Thanks!
Hi, thanks for your excellent training video. I followed it step by step, but when I tried to read the JSON file in the Bronze layer I faced this error: "Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the referenced columns only include the internal corrupt record column (named _corrupt_record by default). For example: spark.read.schema(schema).csv(file).filter($"_corrupt_record".isNotNull).count() and spark.read.schema(schema).csv(file).select("_corrupt_record").show(). Instead, you can cache or save the parsed results and then send the same query. For example, val df = spark.read.schema(schema).csv(file).cache() and then df.filter($"_corrupt_record".isNotNull).count()." Thanks for your help in advance.
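A common cause of this error (an assumption, since the file itself isn't shown here) is that the JSON is a pretty-printed array spanning multiple lines, while Spark's JSON reader defaults to JSON Lines, i.e. one complete object per line, so every line of the file fails to parse and lands in `_corrupt_record`. The snippet below mimics that line-oriented behaviour with the standard `json` module:

```python
import json

records = [{"id": 1}, {"id": 2}]
pretty = json.dumps(records, indent=2)               # multi-line JSON array
lines = "\n".join(json.dumps(r) for r in records)    # JSON Lines format

def parse_as_json_lines(text):
    """Mimic a line-oriented JSON reader: parse each line independently."""
    good, corrupt = [], []
    for line in text.splitlines():
        try:
            good.append(json.loads(line))
        except json.JSONDecodeError:
            corrupt.append(line)  # in Spark this row becomes _corrupt_record
    return good, corrupt

good_pretty, corrupt_pretty = parse_as_json_lines(pretty)  # nothing parses
good_lines, corrupt_lines = parse_as_json_lines(lines)     # everything parses
```

If that is the cause, reading the file with `spark.read.option("multiLine", "true").json(path)` usually resolves it, since that option tells Spark to parse the file as a single multi-line JSON document.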
Can you please let me know the licensing cost of Fabric/Power BI for the users below:
users who will create reports using Copilot, and
users who will just write prompts and derive insights using Copilot.
Great video. I am subscribing. Thank you. However, I am not seeing the tables when I go to SQL. Any suggestions?
Hi, I think this is currently a bug with schema-enabled lakehouses; I understand there's an open ticket for it.
Super video, useful as always... 😊
Thank you!
Could you please tell me what are a few pre-requisites to know before watching this video ? Thanks.
I discuss the pre-requisites at 2:16
@pathfinder-analytics Thanks
I don't see the tenant settings in the admin portal. I see only 3 options: 1. capacity settings, 2. refresh summary, 3. help + support.
This probably means you’re not an admin in your Fabric workspace, if you’re using your organisational account you’ll need to speak to your team to assign you the relevant permissions
@pathfinder-analytics I've utilized the free trial of Fabric based on your previous video - thank you! I am still not able to access the map visuals setting, though. Do you know if it is possible on the Fabric free trial? Thanks.
Same for me, and I am also using a free trial account.