![KratosBI](/img/default-banner.jpg)
KratosBI
United States
Joined Jul 28, 2019
Welcome to Kratos BI! 🌟 Your premier destination for mastering Microsoft Fabric and Power BI. With over two decades of data expertise, we bring you the latest insights, tips, and tricks to transform your data into compelling stories.
📊 What We Offer:
In-depth tutorials on Power BI and Microsoft Fabric
Real-world applications and case studies
Expert advice on data visualization and storytelling
Live Q&A sessions to answer your burning questions
Community-driven content tailored to your needs
🔍 Why Subscribe? Stay ahead of the curve with our cutting-edge content, designed for data professionals by a data professional. Whether you’re a beginner or an expert, you’ll find valuable resources to enhance your skills and make an impact with your data.
🔗 Connect with Us:
Join our community of data enthusiasts
Share your experiences and learn from peers
Get exclusive access to resources and downloads
✅ Subscribe now and unlock the full potential of your data with Kratos BI!
WHY are there DAX Columns? #MicrosoftFabric #PowerBI
141 views
Videos
What ARE DAX Columns? #MicrosoftFabric #PowerBI
239 views • 12 hours ago
Transformers! SQL vs Python vs Power Query??? #MicrosoftFabric
467 views • 21 days ago
What language do you choose when looking at transforming data? Power Query? Python? SQL? This video answers that question and helps reassure you on your course of action!
But WHY do you Scale Out??? #MicrosoftFabric #PowerBI
186 views • 21 days ago
Answering the question "Why should we scale out in Power BI???"
Excel to Power BI with Automatic Refresh
722 views • a month ago
Unlock the full potential of your data with our latest tutorial on KratosBI! Learn how to seamlessly transition from Excel to Power BI and set up automatic refreshes to keep your insights up to date. This tutorial is perfect for professionals and enthusiasts alike who want to enhance their data analytics skills. 📊 What You'll Learn: How to import Excel data into Power BI. Setting up automatic d...
End to End SQL to Fabric Build - 42 Minutes!!!
791 views • a month ago
End to End Best Practices in Microsoft Fabric
837 views • a month ago
Crafting an End-to-End Microsoft Fabric Solution
682 views • a month ago
Monitoring Hub Usage and Needs in #MicrosoftFabric
252 views • 2 months ago
Episode 24: Semantic Model Security - Safeguarding Your Data in Microsoft Fabric
189 views • 2 months ago
Episode 23: Q&A Settings - Enhancing Interactivity in Microsoft Fabric
63 views • 2 months ago
Episode 22: Template App Settings - Simplifying App Creation in Microsoft Fabric
99 views • 2 months ago
Episode 21: Gen1 Dataflow Settings - Optimizing Data Management in Microsoft Fabric
92 views • 2 months ago
Episode 20: Admin API Settings - Streamlining Administration in Microsoft Fabric
92 views • 2 months ago
Accelerate Your Data Movement with Microsoft Fabric Copy Assist
501 views • 2 months ago
Episode 19: Developer Settings - Empowering Creators in Microsoft Fabric
83 views • 2 months ago
Episode 18: Dashboard Settings - Is this the MOST dangerous Tenant Setting of Microsoft Fabric?
115 views • 3 months ago
Episode 17: Audit and Usage Settings - Keeping Track in Microsoft Fabric
98 views • 3 months ago
4 Ways to find the RIGHT report in Power BI
248 views • 3 months ago
Episode 16: R and Python Visual Settings - Unleashing the Power of Coding
78 views • 3 months ago
How to Create a Fabric Entra ID Admin Group
218 views • 3 months ago
Data Storytelling - UPDATE - #PowerBI and #PowerPoint
263 views • 3 months ago
Copilot in Power BI Desktop First Look
3K views • 3 months ago
Episode 15: Power BI Visuals - Enhancing Your Microsoft Fabric Experience
135 views • 3 months ago
Unlock Seamless Data Integration: Snowflake Mirroring in Microsoft Fabric Explained!
1.3K views • 3 months ago
Episode 14: Mastering Integration Settings - Part 3
70 views • 3 months ago
Episode 13: Deep Dive into Integration Settings - Part 2
52 views • 3 months ago
Episode 12: Integration Settings in Microsoft Fabric - Part 1
104 views • 3 months ago
😅
Interesting demo. I was interested to learn how the data sources were being referenced and used between the Bronze, Silver, and Gold layers.
Measures created in the datamart won't be available in Power BI Desktop if you connect via import mode.
I'm just here for the battle 😉
Let's GO!!!
I just did it yesterday and used it to authenticate in a Copy Activity in a Data Pipeline from a SharePoint Online list, but it didn't work. Error: invalid credentials 😞 On the group members, great advice: you have to be a member of the group, not only an owner. 👍
How do I embed my Power BI dashboard in a company intranet?
Thank you Chris for this wonderful setup of Azure Log Analytics with Power BI. Can I customise my Power BI dashboard, in the sense of taking only part of the data from Log Analytics and creating a dashboard from it? Thank you very much.
If you are using shared datasets as a source, you would probably need them to create dimensions and groups that are not in the source datasets.
The logic for the shared dim can still move up to either power query or one of the data loads.
@ChrisWagnerDatagod, this is not always possible when upstream datasets are shared for self-service reports. If there's only one team, we always try to move it to ETL instead of Power Query too.
Hi, can't wait for the proof why. I know there are problems when using big tables, where there can be better compression. However, I would like to see the difference when using a date table or some other dimension. I did not see big differences there.
20 reasons to NOT use DAX columns is on the way.
Isn’t the data already imported? When you schedule the refresh, does it have to move data from the datamart to the dataset? I’m hoping I can build one datamart, import everything one time, and not have to import it again into different datasets. Ultimately I need 4 different datasets, but they all use a majority of the same tables. I don’t want to refresh 4 datasets when I’m just importing the same data 4 times.
Could you do a video about migrating to an F64 from a P1? Extra credit if they’re in different regions. :)
Thank you all, that was a great session. So useful for the teams.
Thanks Chris, I'm trying to migrate a legacy SSAS setup over to Azure and this is a possible solution. I almost missed the key part of this: Tabular Editor and its ability to read/edit an SSAS data source and then export to an AAS dataset; the Power BI bit is optional IMO.
I would highly recommend PBI Premium vs AAS at this point. AAS is in the legacy category of solutions at this point.
bro it's fake tho
That cut at 60º below zero, he probably couldn't keep a straight face lol
Thanks. Can you share the link?
Here you go! github.com/microsoft/semantic-link-labs/releases/tag/0.5.0 It's in the description but, I don't know about you, I find descriptions much harder to access and use on Shorts.
thank you so much
😂😂😂😂😂😂
Would this feature help if we have no users? Meaning, will this feature help with big, ugly refreshes that occasionally time out?
Not really. I could be wrong, but I would wager this would not help. Sorry. Look at incremental refreshes, or even programmatic table-by-table, partition-by-partition refreshes.
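For the programmatic route mentioned above, the enhanced refresh endpoint of the Power BI REST API accepts a list of table/partition objects, so a big refresh can be broken into small, retryable units. A minimal sketch; the workspace ID, dataset ID, table, and partition names are placeholders, and you still need a valid Entra ID access token to actually send the request:

```python
import json
import urllib.request

PBI_API = "https://api.powerbi.com/v1.0/myorg"

def build_partition_refresh(workspace_id, dataset_id, table, partition):
    """Build the URL and body for an enhanced refresh of a single partition."""
    url = f"{PBI_API}/groups/{workspace_id}/datasets/{dataset_id}/refreshes"
    body = {
        "type": "full",
        "commitMode": "transactional",
        # Target just one partition instead of the whole semantic model.
        "objects": [{"table": table, "partition": partition}],
    }
    return url, body

def send_refresh(token, url, body):
    """POST the refresh request; the service answers 202 Accepted when queued."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return urllib.request.urlopen(req)
```

Looping `send_refresh` over a list of partitions lets each piece succeed or fail independently, which is exactly what helps with refreshes that occasionally time out.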
Congratulations!
Thanks for this, my kids think I am famous now that I was mentioned in a YouTube video. I did some more digging, including asking David Browne on the CAT team. Here is a summary of what's going on with scale-out behind the scenes, from a technical perspective (well, at least as much as Microsoft was willing to say). I do find it interesting that there are non-intuitive features in Fabric that Microsoft won't explain because they are considered proprietary. It comes down to "trust me". Things are definitely getting complex in the Fabric world.
Capacity Units (CU) and v-cores:
- CU is directly related to v-cores and the size of a premium capacity.
- CU is limited and additive within a single capacity.
- CU is primarily tied to CPU usage and is shared across the entire capacity.
Nodes and resources:
- A capacity is made up of one or more nodes (essentially VMs).
- Each node has its own allocated resources, including CPU, memory (RAM), and storage.
- While CU is additive across the entire capacity, other resources like storage and some aspects of RAM are specific to each node.
Semantic model hosting:
- Conceptually, each semantic model (dataset) is hosted on a different Analysis Services server with a limited amount of RAM (think Docker container or virtual machine).
- This is an approximation; the actual implementation differs but aims to behave similarly.
Scale-out feature:
- This feature operates within a single capacity, not by creating new capacities.
- It doesn't increase total CU or overall capacity utilization.
- Instead, it separates operations (like refresh and query) onto different nodes within the capacity.
Benefits of scale-out:
- Reduces resource contention, especially for I/O- and memory-intensive operations.
- Allows a semantic model to use more resources than are available on a single node/server.
- Each operation (refresh/query) can utilize the full storage and RAM resources of its respective node without competing.
Impact on performance:
- While it doesn't reduce overall CU usage, it can significantly improve performance by reducing bottlenecks (such as I/O rather than CPU).
- This is especially beneficial for large models or resource-intensive operations.
RAM considerations:
- There's still some complexity around how RAM fits in as a "resource". This undocumented part sounds like the "secret sauce" behind the scenes of Power BI.
- While total RAM is capped at the capacity level (e.g., 50 GB for a P2), the scale-out feature allows for more efficient use of available RAM across nodes.
- The exact implementation of RAM allocation and usage is not fully documented and may be more nuanced than a simple per-node allocation (again, secret sauce).
Conceptual model vs. actual implementation:
- The conceptual model (separate SSAS servers for each dataset) helps understand the behavior, but the actual implementation is more complex and not fully documented.
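For anyone who wants to try the feature described above, query scale-out is configured per semantic model through the datasets endpoint of the Power BI REST API. A sketch assuming the documented `queryScaleOutSettings` shape; the workspace and dataset IDs are placeholders:

```python
def build_scale_out_settings(workspace_id, dataset_id,
                             max_read_only_replicas=-1,
                             auto_sync=True):
    """Build the PATCH URL and body that enable query scale-out on a model.

    max_read_only_replicas = -1 lets the service choose the replica count.
    """
    url = (f"https://api.powerbi.com/v1.0/myorg/groups/"
           f"{workspace_id}/datasets/{dataset_id}")
    body = {
        "queryScaleOutSettings": {
            "maxReadOnlyReplicas": max_read_only_replicas,
            # Keep the read replicas in sync with the read-write replica
            # automatically after each refresh.
            "autoSyncReadOnlyReplicas": auto_sync,
        }
    }
    return url, body
```

Sending this body as a PATCH (with a bearer token) turns scale-out on; setting `maxReadOnlyReplicas` to 0 turns it back off.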
Never a DAX calculated column! 💯💯💯
That's RIGHT! NO DAX Columns!!!
This is affirming to hear! My journey into data took me from a CRM to a no-code GUI to query our database, to Power BI, Power Query, and now SQL. Next will be Python and then who knows?!
Love that journey. Life is quite amazing!
Love your videos. How can we load a table into the dbo schema in Fabric? The table is available, but we are not able to query it because it is missing from the dbo schema. Can you help me?
Learning basic SQL even helps with concepts in Power Query and other languages. You may not write SQL, but the concepts are very transferable.
I agree with this. SQL has helped my DAX and vice versa.
Power Query of course! 😊 Or start with PQ and move up to more code if you need to.
Yes. I agree with this. If PQ meets your needs, stay there. BUT, the second you start to run into limitations or issues, you need to start planning for SQL or Python.
Almost getting boring
HALLELUJAH! Sir, this is the SINGLE most helpful tutorial I have found for autorefresh. I have struggled with this for a solid three months now battling with the gateways. THANK YOU!
Glad it helped!
#BDE #ftw
Can I create a custom sunburst visual from this?
I would look at Deneb. Deneb is a friendlier way to build custom visuals.
Great video! One of my main concerns recently has been migrating from my existing semantic models using import mode to new ones using Direct Lake. I was worried I would need to rewrite all my relationships, measures, roles, etc. Looks like Tabular Editor could save me the headache.
Oh 100%, Tabular Editor makes the migration MUCH easier. It's not point and click, but it's leagues better than what I feared as well.
Excellent video, thank you. The issues we are having seem to be related to the limitations of service principal when using Fabric API. For example, we can run GET .../v1/workspaces/{id}/items/{id} no problem but then trying to list tables in a Lakehouse (GET .../v1/workspaces/{id}/lakehouses/{id}/tables) we get "The operation is not supported for the principal type" message. Do you have any experience with that issue?
This specific issue? No. This type of issue? Yes. When a service principal doesn't directly allow for access, we use the service principal's Entra ID group for access. This typically works, but I am not sure it works in every situation.
@ChrisWagnerDatagod We use the group too, but I think I have identified the reason. Service principals are not (properly) supported in the Fabric API. If you search for idea e5ec086c-f1ce-ee11-92bd-6045bdbce644 (Fabric API - Service Principal Support) in "Fabric Ideas" you will find the relevant information.
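For anyone hitting the same wall, the two calls from the thread look like this. The IDs are placeholders, and, per the discussion above, whether a service-principal token is accepted currently depends on the endpoint, not on your code:

```python
FABRIC_API = "https://api.fabric.microsoft.com/v1"

def item_url(workspace_id, item_id):
    """Items endpoint; this one worked with a service principal in the thread."""
    return f"{FABRIC_API}/workspaces/{workspace_id}/items/{item_id}"

def lakehouse_tables_url(workspace_id, lakehouse_id):
    """Lakehouse tables endpoint; this one returned 'The operation is not
    supported for the principal type' when called with a service principal."""
    return (f"{FABRIC_API}/workspaces/{workspace_id}"
            f"/lakehouses/{lakehouse_id}/tables")
```

Both are plain GETs with an `Authorization: Bearer <token>` header; if the second returns the principal-type error, the fix is on Microsoft's side (the Fabric Idea referenced above), not in the request.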
Thanks !
Welcome!
Amazing 42 minutes of my time! Thank you for putting this video up. I have also been struggling to understand how to integrate all the latest features in Power BI. This end-to-end project of yours is an eye-opener on what MS Fabric can do.
So glad that you enjoyed it and found it beneficial. Stay tuned for more!
I'll give it a go thanks
Let me know how it works for you.
You are trolling Satya come on he is great for Microsoft
:) a little bit. hahaha
Re: Git integration. I just want a few buttons in Desktop: clone, commit, pull, merge, etc. The current implementation pathway feels way too convoluted.
That would be quite nice.
Like so many Fabric features, I don't understand how this one works. As I interpret it, this "scale out" happens on a single capacity. But a single capacity has a set amount of CPU (CU) resources. This is measured as interactive CU, as it is linked to DAX query execution. Interactive CU is additive within a smoothing window, so if one DAX query consumes 10% of a capacity, two similar queries would consume 20% of the capacity. So I don't see how this feature actually reduces capacity utilization and potential throttling. Can anyone explain the mechanics of how this works, or where my logic is incorrect?
Great question. I had to create a video to answer this. ua-cam.com/video/lk4bmBQ-FXM/v-deo.html It goes live on Monday 6/24/2024 at 7am Central.
Good one
Thanks
"It's the end of the trial as I know it, and I feel fine" -- REMish
Ha! Love it.
Lovely. I prefer XP.
So do I
Is there a way to have incremental refresh on Dataflow Gen2? I'm using Dataflow Gen2 to load tables into the warehouse but can't find a way to refresh them incrementally.
A second question. With dataflows would one still require Analysis Services?
If a small organization has a single data analyst publishing reports, would the old method of creating reports using Power BI Desktop be better than using dataflows? I guess once you have teams, that's where dataflows shine. Awesome explanation. Thank you.
There are a few factors I would consider when using Dataflows / Datamarts. 1. Team size: multiple team members can work together more easily using common assets. 2. Simplicity: when you manage a large number of assets, it is often easier to manage one load of a dataflow vs. loading the same table into 100 datasets. 3. Performance: loading 1 BIG table in a dataflow can be much easier than loading it 100 times. 4. Consistency: even if the table is small, 100 loads of the same object will happen at different times, each with a failure %, and all of that has to be managed.
@ChrisWagnerDatagod Thank you for this explanation.
Is the only way to refresh the data by clicking Refresh? Would it not refresh automatically like Power BI does?
It's a pity this is only for Premium workspaces. Also, is there a way to identify users who have downloaded a semantic model from the service?
BDE!!!!! Congrats.
Congratulations!! 🎉
I actually do like them but for a specific purpose: Scatter Plots
My use case is that Power BI needs to access a REST API that currently requires a separate token, but that token expires, so it has to be reset manually. Could a service principal be used to refresh the dataset via the REST API? Note that the REST API can use Entra ID for identification. I am not sure whether that is possible and, if so, how to set it up. Any ideas or hints here? Thanks and BR
It's possible that this would work. I would test it on the API to see. I'd put it at 50:50 whether it works.
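As a starting point for that test, a service principal can at least trigger a dataset refresh through the Power BI REST API using a client-credentials token. A sketch: the tenant ID, client ID, secret, and workspace/dataset IDs are placeholders, and the tenant admin must allow service principals to use the Power BI APIs:

```python
import json
import urllib.parse
import urllib.request

def build_token_request(tenant_id, client_id, client_secret):
    """Client-credentials request against the Entra ID v2.0 token endpoint."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    form = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # Default scope for the Power BI service.
        "scope": "https://analysis.windows.net/powerbi/api/.default",
    })
    return url, form

def get_token(tenant_id, client_id, client_secret):
    """Exchange the client credentials for an access token."""
    url, form = build_token_request(tenant_id, client_id, client_secret)
    with urllib.request.urlopen(url, data=form.encode("utf-8")) as resp:
        return json.load(resp)["access_token"]

def trigger_refresh(token, workspace_id, dataset_id):
    """Kick off a dataset refresh; the service responds 202 Accepted when queued."""
    url = (f"https://api.powerbi.com/v1.0/myorg/groups/"
           f"{workspace_id}/datasets/{dataset_id}/refreshes")
    req = urllib.request.Request(
        url,
        data=b"{}",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

Note this only covers the service principal refreshing the dataset; whether the dataset's own connection to the third-party REST API can also authenticate with Entra ID depends on that API and the connector used, which is the 50:50 part.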