Which Microsoft Fabric SQL Analytics Endpoint should you use?

  • Published Sep 30, 2024
  • Both the Microsoft Fabric lakehouse and warehouse have a SQL Analytics Endpoint. But they have different limitations. Which one is the right one to use? Patrick breaks it down!
    Better together: the lakehouse and warehouse
    learn.microsof...
    #MicrosoftFabric #SQLEndpoint #GuyInACube

COMMENTS • 31

  • @roberttyler2861
    @roberttyler2861 1 month ago +5

    I don't know why, but when I compare this tool with Databricks, my mind just can't get used to the UI/UX (and I get confused).

  • @PalantirM
    @PalantirM 1 month ago +10

    Still missing the most important question: why choose a data warehouse or a lakehouse at all? If you can do the same things in both artifacts and there are no performance differences, this creates a lot of confusion for our clients.

    • @titoprakasa513
      @titoprakasa513 1 month ago +1

      @@PalantirM The data warehouse is relational, while the lakehouse can also store unstructured data.
      If you want to use Spark to transform the data, the lakehouse is the better option.

    • @PalantirM
      @PalantirM 1 month ago +2

      @@titoprakasa513 Structured vs. unstructured mostly matters for pre-processing data. And most end users, business analysts, won't use Spark. Spark is for data engineers and data scientists, not for providing timely decision reports to C-levels.
      In the end, we need to address what the final users want. And if you don't have unstructured data (or it is already managed some other way), there is no distinction for end users between lakehouse and warehouse.

    • @inkuban
      @inkuban 1 month ago

      The lakehouse is relatively cheaper and less resource-hungry. Its column-oriented nature makes it better suited to analytics.

    • @PalantirM
      @PalantirM 1 month ago

      @@inkuban Do you have any evidence for that, specific to Fabric? What you're writing is true of lakehouses in general.
      But Fabric is always on, and that's what you pay for. Whether you use a Fabric lakehouse or a Fabric warehouse, the cost is the same. If you want to pay less, you need to stop the capacity (which is very hard).

    • @JetJockey87
      @JetJockey87 1 month ago

      @@PalantirM Fabric does not have to be always on. You can easily make a Logic App in your Azure tenant to switch the capacity off during downtime.
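A minimal sketch of the call such a Logic App (or any scheduled script holding an Azure AD token) would make. The `Microsoft.Fabric/capacities` resource type with its suspend/resume actions comes from the Azure Resource Manager API; the api-version and all IDs below are illustrative assumptions, so check the current ARM reference before relying on this.

```python
# Sketch: build the ARM URL used to suspend (pause) or resume a Fabric
# capacity on a schedule. Resource type and api-version are assumptions
# based on the ARM API for Fabric capacities; verify before use.

def capacity_action_url(subscription_id: str, resource_group: str,
                        capacity_name: str, action: str,
                        api_version: str = "2023-11-01") -> str:
    """Return the management URL a Logic App would POST to."""
    if action not in ("suspend", "resume"):
        raise ValueError("action must be 'suspend' or 'resume'")
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.Fabric/capacities"
        f"/{capacity_name}/{action}"
        f"?api-version={api_version}"
    )

# A nightly flow would POST to the "suspend" URL after hours and to the
# "resume" URL in the morning (placeholder IDs shown):
pause = capacity_action_url("<sub-id>", "<rg>", "<capacity>", "suspend")
```

Pausing a pay-as-you-go F SKU stops its compute billing while suspended, which is the mechanism behind the "switch it off during downtime" savings mentioned above.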

  • @thomasivarsson1291
    @thomasivarsson1291 1 month ago +3

    That sheds some light on the question of using SQL/T-SQL in Fabric. Still, I don't know why you should need an expensive MPP platform just to be able to use T-SQL. This is a weak point in Fabric.

  • @kevindoherty8788
    @kevindoherty8788 1 month ago +1

    Love GIAC, but why do we need so much complexity in Fabric? Two different endpoints with different functionality. It shouldn't be like this.

  • @sandeepbarge4699
    @sandeepbarge4699 1 month ago

    I am creating a notebook that calls a REST API to read the Fabric audit log and gets JSON results that I want to write to a Data Warehouse. A couple of sites said that a Spark dataframe can't write to a Warehouse. Is that true? If so, how do I write to the Data Warehouse? Is there an alternative way? I can't use Spark SQL because it can't call a REST API.
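For what it's worth, newer Fabric Spark runtimes ship a Warehouse connector that exposes a `synapsesql` writer on DataFrames, which would let a notebook land REST-API JSON in a Warehouse table; older runtimes lacked it, which may explain the "can't be done" advice. A hedged sketch, where the warehouse, schema, and table names are hypothetical and the connector's availability should be verified for your runtime:

```python
# Sketch: append REST-API audit rows to a Fabric Warehouse table from a
# notebook. The synapsesql writer comes from the Fabric Spark connector
# for Data Warehouse (an assumption about your runtime); names are fake.

def warehouse_table(warehouse: str, schema: str, table: str) -> str:
    """Build the three-part name the synapsesql writer expects."""
    for part in (warehouse, schema, table):
        if not part or "." in part:
            raise ValueError(f"invalid identifier: {part!r}")
    return f"{warehouse}.{schema}.{table}"

def write_audit_rows(rows: list) -> None:
    """Run inside a Fabric notebook session, where pyspark is available."""
    from pyspark.sql import SparkSession
    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame(rows)  # rows: list of dicts from the REST call
    # Append into the warehouse; requires the built-in Warehouse connector.
    df.write.mode("append").synapsesql(
        warehouse_table("AuditWH", "dbo", "FabricAuditLog"))
```

If the connector isn't available on your runtime, one alternative is to write the DataFrame to a Lakehouse Delta table and then load it into the Warehouse with cross-database T-SQL (`INSERT INTO ... SELECT`), since a Warehouse can query Lakehouse tables in the same workspace.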

  • @violatorwashere
    @violatorwashere 27 days ago

    Q: Hi Patrick, can you make a video about a specific challenge? Imagine an Excel file that connects to two different SQL Servers and uses about 3 tables. Only a handful of information arrives in Excel, always from Today(); it's narrowed down to load only a few columns and rows. This means that in the morning, if I open the file, it's empty (because it is set to refresh itself upon opening), but during the day it gathers information, and at the end of the working day it sends automated emails from those lines with the help of a Power Automate flow. The file is stored in SharePoint.

    The challenge is that I don't want to open the file, wait 10 seconds, and close it. I have created a Power Automate flow that theoretically should refresh the file (I included a script in the Excel file that I trigger), but it doesn't work; some sort of conflict appears. So I have to remember to open the Excel file every day, and yes, you guessed it, also on Saturday, because it won't empty itself and the flow runs once every day. I have searched the internet for a similar case, video, anything, and didn't find a solution. Maybe you have a great idea. Love to hear back from you. Thanks. Tibor

  • @tejaswijavvadi4046
    @tejaswijavvadi4046 1 month ago

    Q: Hi Patrick, can you make a video on how to improve end-user self-service capabilities in Power BI, or on the different ways of implementing self-service capabilities?

  • @TonyHoechbauer
    @TonyHoechbauer 1 month ago +1

    Love the little pop-up factoid defining DDL. Reminded me of Pop-Up Video on VH1.

  • @L_MarX_L
    @L_MarX_L 1 month ago +2

    Q: What would you recommend if I need to create a paginated report in Report Builder?

    • @brianengelbrechtandersen9435
      @brianengelbrechtandersen9435 1 month ago +1

      Not the author, but your source can be either a Lakehouse or a Data Warehouse. The real question is how you should design your data platform; otherwise you can use many different sources. In my case I build a Lakehouse for the Bronze and Silver layers, while for the Gold layer I use a Data Warehouse. Paginated reports could then connect either to the Lakehouse in Silver or to the Data Warehouse in Gold, depending on your data.

    • @L_MarX_L
      @L_MarX_L 1 month ago

      @@brianengelbrechtandersen9435 Thank you! I have a paginated report connected to SQL Server (via SSMS), which initially took about 12 minutes to load 740,000 rows. To improve performance, I moved the data to a Fabric Lakehouse, reducing the load time to 9 minutes. Next, I plan to create a Data Warehouse and run further tests.

  • @bharpurdahiya3489
    @bharpurdahiya3489 1 month ago

    Q: Can we subscribe to a Power BI report as a PDF without the hidden pages?

  • @GambillDataEngineering
    @GambillDataEngineering 1 month ago

    Q: Is it still around 5k per month as an entry-point cost to have a Fabric resource?

  • @gauthamanmt
    @gauthamanmt 7 days ago

    Excellent video, got the concept 🎉

  • @Krumelur
    @Krumelur 1 month ago

    But can I _INSERT_ data into a table I created using Spark when connecting to its SQL endpoint with Azure Data Studio or the like?

  • @davidmendez3997
    @davidmendez3997 1 month ago

    Q: What do you recommend for creating relationships to build a data model without using Datamarts? The idea is to connect to the data model via DirectQuery in Power BI.

  • @smilekp1
    @smilekp1 1 month ago

    I am currently using the SQL endpoint for one of my clients. I created a view and a few measures for reporting in the service. It shows a warning that if we create DAX on the view, the connection mode will switch from Live to DirectQuery. I'm not clear why, or how to avoid it.

  • @AbelGarcia-ki5nd
    @AbelGarcia-ki5nd 1 month ago

    I really like how you guys tease each other, good video!

  • @ramoasutik6770
    @ramoasutik6770 1 month ago

    I understand you may need more details, but could you advise me on whether it's better to start with SQL or Python as a data analyst?

    • @AndrewLe-me4kx
      @AndrewLe-me4kx 1 month ago +2

      If you are just starting out and have little to no experience in coding, start with SQL first.

    • @ramoasutik6770
      @ramoasutik6770 1 month ago

      @@AndrewLe-me4kx Thank you. 👍🏻