I watched and took notes from your older video on dimensional modeling. Best explanation I’ve watched! Looking forward to watching this series too
Thanks!
As a 21-year-old struggling with this stuff, because we usually don't really get told exactly what to learn, I really appreciate your videos!
Thank You!
Thanks a lot Bryan. This is great info and hard to find, so please continue the series.
Thanks Bryan, I'm looking forward to your series. Take care.
Thanks. You too.
Love your explanations and thank you so much for sharing your expertise Bryan! You're a gem of a guy and I find your channel to be super valuable : )
Thank You!
Very excited about this series
This is exactly what we need.
Great Material !
Thanks a lot.
You're Welcome!
Loved it, and I enjoy your voice modulation :)
Thanks for the video! Would you always choose dimensional modeling when the goal is to support reporting and analytics? I ran into a situation where the analysts would rather use Salesforce's source data model, with some minimal denormalization, than its star schema version. They believe it would actually introduce MORE joins for them, since one table - e.g., Opportunity - is now split into a fact and a dimension table, so they don't see the benefit. How would you go about demonstrating the benefits for the end users in such a case? Thank you!
It is not always necessary to use a Star Schema, but it is usually a better option than an OLTP model. If the purpose is to use the data with Power BI, then a Star Schema is highly recommended by Microsoft. I'm not clear on Tableau. However, without a star schema, you can't easily support dimension history, i.e., SCD Type 2. So if a store changes from one region to another, all data related to the store is updated at that point in time, which means reports run a year ago won't match new reports year over year, and regional sales comparisons will no longer be valid.
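For example, here is a minimal SCD Type 2 sketch in SQL; the table and column names (DimStore, RegionName, etc.) are hypothetical, not from the video:

```sql
-- Minimal SCD Type 2 sketch: the store keeps its history when it moves regions.
CREATE TABLE DimStore (
    StoreKey      INT         NOT NULL,  -- surrogate key, one per version of the store
    StoreID       INT         NOT NULL,  -- business key from the source system
    StoreName     VARCHAR(50) NOT NULL,
    RegionName    VARCHAR(50) NOT NULL,
    EffectiveDate DATE        NOT NULL,
    ExpiryDate    DATE        NULL,      -- NULL marks the current version
    PRIMARY KEY (StoreKey)
);

-- When store 42 moves from East to West, close out the old row and add a new one.
UPDATE DimStore
   SET ExpiryDate = '2024-06-30'
 WHERE StoreID = 42 AND ExpiryDate IS NULL;

INSERT INTO DimStore (StoreKey, StoreID, StoreName, RegionName, EffectiveDate, ExpiryDate)
VALUES (1042, 42, 'Downtown', 'West', '2024-07-01', NULL);
```

Fact rows keep pointing at the surrogate key that was current when the sale happened, so last year's regional reports still reproduce exactly.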
If the business does not care about that and is OK with a changing snapshot of the data, then the next thing to look at is the complexity of the model. If it's just a few tables, it may not matter, but if you need to join many tables and nest joins, like Opportunity to Order to Product to Product Category, it will likely not perform well.
Best bet is to test both models against what you expect the primary queries to be like and see how they hold up. It comes down to what delivers the most value to the business.
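To make that test concrete, here is a sketch of the two query shapes you would benchmark; all table and column names are made up for illustration:

```sql
-- Source-model shape: the query chains through several joins.
SELECT pc.CategoryName, SUM(o.Amount) AS TotalAmount
FROM   Opportunity opp
JOIN   OrderHeader o      ON o.OpportunityID = opp.OpportunityID
JOIN   Product p          ON p.ProductID     = o.ProductID
JOIN   ProductCategory pc ON pc.CategoryID   = p.CategoryID
GROUP BY pc.CategoryName;

-- Star-schema shape: one hop from the fact table to a flattened dimension,
-- because the category was denormalized into DimProduct during ETL.
SELECT d.CategoryName, SUM(f.Amount) AS TotalAmount
FROM   FactOpportunity f
JOIN   DimProduct d ON d.ProductKey = f.ProductKey
GROUP BY d.CategoryName;
```

If the analysts run both against realistic data volumes, the difference in join depth usually speaks for itself.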
Love this! With the fact tables that have the numeric values, would these values be calculated in the transformation process in a data pipeline? For example, in Databricks?
Generally, no. Facts usually are things like the sales amount, tax amount, etc. - the things you want to aggregate - and those come straight from the source data. But sometimes you want to create a calculated column like profit = salesamount - cost just to make it easier for reporting. This can be done in the ETL pipeline.
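As a sketch, that derived column might be added during the load; this runs as Spark SQL on Databricks or in most warehouses, and the staging table and column names are assumptions:

```sql
-- Add the derived Profit column once during ETL so reports don't each recompute it.
INSERT INTO FactResellerSales (OrderDateKey, ProductKey, SalesAmount, TotalProductCost, Profit)
SELECT  OrderDateKey,
        ProductKey,
        SalesAmount,
        TotalProductCost,
        SalesAmount - TotalProductCost AS Profit  -- the calculated column
FROM    staging_sales;
```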
@BryanCafferky Thank you very much for your reply, that makes sense! ☺️
@coffeemusiccoding You're welcome!
Yes!
Hello, thank you for the video, very informative. I'm just wondering, isn't there a risk of table deadlock since the table FactResellerSales is always the entry point for most of the queries? Thank you for your response :).
Good question. Deadlocks are only possible during table loads, but loads should be done one table at a time to avoid the issue. The main thing is that the fact table must be updated after the dimension tables. Reading the tables will not cause deadlocks.
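A minimal sketch of that load order, assuming hypothetical staging and dimension tables (only FactResellerSales comes from the question above):

```sql
-- Load one table at a time: dimensions first, the fact table last,
-- so every foreign key in the fact has a dimension row to land on.
INSERT INTO DimProduct (ProductKey, ProductName)
SELECT ProductKey, ProductName FROM staging_product;

INSERT INTO DimReseller (ResellerKey, ResellerName)
SELECT ResellerKey, ResellerName FROM staging_reseller;

INSERT INTO FactResellerSales (ProductKey, ResellerKey, SalesAmount)
SELECT ProductKey, ResellerKey, SalesAmount FROM staging_sales;
```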
When you are describing what's on the screen, can you please use the mouse to point to what you are referring to?
It can be tricky using mouse pointers in the recordings; I've experimented with them. I usually animate slide bullet points so you focus on the current bullet point. Which specific parts of the presentation were you confused by?
I made a mistake in an interview today and confused the star schema with the three normal forms. I also stated the star schema was normalized when it was denormalized... oh well.
The important part of the star schema, the fact table, IS in third normal form. Only the dimensions are 2NF, for simplicity and speed. Pure 3NF is not as important in a reporting database as it is in a transactional database. I like to say that a star schema is a reorganization, not a denormalization. The more important "denormalization" is combining header and detail tables into one table.
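For instance, a sketch of that header/detail flattening; the table and column names are hypothetical:

```sql
-- Flatten header and detail into one fact table: order-level columns from the
-- header are repeated on every line-item row, so the grain is one order line.
INSERT INTO FactSales (OrderNumber, OrderDateKey, CustomerKey, ProductKey, Quantity, SalesAmount)
SELECT  h.OrderNumber,
        h.OrderDateKey,   -- header columns, repeated per line item
        h.CustomerKey,
        d.ProductKey,     -- detail columns carry the measures
        d.Quantity,
        d.SalesAmount
FROM    SalesOrderHeader h
JOIN    SalesOrderDetail d ON d.OrderNumber = h.OrderNumber;
```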
@Milhouse77BS Thank you for clearing this up for me!
What position did you apply for?
@mindyourown6320 Sr. Healthcare Data Analyst. Just did a second interview with them today.
Snowflake doesn't enforce keys; they are only informational. That forces devs and architects to handle data quality on their end, so a dimensional model might be daunting to implement, but it's still doable :)
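For example, a minimal sketch of the kind of post-load check that stands in for an enforced foreign key (DimProduct and its columns are assumptions):

```sql
-- Snowflake won't reject orphaned fact rows, so find them explicitly:
-- fact rows whose ProductKey has no matching row in the dimension.
SELECT f.ProductKey, COUNT(*) AS orphan_rows
FROM   FactResellerSales f
LEFT JOIN DimProduct d ON d.ProductKey = f.ProductKey
WHERE  d.ProductKey IS NULL
GROUP BY f.ProductKey;
```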