Scan speed is extremely important when the data set is huge and cannot all fit in memory. On a large warehouse, the time spent scanning will usually dwarf the compute time of queries. So I agree that on a tiny 100GB benchmark, complex queries are more meaningful, but on a larger warehouse, scan speed and redistribution speed are the differentiators.
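To put rough numbers on this, here is a back-of-envelope sketch; the node count and per-node throughput below are illustrative assumptions, not measurements:

```python
# Back-of-envelope: scan time vs. compute time as data outgrows memory.
# All figures below are illustrative assumptions, not benchmark results.

TB = 1024 ** 4  # bytes per terabyte

def scan_seconds(data_bytes, nodes, per_node_throughput_bytes_s):
    """Time to read the full data set across a cluster in parallel."""
    return data_bytes / (nodes * per_node_throughput_bytes_s)

# A tiny 100 GB benchmark vs. a 100 TB warehouse, same 8-node cluster,
# assuming ~1 GB/s effective scan throughput per node.
for label, size in [("100 GB", 0.1 * TB), ("100 TB", 100 * TB)]:
    t = scan_seconds(size, nodes=8, per_node_throughput_bytes_s=1e9)
    print(f"{label}: ~{t:.0f} s just to scan")
# 100 GB: ~14 s     -> query compute time can dominate
# 100 TB: ~13744 s  -> (~3.8 h) scan speed dominates everything else
```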
While most comparisons only focus on speed or cost, you covered a number of parameters in detail. Thanks for sharing.
Good presentation. Also nice to see that Jimmi Simpson is expanding his horizons.
Excellent video. I really like the detailed approach to pricing calculations (20:00 onwards), e.g. BigQuery actually being more expensive than it appears to be.
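For anyone who wants to redo that math themselves, here is a minimal sketch. The rates are assumptions based on the public price cards around the time of the video (roughly $5 per TB scanned for BigQuery on-demand, ~1 credit/hour for an X-Small Snowflake warehouse at ~$2/credit) and should be checked against current pricing pages:

```python
# Rough cost comparison sketch. Prices are assumptions from public rate
# cards circa this video; check current pricing before relying on this.

def bigquery_on_demand_cost(tb_scanned_per_month, usd_per_tb=5.0):
    # BigQuery on-demand bills per TB of data scanned by queries.
    return tb_scanned_per_month * usd_per_tb

def snowflake_cost(hours_running_per_month, credits_per_hour=1.0,
                   usd_per_credit=2.0):
    # Snowflake bills for a running warehouse: an X-Small burns roughly
    # 1 credit/hour; the credit price varies by edition and region.
    return hours_running_per_month * credits_per_hour * usd_per_credit

# The "more expensive than it appears" effect: scan-based pricing grows
# with how much data your queries touch, not with wall-clock time.
print(bigquery_on_demand_cost(tb_scanned_per_month=200))  # 1000.0 USD
print(snowflake_cost(hours_running_per_month=160))        # 320.0 USD
```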
Thank you very much for this presentation. It was very well done and I appreciate the explanation of your choices.
I first used Sybase IQ in 1996. It was a hugely successful implementation. I would say this was the first columnar DB; it stemmed from an MIT group, if I recall correctly.
I joined Sybase in 1992 having been a Sybase customer since 1988.
How about Databricks? Or using Spark SQL to query data stored in Parquet files, either in HDFS or in S3 via a connector?
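For reference, that Spark SQL route is only a few lines. A minimal PySpark sketch, assuming a cluster with S3A credentials already configured; the bucket, path, and column names are hypothetical:

```python
# Minimal Spark SQL over Parquet sketch. The bucket/path and columns are
# hypothetical; assumes Hadoop S3A credentials are already configured.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-query").getOrCreate()

# Works the same with an HDFS path, e.g. "hdfs:///warehouse/events/".
events = spark.read.parquet("s3a://example-bucket/warehouse/events/")
events.createOrReplaceTempView("events")

daily = spark.sql("""
    SELECT to_date(event_time) AS day, count(*) AS n
    FROM events
    GROUP BY to_date(event_time)
    ORDER BY day
""")
daily.show()
```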
Great comparison & presentation!
I think it's better to frame OLTP vs. OLAP as Insert/Update/Delete architectural optimization vs. query (SELECT) optimization. The SELECT example you gave seems to be more of a difference between operational reports and analytical reports. But good stuff!
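To make that distinction concrete, here is a toy sketch (pure Python, purely illustrative, not a real engine) of why row layouts favor single-record writes while column layouts favor analytical aggregates:

```python
# Toy illustration of row-store (OLTP-friendly) vs. column-store
# (OLAP-friendly) layouts. Purely illustrative, not a real engine.

# Row store: each record is contiguous -> cheap to insert/update one order.
rows = [
    {"order_id": 1, "customer": "a", "amount": 10.0},
    {"order_id": 2, "customer": "b", "amount": 25.0},
]
rows.append({"order_id": 3, "customer": "a", "amount": 7.5})  # one write

# Column store: each column is contiguous -> an aggregate touches only
# the column it needs, but a single insert must hit every column.
cols = {
    "order_id": [1, 2, 3],
    "customer": ["a", "b", "a"],
    "amount":   [10.0, 25.0, 7.5],
}
total = sum(cols["amount"])  # scans one array; ignores the other columns
print(total)  # 42.5
```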
Next time you do benchmark testing, please include Teradata as well.
OLTP vs OLAP @2:30 👍
The latest version of our warehouse benchmark is at fivetran.com/blog/warehouse-benchmark
Partitioning is a huge part of Snowflake's architectural magic... Isn't it silly to exclude that from the benchmark testing?
Is he the same person from the Fivetran ETL company?
I don't want to throw a spanner in the works, but... why remove the best-performing aspects of a data warehouse in order to run a benchmark? Removing distribution, clustering, and sort/partition keys doesn't, in my opinion, produce a usable test, because you removed the best and most important parts. Data can and should be distributed and redistributed as copies in a warehouse. Re-sorting/restructuring has been used for variable data requirements for decades, and the most effective approach is to create multiple copies (which can also be materialized views). Isn't disk space cheap relative to CPU+RAM? And won't a complex data model with no indexing cause problems, coupled with filtering and no partitioning or distribution? And why test with a small data set on platforms built for very large data sets?
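The point about partitioning is easy to demonstrate with a toy sketch (pure Python, illustrative only): when rows are bucketed by a partition key, a filtered query can skip most buckets entirely instead of scanning everything.

```python
# Toy partition-pruning sketch: bucket rows by month so a date filter
# only scans matching buckets. Illustrative only; real warehouses do
# this with sort/cluster/partition keys and zone maps.
from collections import defaultdict

events = [("2019-01-05", 3), ("2019-01-20", 7), ("2019-06-11", 2),
          ("2019-06-30", 9), ("2019-12-25", 4)]

# "Partition" on load: group rows by month.
partitions = defaultdict(list)
for day, value in events:
    partitions[day[:7]].append((day, value))

# Query with a month filter: prune to one partition instead of all rows.
month = "2019-06"
scanned = partitions[month]        # 2 rows scanned instead of 5
print(sum(v for _, v in scanned))  # 11
```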
I'm going to guess that these data warehouses are becoming so broadly available and cheap that they're edging out traditional data storage platforms, and are becoming more frequently used by smaller organizations. So a benchmark like this, while not necessarily helpful for large companies that would fully leverage the capabilities of a cloud storage architecture, is still extremely useful for a larger number of small companies looking to use agile storage services at a competitive price.
Legends are still waiting to receive the presentation in their email, one day after registering via the link.
Another big cloud data warehouse provider is Alibaba Cloud MaxCompute. Are you going to include this product?
Great presentation
The really big problem with BigQuery is data governance. Permissions in BigQuery are horrible; it only has dataset-level permission granularity.
Is this still the case, or has BigQuery security improved since last year? Thx
Yes, a lot has changed since last year; there are already ACLs for BigQuery in beta.
Yuri Soares thanks, will research this as we are considering BigQuery.
@Chekmate99 Good! BigQuery integrates well with GCP products, but nowadays the best value-for-money data warehouse is Snowflake. If you want to do some crazy ML stuff, BigQuery could be the way to go; otherwise Snowflake is much better.
Yuri Soares thanks! We are also looking at Snowflake and Azure solutions. Our situation is similar to this video's case study: we are consolidating data from several key OLTP systems into a warehouse. Years ago we used Cisco's Data Virtualization tool to accomplish something similar, but now we want to leverage the cloud. The biggest challenge we've had in the past was getting the business user community on board with using these solutions (getting away from Excel spreadsheets, etc.).
Wonderful video. It should include Azure as well.
Nicely presented.
Excellent video. Please include SQL Data Warehouse (Azure Synapse Analytics).
Great, great video!
Sybase IQ appeared as a column-store database in the '90s and is still in use today, yet sadly nobody knows about it. Neither Sybase nor SAP (which acquired Sybase) bothered to market it.
That's really interesting - how did you first encounter it? I'd never heard of it but will be checking it out.
I loved Sybase as a customer and employee, but we could not market our way out of a paper bag. In 1996 Oracle was 3 times our size in revenue, but in new license sales we were nearly even. We were the 6th largest independent software company in the world. Traveling on a plane, I often had the following conversation (I like to talk to people). Me: "...I work for Sybase, a major database company." Other passenger: "I never heard of them, but I don't know anything about technology." Me: "I bet you a dollar you've heard of my competitor, Oracle." Them: "Oh, yes I have."
Why no Azure SQL Data Warehouse?
It's in the latest version: fivetran.com/blog/warehouse-benchmark
Great comparison, thanks!
Good comparison, very informative.
However, I don't believe this CEO has in-depth knowledge of each technology to answer questions like the one @34:27.
I think BigQuery is better for me.
nice talk!
"My instinct is that in general ahhhh..." Really? If you don't know for a fact you should just admit that you don't know. Period.
1 minute 30 seconds to get in 20 "uh"s. You failed the "Um" game.
Poor guy is nervous
ahh ... like .. ahhh ... like .... way way way.. ahh ... like ... 28 min. Content 6 min.
Agreed. Don't want to listen through. Who won?
Troll