Thanks for all of the comments everyone! I'm glad a couple of issues have been pointed out. I'll address them when I redo the process in a follow-up to this video, including:
- use SQL Server Developer Edition rather than Express to avoid the database size limit and import all 1B rows
- use another version of Oracle to also avoid the database size limit and import all 1B rows
- consider optimising the queries or changing some parameters to further improve the performance of the SELECT
If doing this with 250 million rows and multiplying by 4 was a correct approach, then for all databases you could just use 1000 rows and multiply by a million.
Perhaps consider using the Developer Edition, as it is equivalent to the Enterprise Edition. Since the project is not intended for production purposes, there should be no issue in utilizing it.
@@jpsolares We all know that. What I am saying is, you can't test something with 250 million rows, multiply by 4, and then claim it is equal to doing it with 1 billion rows. It looks like you understood it as if I were asking how to do it.
Good point! The only reason I used 250m for Oracle and SQL Server was because of the size limitations mentioned in the video. In hindsight (and from the other comments), I could have used the SQL Server Developer edition to load the full 1 billion rows. I could also have used a different edition for Oracle just for this experiment.
@@DatabaseStar No problem. I later read about it a bit, and it was indeed a challenge where you could use any kind of optimization you want. In my view, that made your SQL Server and PostgreSQL samples void for me. I don't use MySQL or Oracle, but at least I know your sample is void for MySQL as well. You are doing the uploads and querying just how a beginner would. No tricks, no optimizations, no use of extensions, CLR, etc.
I could create better versions for PostgreSQL and SQL Server, but only when I have spare time and/or I think it is worth doing.
Yes, that's true. I mention in the video that I don't make any optimisations, such as indexes or adjustments to the process. The default settings are used on purpose. There are definitely ways to make it faster in each database!
In SQL Server, if you change the auto-growth from the default 64 MB to 10,000 MB for both the data and log files, it will run much faster. Also, SQL Server has external tables as well.
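To illustrate the auto-growth tweak, something roughly like this should do it (the database and logical file names here are placeholders; check sys.master_files for the real ones):

-- grow in 10,000 MB steps instead of the 64 MB default
ALTER DATABASE OneBillionRows
  MODIFY FILE (NAME = OneBillionRows_data, FILEGROWTH = 10000MB);
ALTER DATABASE OneBillionRows
  MODIFY FILE (NAME = OneBillionRows_log, FILEGROWTH = 10000MB);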
As does Postgres, but he only used that feature with Oracle. For every other DB he copied the data in. Totally not pushing Oracle…
Thanks for the tip, I wasn't aware of that! I left all of the databases at their default settings but this would have been a good one to change to avoid the issue.
No, I'm not pushing Oracle, and I left all of the settings as the defaults. I did an additional method for Oracle because I was aware of its ability to use an external table.
The time needed for a given query doesn't grow linearly with the size of the data (6:00).
Yes I completely agree. I mentioned in the video it's not a true comparison as I was unable to load the full 1B rows into Oracle and SQL Server.
To solve the growth problem in SQL Server, when creating the database, click on the "..." button and choose growth as a percentage. This avoids having to specify a fixed value that may fall short (that is, it doesn't establish a growth limit), and it solves the database size problem.
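If you prefer to script it, the equivalent at creation time is roughly this (file names and paths are just placeholders):

CREATE DATABASE OneBillionRows
  ON (NAME = obr_data, FILENAME = 'C:\data\obr.mdf', FILEGROWTH = 10%)
  LOG ON (NAME = obr_log, FILENAME = 'C:\data\obr_log.ldf', FILEGROWTH = 10%);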
Good tip, thanks for sharing!
You can freely and legally use the Developer edition of MS SQL Server. As long as no production data is loaded, the Developer edition is free.
Thanks for the tip! I should have used that edition, in hindsight, for a better comparison.
Oh, I love Oracle! I've been using it for years and am fascinated by all its features, functions, and properties. Its direct path loading feature is also awesome.
Yeah, it does have a lot of features.
You should extend your test to databases like SAP HANA, IBM DB2, Snowflake, Google BigQuery, MariaDB, and SQLite.
Good idea! I don't have any experience with those databases unfortunately.
@@DatabaseStar You can find free/community editions for SAP HANA (Express) and DB2 (Community Edition).
Please include SQLite
Good idea
In a real-world scenario, you likely wouldn't keep that data as just two columns, city and temperature. Checking the original challenge, it is more about parsing data. Plain streaming with languages (the C family, Java, Go, Rust, ...) is expected to beat any database at that, IMHO. They don't need to store all the values, just the min, max, and average (i.e. sum and count).
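In SQL terms the whole thing basically reduces to a single aggregate anyway (assuming a table with city and temperature columns):

SELECT city, MIN(temperature), MAX(temperature), AVG(temperature)
FROM measurements
GROUP BY city
ORDER BY city;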
Yeah the results of the Java challenges were impressive. One of the points of this video is that databases are great at processing data, so I wanted to see how the different databases handled it.
@@DatabaseStar I indirectly started to dig into what that was about: those results were not from a file on disk but from a RAM disk, and the machine used was an AMD EPYC 7502P with 32 cores and 128 GB of memory! That CPU is supposedly more than 5 times faster than yours, and it's a server CPU.
Working directly from memory on beast hardware, those timings are not fair to compare with yours. Maybe you should also run one or more of their solutions to see what they score on the same hardware.
(Trying to allocate time for a PostgreSQL and/or Go check.)
Loading 250m rows 4 times is not equal to loading 1 billion rows. The experiment is really flawed; you need to use 250m rows for all of them. Looking forward to another test with the same yardstick. Kudos!
Thanks! Yeah a few people had pointed that out, and I mentioned it a couple of times in the video. I plan on doing a follow-up video for it.
It is interesting to see a real test of the difference in load/SELECT speed between DBs, and how out of the box the Oracle numbers (SELECT) are more than 13x faster than Postgres and MySQL.
I wonder if there is a way to tweak some parameters in Postgres/MySQL (especially MySQL) to reduce those numbers and make them comparable to Oracle.
Thanks a lot for the demonstration, it is a great video.
Glad you liked the video! Yes I thought the test was interesting as I was doing it.
I think there would be ways to tweak some parameters on the server to improve the import process. The default settings may not be the fastest.
Also, there are indexes that could have been added after the import, which would have improved the query performance.
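For example, something as simple as this on the import table (the table and column names are assumptions from my setup):

CREATE INDEX idx_measurements_city ON measurements (city);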
It is only like that the way he did it.
I don't use MySQL, but look at the Vitess project, for example. AFAIK it is what made it possible for YouTube to store their videos on MySQL. You may also check out PlanetScale.
Have you used ClickHouse or DuckDB??
No, I haven't used them actually.
this sounds like a job for DuckDB 🦆🔥💪
Oh good to know!
One of the best channels on database
Thanks!
Concurrency is a critical factor (one of many others) in the context of this scenario. Based on my observations, SMP databases exhibit subpar performance when managing such workloads in a production setting. In contrast, MPP databases such as Teradata, Snowflake, and Netezza offer a more suitable solution. A decade ago, I was involved in an Oracle-to-Teradata migration project, where we conducted parallel testing by executing identical queries on both database platforms. The disparities we encountered were stark and significant.
Tell us more about it, please!
Good point, yes I imagine concurrency could impact the performance here.
Use bcp, or load an MDF on a non-production server, then copy the file and sp_attach_db it. You'll need to stop the service, then attach it. I can think of a half dozen ways to do this, depending on business constraints.
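A bare-bones bcp load might look something like this (the database, table, server, and file names are placeholders):

bcp OneBillionRows.dbo.measurements in measurements.txt -c -t ";" -S localhost -T -b 100000

Here -c loads character data, -t sets the field terminator, -T uses a trusted connection, and -b sets the batch size.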
Good to know there are many other ways to do this
Interesting, thank you for sharing. For the postgres part, do you think the pagination negatively affected the results?
No problem! I think it did, actually, based on a couple of other comments. In hindsight I should have shown all results at once in the query to get a more accurate number.
Do DuckDB !!
Good idea
For SQL Server you can use the Developer edition.
You should:
1. Use SQL Server Developer Edition.
2. Also monitor memory and CPU usage.
3. Use the same client tools, like DBeaver, or a minimal client like each vendor's CLI.
Good work!
Thanks! Yeah a few other commenters have pointed that out, and in hindsight I should have done that.
Newbie here. I hope you can also make a video about optimizing this challenge for MySQL and Postgres. But I think that might be impractical, since there would be better tools to use for this sort of thing.
That's a good idea.
Just out of curiosity, can you share your machine's specs? From the create...sh timing it looks like you have a faster machine than mine, but it would be good to know.
Sure! Here are the specs:
Lenovo Ideapad 5 laptop
CPU: Intel i5 2.4 GHz
RAM: 16 GB
OS: Windows 11
You can get the Developer Edition of SQL Server, which is free
Thanks for sharing! In hindsight, I should have done this instead of using Express, because it would have been a fairer comparison.
That could change the run times. Greetings!
Thanks!
GREAT CONTENT🔥
Thanks!
How about SQL*Loader in Oracle?
Good idea!
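Something like this, I believe (untested; the file, table, and column names are placeholders). A control file, say load.ctl:

LOAD DATA
INFILE 'measurements.txt'
APPEND INTO TABLE measurements
FIELDS TERMINATED BY ';'
(city, temperature)

Then run it with direct-path loading enabled:

sqlldr userid=test/test control=load.ctl direct=true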
You should use the SQL Server Developer edition; it's not limited.
That's true! Others have mentioned it and I wasn't aware of that when I made the video.
Your PostgreSQL SELECT benchmark was also not correct. It wasn't 27 minutes, it was much lower. The query had already returned all the results; it was DBeaver that was paginating the result.
Oh thanks for pointing that out, that's good to know. I'll look at it again as it seemed like DBeaver took several minutes each time it loaded a new page.
So, what's your hardware configuration, just a laptop?
Yeah, it's my Windows laptop:
Lenovo Ideapad 5 laptop
CPU: Intel i5 2.4 GHz
RAM: 16 GB
OS: Windows 11
How fast would it be to load all the data into an in-memory database and query it?
In-memory will only be a bit faster when loading, but up to 1000x faster when querying.
Good question. I agree with what Nachtaktiv mentioned, the benefits would be when querying.
Very helpful!
Thanks!
What if you indexed the city names? How fast would the queries be?
They would probably be quite a bit faster!
…some of the Oracle min/max comparison times are only for fetching 200 rows!!!
…and why not load 500m rows and multiply by 2?
Good point! I tried to load 500m rows but that also exceeded the maximum size of the database (10 GB).
Now do the same with duckdb :)
Good idea!
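For anyone curious, DuckDB can query the text file in place with something like this (untested; the delimiter and column names are assumptions):

SELECT city, MIN(temperature), MAX(temperature), AVG(temperature)
FROM read_csv('measurements.txt', delim=';', header=false,
              columns={'city': 'VARCHAR', 'temperature': 'DOUBLE'})
GROUP BY city;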
What about comparing this to a SQL serverless endpoint using an external table? This can be done in Synapse or Fabric. That would be a fair comparison to the Oracle external table. Otherwise insightful, thank you.
Good idea, I don't have any experience with those so I didn't include them
Why didn't you use the SQL Server Developer edition, to avoid the Express problems?
Good point! In hindsight I should have done this, and it would have been a better comparison.
Great video. But it got me curious: what about NoSQL databases? I know they are faster than SQL, but by how much? Can you make a video for NoSQL databases, like Mongo, Cassandra, etc.?
Or just use Unix tools efficiently, directly and/or within or as a pipe to the generator script itself 😂
Thanks! They could be faster, but I don't have a lot of experience with NoSQL so I just focused on SQL for this video.
Clearly trying to push Oracle. Why not use Postgres to directly read the text file for a better comparison?
Good idea! In my research I couldn't find a way to read a text file directly using Postgres. Do you know how it could be done?
I'm not trying to push Oracle at all. My preferred database to use is actually Postgres :)
You forgot SQLite, the most deployed SQL database in the world.
Yeah I left out SQLite as well as many other databases as I'm not that familiar with them and don't teach them on this channel (DB2, MariaDB, and so on).
Wow, SQL Server and Oracle do not exactly cover themselves in glory here; "sorry, import too big" does not cut it IMHO, and SQL Server needs to step up its game. I wonder why MySQL is that slow and whether MariaDB would do better?
True, but I was using the "free" edition of their databases and they are commercial products.
I'm not sure why MySQL was slower, perhaps it's not designed for this kind of data. Or maybe the defaults for MySQL are not great for this, and there are settings I can change to improve it.
@@DatabaseStar Right, I thought it was a bit strange; now it makes perfect sense, thanks for clarifying.
Okay, then let's try MariaDB,
and then we give them the challenge: under 9 minutes.
Good idea
Very impressive
Thanks!
How would you do it in Python?
I'm not sure, I don't have a lot of experience in Python. I assume you can write some code to read the CSV file and analyse it directly.
Awesome
Thanks!
perfect
Thanks!
Dude, just keep everything the same.
What do you mean?
Wow! Why is Oracle so fast?
I think because it's a more "premium" database and it's built for high-end performance. Or maybe the default settings are better for importing large sets of data compared to the others.
Or because it wasn't a fair comparison. Both Postgres and SQL Server can directly read external files, but you chose to copy the data in for everything but Oracle.
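In Postgres, for instance, the file_fdw extension can expose the text file as a foreign table, roughly like this (the path and column names are just placeholders):

CREATE EXTENSION file_fdw;
CREATE SERVER import_files FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE measurements_ext (city text, temperature numeric)
  SERVER import_files
  OPTIONS (filename '/data/measurements.txt', format 'csv', delimiter ';');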
@@lucca5101 Interesting 🤔
I couldn't find any documentation on reading a text file directly in Postgres, SQL Server, or MySQL, so I didn't include those in this video. I still included an "import file into table" method for Oracle to see how they would compare as well.