I Tried to Query 10 MILLION Rows in Postgres in 1 Second

  • Published 11 Jan 2025

COMMENTS • 12

  • @ahmad-murery 2 months ago +1

    I like query-optimization videos.
    The first thing I usually try is adding indexes to the columns in the WHERE / ORDER BY / GROUP BY clauses (see the sketch after this thread).
    Thanks Ben!

    • @DatabaseStar A month ago +1

      That's a pretty good place to start!
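
    A minimal sketch of that starting point, assuming a hypothetical orders table; the table, column, and index names here are illustrative, not from the video:

      -- Index the WHERE columns; Postgres can use one multicolumn index
      -- for an equality filter plus a range filter.
      CREATE INDEX idx_orders_status_created_at ON orders (status, created_at);

      -- A separate index can help a GROUP BY / ORDER BY on another column.
      CREATE INDEX idx_orders_customer_id ON orders (customer_id);

    Each extra index speeds up reads at the cost of slower writes and more disk, so indexing literally every column is rarely the right trade-off.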

  • @maycj 2 months ago +1

    You could run a CTE that adds the reason column and then filter on that column (sketched below).
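
    A minimal sketch of the CTE idea, again assuming a hypothetical orders table; the CASE rules are stand-ins for whatever logic defines the reason:

      WITH orders_with_reason AS (
          SELECT o.*,
                 CASE
                     WHEN o.status = 'failed' AND o.retries >= 3 THEN 'too_many_retries'
                     WHEN o.status = 'failed' THEN 'payment_failed'
                 END AS reason
          FROM orders o
      )
      SELECT *
      FROM orders_with_reason
      WHERE reason = 'payment_failed';

    Note that since Postgres 12 a non-recursive CTE referenced once like this is inlined by the planner, so it mainly improves readability: the reason expression is still computed per row at query time.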

  • @Ostap1974 A month ago +1

    Nice that you share the different ideas and attempts on the way to the result. I personally find the data consolidation a bit of cheating in this specific case. It's not far from creating a temp table (sketched after this thread), filling it with the needed records, and getting millisecond response times after that :)

    • @DatabaseStar A month ago

      Yeah, that's a fair point about the data consolidation, and it is a bit like cheating. If I had avoided it, I probably would have gotten a response time of around 5-10 s, which is above the 1 s goal.
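
      For comparison, the temp-table version of that "cheating" might look like this (a sketch with assumed names and a stand-in predicate):

        -- Pay the expensive predicate once...
        CREATE TEMP TABLE flagged_orders AS
        SELECT *
        FROM orders
        WHERE status = 'failed' AND retries >= 3;  -- stand-in for the real rules

        -- ...then every later read is a cheap scan of the small table.
        SELECT * FROM flagged_orders;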

  • @Yuusou1980 18 hours ago

    I wouldn't call it query optimization if you just move all the time-consuming operations into an UPDATE and then select rows directly with a single condition (sketched below).
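
    What that pattern looks like as a sketch (the column, rules, and index names are assumptions): the expensive classification runs once in an UPDATE, and the "optimized" query is reduced to a single indexed filter.

      ALTER TABLE orders ADD COLUMN IF NOT EXISTS reason text;

      -- Run the time-consuming logic once, ahead of query time.
      UPDATE orders
      SET reason = CASE
          WHEN status = 'failed' AND retries >= 3 THEN 'too_many_retries'
          WHEN status = 'failed' THEN 'payment_failed'
      END;

      CREATE INDEX IF NOT EXISTS idx_orders_reason ON orders (reason);

      -- The query itself is now trivial.
      SELECT * FROM orders WHERE reason = 'payment_failed';

    The trade-off is the one raised in the next comment: the precomputed column goes stale whenever the underlying rules or data change, so it has to be refreshed.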

  • @SadSadDeadM 2 months ago

    I assume that adding the "reason" column isn't a good option if the business logic (the definition of the rules) changes over time. Is that correct?

  • @yoskokleng3658 A month ago

    So, does that mean adding an index on every column except the primary key improves query speed?

    • @DatabaseStar A month ago

      You could try that. I tried indexes on some other columns, which helped a little (see the EXPLAIN sketch below).
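
      Before adding indexes on every column, it's worth checking whether the planner actually uses them; EXPLAIN ANALYZE shows the real plan and timings (a sketch with assumed names):

        EXPLAIN (ANALYZE, BUFFERS)
        SELECT * FROM orders WHERE reason = 'payment_failed';

      An "Index Scan" node in the output means an index is being used; a "Seq Scan" on a large table suggests the index is missing, unusable for that predicate, or skipped by the planner.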