HiveBrain v1.2.0

Optimizing "WHERE x BETWEEN a AND b GROUP BY y" query

Submitted by: @import:stackexchange-dba
Tags: group, where, query, between, optimizing, and

Problem

```
CREATE TABLE test_table
(
  id uuid NOT NULL,
  "RefId" uuid NOT NULL,
  "timestampCol" timestamp without time zone NOT NULL,
  "bigint1" bigint NOT NULL,
  "bigint2" bigint NOT NULL,
  "int1" integer NOT NULL,
  "int2" integer NOT NULL,
  "bigint3" bigint NOT NULL,
  "bigint4" bigint NOT NULL,
  "bigint5" bigint NOT NULL,
  "hugeText" text NOT NULL,
  "bigint6" bigint NOT NULL,
  "bigint7" bigint NOT NULL,
  "bigint8" bigint NOT NULL,
  "denormalizedData" jsonb NOT NULL,
  "textCol" text NOT NULL,
  "smallText" text NOT NULL,
  "createdAt" timestamp with time zone NOT NULL,
  "updatedAt" timestamp with time zone NOT NULL,
  CONSTRAINT test_pkey PRIMARY KEY (id)
);

SELECT "textCol", SUM("bigint1"), SUM("bigint2") -- etc, almost every single column gets aggregated
FROM "test_table"
WHERE "timestampCol" BETWEEN '2016-06-12' AND '2016-06-17'
GROUP BY "textCol"
ORDER BY SUM("bigint2"), SUM("bigint3") DESC -- the ORDER BY columns are dynamic, but there are only 4 possible combinations of columns.
LIMIT 50;
```


Please correct me where my understanding is incorrect: in Postgres, I can leverage an index on "timestampCol" or one on "textCol", but never both at the same time? The query plans I've pasted are only meant to illustrate the algorithms Postgres picks. The real tables have a few million rows, not just ~66,000.

```
CREATE INDEX timestamp_col_index ON test_table USING btree ("timestampCol");
```


A btree index on "timestampCol" means the query planner will first slice the whole dataset down to the rows between '2016-06-12' and '2016-06-17', then use a HashAggregate or a Sort + GroupAggregate to group the rows by "textCol".

```
GroupAggregate (cost=3925.50..4483.19 rows=22259 width=41) (actual time=80.764..125.342 rows=22663 loops=1)
Group Key: "textCol"
-> Sort (cost=3925.50..3981.45 rows=22380 width=41) (actual time=80.742..84.915 rows=22669 loops=1)
Sort Key: "textCol"
Sort Method: quicksort Memory: 2540kB
```

Solution

Idea 1

Judging by their names, the columns "denormalizedData" and "hugeText" seem to be comparatively big, probably many times as big as the columns involved in your query. Size matters for big queries like this. Very big values (> 2 kB) for text or jsonb get "TOASTed" (stored out of line), which can avert the worst. But even the remaining, smaller values stored inline can be several times as big as the columns relevant to your query, which span around 100 bytes.

Related:

  • Configuring PostgreSQL for read performance



Splitting columns relevant to the query into a separate 1:1 table might go a long way. (Depends on the complete situation. You add some storage overhead for another row header and another PK and writing to the tables gets a bit more complicated and expensive.)
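A sketch of such a 1:1 split, with illustrative table and column choices (the wide "hugeText" and "denormalizedData" columns move to a payload table keyed by the same id; the name "test_table_payload" is made up):

```
-- Hypothetical 1:1 payload table holding the wide columns.
CREATE TABLE test_table_payload
(
  id uuid PRIMARY KEY REFERENCES test_table (id),
  "hugeText" text NOT NULL,
  "denormalizedData" jsonb NOT NULL
);

-- After backfilling the payload table, drop the wide columns from the
-- hot table so the aggregate query scans a much narrower heap:
-- ALTER TABLE test_table
--   DROP COLUMN "hugeText",
--   DROP COLUMN "denormalizedData";
```

The extra row header and PK mentioned above are the price for the narrower hot table; the payload row is fetched by id only when actually needed.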
Idea 2

Also, as you confirmed, only 4 columns are relevant for determining the top 50.

You might have an angle there for a much smaller materialized view (MV) containing just those columns plus "timestampCol" and "textCol", and only the "last 2 weeks" or "last month" of data. Run a fast query on the MV to identify the top 50 "textCol" values, then retrieve only those rows from the big table. Or, to be precise, just the additional columns not contained in your MV; you already get sums for the MV columns in the first step.
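A minimal sketch of this two-step approach, assuming the MV is called mv_recent and that "bigint1" through "bigint4" are the four ORDER BY candidates (the real names and combinations may differ):

```
-- Hypothetical narrow MV: only the ORDER BY candidates plus the filter
-- and grouping columns, restricted to recent data. Note that now() is
-- evaluated at creation / refresh time, so the window moves per refresh.
CREATE MATERIALIZED VIEW mv_recent AS
SELECT "textCol", "timestampCol", "bigint1", "bigint2", "bigint3", "bigint4"
FROM   test_table
WHERE  "timestampCol" >= now() - interval '1 month';

-- Step 1 (subquery): identify the top 50 groups cheaply in the MV.
-- Step 2 (outer query): fetch sums of the remaining columns from the
-- big table, but only for those 50 groups.
SELECT t."textCol", SUM(t."bigint5") AS sum5  -- etc. for columns not in the MV
FROM   test_table t
JOIN  (
   SELECT "textCol"
   FROM   mv_recent
   WHERE  "timestampCol" BETWEEN '2016-06-12' AND '2016-06-17'
   GROUP  BY "textCol"
   ORDER  BY SUM("bigint2"), SUM("bigint3") DESC
   LIMIT  50
   ) top USING ("textCol")
WHERE  t."timestampCol" BETWEEN '2016-06-12' AND '2016-06-17'
GROUP  BY t."textCol";
```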

You only need an index on ("textCol") for the big table. And another one on ("timestampCol") for the MV - which would only be used for instances of your query with a selective WHERE clause. Else, it will be cheaper to sequentially scan the whole MV.
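In SQL, assuming the MV is named mv_recent (an illustrative name):

```
-- Index on the big table, to fetch rows for the identified top-50 groups:
CREATE INDEX ON test_table USING btree ("textCol");

-- Index on the MV, used only when the WHERE clause is selective enough;
-- otherwise a sequential scan of the small MV wins anyway:
CREATE INDEX ON mv_recent USING btree ("timestampCol");
```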

If many of your queries cover the same period of time, you might go one step further: save only one row per "textCol" in the MV, with pre-aggregated sums (maybe two or more MVs for a couple of frequently used time periods). You get the idea. That should be faster still.
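A sketch of such a pre-aggregated MV for one fixed window (the name and the choice of a two-week window are illustrative):

```
-- Hypothetical pre-aggregated MV: one row per "textCol" for a fixed,
-- frequently queried window.
CREATE MATERIALIZED VIEW mv_sums_2weeks AS
SELECT "textCol",
       SUM("bigint1") AS sum1, SUM("bigint2") AS sum2,
       SUM("bigint3") AS sum3, SUM("bigint4") AS sum4  -- etc.
FROM   test_table
WHERE  "timestampCol" >= now() - interval '2 weeks'
GROUP  BY "textCol";

-- The top-50 query then degenerates to a cheap sort over pre-computed sums:
SELECT "textCol", sum1, sum2
FROM   mv_sums_2weeks
ORDER  BY sum2, sum3 DESC
LIMIT  50;
```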

You might even create the MVs with the whole result set and refresh before the first new query for the day.
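Assuming a pre-aggregated MV named mv_sums_2weeks as sketched above, the daily refresh is a single statement (e.g. run from cron before the first query of the day):

```
REFRESH MATERIALIZED VIEW mv_sums_2weeks;

-- With a unique index on the MV, CONCURRENTLY avoids locking out readers
-- during the refresh:
-- CREATE UNIQUE INDEX ON mv_sums_2weeks ("textCol");
-- REFRESH MATERIALIZED VIEW CONCURRENTLY mv_sums_2weeks;
```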

Depending on exact numbers, you might combine both ideas.

Context

StackExchange Database Administrators Q#142729, answer score: 4
