Data alignment and storage size
Actually, the overhead per index tuple is 8 bytes for the tuple header plus 4 bytes for the item identifier.
We have three columns for the primary key:
PRIMARY KEY ("Timestamp" , "TimestampIndex" , "KeyTag")
"Timestamp" timestamp (8 bytes)
"TimestampIndex" smallint (2 bytes)
"KeyTag" integer (4 bytes)
Results in:
4 bytes for the item identifier in the page header (not counted towards the multiple of 8 bytes)
8 bytes for the index tuple header
8 bytes "Timestamp"
2 bytes "TimestampIndex"
2 bytes padding for data alignment
4 bytes "KeyTag"
0 padding to the nearest multiple of 8 bytes
-----
28 bytes per index tuple; plus some bytes of overhead.
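The estimate above can be checked against the actual index. A sketch, assuming the primary-key index is named "AnalogTransition_pkey" (adjust to your actual index name):

```sql
-- Total on-disk size of the primary-key index (name assumed):
SELECT pg_size_pretty(pg_relation_size('"AnalogTransition_pkey"'));

-- Divide by the row count for a rough per-tuple average
-- (includes page headers and internal pages, so it runs a bit higher):
SELECT pg_relation_size('"AnalogTransition_pkey"') / count(*) AS bytes_per_index_tuple
FROM   "AnalogTransition";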
More about measuring object size in this related answer:
Order of columns in a multicolumn index
Read these two questions and answers to understand:
With your index (the primary key) the way it is, you can retrieve rows without a sorting step. That's appealing, especially with LIMIT. But retrieving the rows themselves seems extremely expensive.
Generally, in a multi-column index, "equality" columns should go first and "range" columns last:
Therefore, try an additional index with reversed column order:
CREATE INDEX analogtransition_mult_idx1
ON "AnalogTransition" ("KeyTag", "TimestampIndex", "Timestamp");
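To illustrate the rule of thumb (values hypothetical), the reversed index can serve a query of this shape with a single index scan:

```sql
SELECT *
FROM   "AnalogTransition"
WHERE  "KeyTag" = 42                      -- equality on leading column
AND    "TimestampIndex" = 0               -- equality on second column
AND    "Timestamp" >= '2013-10-01 00:00'  -- range on last column
ORDER  BY "Timestamp"
LIMIT  100;
```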
It depends on data distribution, but with millions of rows, even billions of rows, this might be substantially faster.
Including "Timestamp" makes the index tuple 8 bytes bigger, due to data alignment & padding. If you are using this as a plain index (not as primary key), you might try dropping the third column "Timestamp". That may be a bit faster, or not (since the column can help with sorting).
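A sketch of the two-column variant (the index name is my invention):

```sql
CREATE INDEX analogtransition_mult_idx2
ON "AnalogTransition" ("KeyTag", "TimestampIndex");
```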
You might want to keep both indexes. Depending on a number of factors, your original index may be preferable - in particular with a small LIMIT.
autovacuum and table statistics
Your table statistics need to be up to date. I am sure you have autovacuum running.
Since your table seems to be huge and statistics are important for the right query plan, I would substantially increase the statistics target for the relevant columns:
ALTER TABLE "AnalogTransition" ALTER "Timestamp" SET STATISTICS 1000;
... or even higher with billions of rows. Maximum is 10000, default is 100.
Do that for all columns involved in WHERE or ORDER BY clauses. Then run ANALYZE.
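To see which columns currently have a raised target, you can inspect the catalog. A sketch (catalog column names as of current Postgres versions):

```sql
-- -1 means the column uses default_statistics_target (default 100):
SELECT attname, attstattarget
FROM   pg_attribute
WHERE  attrelid = '"AnalogTransition"'::regclass
AND    attnum > 0
AND    NOT attisdropped;
```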
Table layout
While you are at it, if you apply what you have learned about data alignment and padding, this optimized table layout should save some disk space and help performance a little (ignoring primary and foreign keys):
CREATE TABLE "AnalogTransition"(
"Timestamp" timestamp with time zone NOT NULL,
"KeyTag" integer NOT NULL,
"TimestampIndex" smallint NOT NULL,
"TimestampQuality" smallint,
"UpdateTimestamp" timestamp without time zone, -- (UTC)
"QualityFlags" smallint,
"Quality" boolean,
"Value" numeric
);
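To verify the saving, you might compare the actual row footprint before and after the change; a sketch using pg_column_size():

```sql
-- Data portion of one heap tuple, padding included (excludes the
-- ~23-byte heap tuple header and the item identifier):
SELECT pg_column_size(t.*) AS row_data_bytes
FROM   "AnalogTransition" t
LIMIT  1;
```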
CLUSTER / pg_repack / pg_squeeze
To optimize read performance for queries that use a certain index (be it your original one or my suggested alternative), you can rewrite the table in the physical order of the index. CLUSTER does that, but it's rather invasive and requires an exclusive lock for the duration of the operation.
pg_repack is a more sophisticated alternative that can do the same without an exclusive lock on the table. pg_squeeze is a later, similar tool (I have not used it yet).
This can help substantially with huge tables, since much fewer blocks of the table have to be read.
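A minimal sketch of the CLUSTER variant (index name assumed; remember the exclusive lock, and that new rows are not kept in this order, so repeat occasionally):

```sql
CLUSTER "AnalogTransition" USING "AnalogTransition_pkey";
ANALYZE "AnalogTransition";  -- refresh statistics after the rewrite
```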
RAM
Generally, 2 GB of physical RAM is just not enough to deal with billions of rows quickly. More RAM might go a long way - accompanied by adapted settings: obviously a bigger effective_cache_size to begin with.
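As a sketch (the value is an assumption for a box with, say, 16 GB of RAM; tune to your hardware):

```sql
ALTER SYSTEM SET effective_cache_size = '12GB';
SELECT pg_reload_conf();  -- this setting does not require a restart
```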
You cannot avoid it in all cases (if more than 50% of the rows have the same value in the column). One way to achieve a "shuffle" result similar to what you want is with a window function:
WITH cte AS
( SELECT *,
ROW_NUMBER() OVER (PARTITION BY colX) AS rn
FROM tableX
)
SELECT *
FROM cte
ORDER BY rn, colX ;
The above will not avoid all cases, though. If, for example, the values in the column are 1,1,1,1,2,2,3, you'll get:
1,2,3,1,2,1,1
and not the (better):
1,2,1,3,1,2,1
There is a general technique to achieve this with UNION ALL and LIMIT:

Postgres evaluates nested SELECTs in order and stops as soon as enough rows have been returned. The rest is never executed.

This optimization does not happen with an outer ORDER BY, which forces Postgres to collect all candidate rows and sort them before applying the LIMIT. Nor does it work for UNION (instead of UNION ALL), which also considers all rows before removing duplicates and, finally, applying the LIMIT.

You need parentheses around each nested SELECT that has ORDER BY or LIMIT, in addition to the outer LIMIT.
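A sketch of the pattern (table, columns, and values hypothetical):

```sql
-- Each branch is tried in order; execution stops once 10 rows are out.
(
SELECT *
FROM   tbl
WHERE  key = 1
ORDER  BY ts
LIMIT  10
)
UNION ALL
(
SELECT *
FROM   tbl
WHERE  key = 2
ORDER  BY ts
LIMIT  10
)
LIMIT  10;
```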