The data type uuid is perfectly suited for the task. It only occupies 16 bytes as opposed to 37 bytes in RAM for the varchar or text representation. (Or 33 bytes on disk, but the odd number would require padding in many cases, making it 40 bytes effectively.) And the uuid type has some more advantages.
Example:
SELECT md5('Store hash for long string, maybe for index?')::uuid AS md5_hash;
You might consider other (cheaper) hashing functions if you don't need the cryptographic component of md5, but I would go with md5 for your use case (mostly read-only).
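To see the relationship outside the database, here is a small Python sketch (the helper name md5_as_uuid is mine): an md5 digest is exactly 16 bytes, the same width as a UUID, so it maps onto the uuid type without loss, just like the ::uuid cast above.

```python
import hashlib
import uuid

def md5_as_uuid(s: str) -> uuid.UUID:
    """Pack an md5 digest into a UUID -- same value as md5(s)::uuid in Postgres."""
    return uuid.UUID(bytes=hashlib.md5(s.encode()).digest())

h = md5_as_uuid('Store hash for long string, maybe for index?')
print(h)             # canonical hyphenated representation
print(len(h.bytes))  # 16 -- half the size of the 32-character hex text
```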
A word of warning: for your case (immutable once written) a functionally dependent (pseudo-natural) PK is fine. But the same would be a pain where updates on the text are possible. Think of correcting a typo: the PK and all depending indexes, FK columns in "dozens of other tables" and other references would have to change as well. Table and index bloat, locking issues, slow updates, lost references, ...
If the text can change in normal operation, a surrogate PK would be a better choice. I suggest a bigserial column: its range of -9223372036854775808 to +9223372036854775807 (over nine quintillion distinct values) easily covers billions of rows. That might be a good idea in any case: 8 instead of 16 bytes for dozens of FK columns and indexes! Or use a random UUID for much bigger cardinalities or distributed systems. You can always store said md5 (as uuid) additionally, to find rows in the main table from the original text quickly.
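A minimal sketch of that layout (table and column names are my assumptions):

```sql
-- Surrogate bigserial PK, plus the md5 stored as uuid for fast lookup:
CREATE TABLE big_text (
  big_text_id bigserial PRIMARY KEY  -- 8 bytes; cheap to reference in FKs
, md5         uuid NOT NULL          -- md5 of txt, stored as uuid
, txt         text NOT NULL
);
CREATE INDEX big_text_md5_idx ON big_text (md5);

-- Find a row from the original text:
SELECT big_text_id
FROM   big_text
WHERE  md5 = md5('some long string')::uuid;
```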
To address @Daniel's comment: If you prefer a representation without hyphens, remove the hyphens for display:
SELECT replace('90b7525e-84f6-4850-c2ef-b407fae3f271', '-', '')
But I wouldn't bother. The default representation is just fine. And the problem's really not the representation here.
If other parties should have a different approach and throw strings without hyphens into the mix, that's no problem either. Postgres accepts several reasonable text representations as input for a uuid. The manual:
PostgreSQL also accepts the following alternative forms for input: use
of upper-case digits, the standard format surrounded by braces,
omitting some or all hyphens, adding a hyphen after any group of four
digits. Examples are:
A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A11
{a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11}
a0eebc999c0b4ef8bb6d6bb9bd380a11
a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11
{a0eebc99-9c0b4ef8-bb6d6bb9-bd380a11}
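Python's uuid module happens to be similarly lenient, which makes for a quick illustration of how these alternative spellings all normalize to one value (this demonstrates Python's parser, not Postgres itself, but the principle is the same):

```python
import uuid

# The five alternative input forms from the Postgres manual:
forms = [
    'A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A11',     # upper-case digits
    '{a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11}',   # surrounded by braces
    'a0eebc999c0b4ef8bb6d6bb9bd380a11',         # hyphens omitted
    'a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11',  # hyphen per group of four
    '{a0eebc99-9c0b4ef8-bb6d6bb9-bd380a11}',    # braces, partial hyphens
]
parsed = {uuid.UUID(f) for f in forms}
assert len(parsed) == 1  # all five collapse to a single UUID value
print(parsed.pop())
```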
What's more, the md5() function returns text. You would use decode() to convert to bytea, and the default representation of that is:
SELECT decode(md5('Store hash for long string, maybe for index?'), 'hex')
\220\267R^\204\366HP\302\357\264\007\372\343\362q
You would have to encode() again to get the original text representation:
SELECT encode(my_md5_as_bytea, 'hex');
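For comparison, the same round trip sketched in Python, where bytes.fromhex and bytes.hex play the roles of Postgres decode() and encode():

```python
import hashlib

md5_text = hashlib.md5(b'Store hash for long string, maybe for index?').hexdigest()
as_bytes = bytes.fromhex(md5_text)   # like decode(md5_text, 'hex')
print(len(md5_text))                 # 32 hex characters of text
print(len(as_bytes))                 # 16 raw bytes
assert as_bytes.hex() == md5_text    # like encode(..., 'hex'): round trip restores the text
```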
To top it off, values stored as bytea would occupy 20 bytes in RAM (and 17 bytes on disk, 24 with padding) due to the internal varlena overhead, which is particularly unfavorable for the size and performance of simple indexes.
Everything works in favor of a uuid here.
Two indexes
Simply create two separate indexes. PostgreSQL will use both where appropriate.
CREATE INDEX ON table1 (account);
CREATE INDEX ON table1 USING GIN (json);
Using an extension
Or you can use the btree_gin extension.
btree_gin provides sample GIN operator classes that implement B-tree equivalent behavior for the data types int2, int4, int8, float4, float8, timestamp with time zone, timestamp without time zone, time with time zone, time without time zone, date, interval, oid, money, "char", varchar, text, bytea, bit, varbit, macaddr, inet, and cidr.
It looks like this:
CREATE EXTENSION btree_gin;
CREATE INDEX ON table1 USING gin (account, json);
Under normal circumstances, I'd likely use two indexes.
Assuming a mostly immutable set of ~ 100 currencies overall (you haven't been clear on that), and your given requirements, consider the simple approach: one table with one row per user and one column per currency. Like:
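A minimal sketch of such a table (the currency column names and the integer amounts are my assumptions; pick what fits your model):

```sql
CREATE TABLE wallet (
  user_id integer PRIMARY KEY
, usd     integer  -- one column per currency; NULL while unused
, eur     integer
, jpy     integer
  -- ... up to ~70 fixed currency columns
);
```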
This has a massively smaller disk footprint than either of your two options so far.
- 4 bytes per currency in use (with integer), plus 16 bytes for the NULL bitmap. NULL storage is very cheap.
- Your option 1 (jsonb) at least doubles the size per currency in use by storing a key name for every amount. It wins storage-wise only with very few currencies per user. Sums, calculations and indexing are slower and more complicated, and data integrity is hard to enforce.
- Your option 2 occupies ~ 44 bytes per currency (separate row). Very clean data model, flexible for adding / removing currencies on the fly, but it wastes a lot of space, which makes everything slow.
A lot of reads for the whole wallet are as simple as:
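Assuming a table wallet with one row per user (names are placeholders), such a read is just a single-row PK lookup:

```sql
SELECT * FROM wallet WHERE user_id = $1;
```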
You only need an index on user_id, which is provided by the PK.
Getting the nightly sum of the total amount from all users for each currency is as simple and as fast as can be:
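For example, with per-currency columns usd, eur, jpy (assumed names), one sequential scan covers all users:

```sql
SELECT sum(usd) AS usd_total
     , sum(eur) AS eur_total
     , sum(jpy) AS jpy_total
FROM   wallet;
```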
No index for that.
If you have a couple of dozen currencies covering the lion's share of all entries, you could try a combined strategy: fixed columns for the regulars and a jsonb column for the rest. This combines minimum storage size with absolute flexibility - at the cost of more complicated queries and computations, as you have to combine both now. And much weaker means to enforce integrity.
I chose 70 currency columns to stay below the local optimum of 72 columns, before another 8 bytes are allocated for the NULL bitmap. A minor consideration. Choose a number that fits your data distribution.
Maintain a table of all allowed currencies - you do not want to search millions of rows to get the complete list. And use minimum-length key names in the jsonb column, like '{"A1":123}' (2 bytes for the key), so as not to waste GB of storage repeating lengthy names over and over.