At the scale you're working at, I think JSONB is ideal. It handles deeply nested structures and structures with array keys. It's also standardized: JSON support is part of the SQL:2016 spec.
In addition, as I answered here, there is an extension called ZSON that should help you with space consumption:
ZSON is a PostgreSQL extension for transparent JSONB compression. Compression is based on a shared dictionary of strings most frequently used in specific JSONB documents (not only keys, but also values, array elements, etc).
In some cases ZSON can save half of your disk space and give you about 10% more TPS. Memory is saved as well. See docs/benchmark.md. Everything depends on your data and workload, though. Don't believe any benchmarks, re-check everything on your data, configuration, hardware, workload and PostgreSQL version.
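As a rough sketch of how ZSON is typically enabled (the table and column names here are placeholders, and you should check the ZSON README for the exact workflow on your version):

```sql
CREATE EXTENSION zson;

-- Build the shared compression dictionary from existing data
SELECT zson_learn('{{"my_table", "my_jsonb_col"}}');

-- Store the column in the compressed zson type;
-- values still behave like jsonb when read
ALTER TABLE my_table
    ALTER COLUMN my_jsonb_col TYPE zson;
```

Re-learning the dictionary periodically can help if the shape of your documents changes over time.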
You may want to look into ZSON.
You can concatenate them:
```sql
set hstore_col = hstore('new_key_1', foo.new_value_1) || hstore('new_key_2', foo.new_value_2)
```
Alternatively use the constructor function that accepts two arrays:
```sql
set hstore_col = hstore(array['new_key_1', 'new_key_2'], array[foo.new_value_1, foo.new_value_2])
```
The second solution will be easier to extend when you have more keys than just two.
If you don't want to overwrite the current value, append the new pairs to the original one:
```sql
set hstore_col = hstore_col || hstore('new_key_1', foo.new_value_1) || hstore('new_key_2', foo.new_value_2)
```
or
```sql
set hstore_col = hstore_col || hstore(array['new_key_1', 'new_key_2'], array[foo.new_value_1, foo.new_value_2])
```
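Put together as a complete statement, an update that pulls the new values from foo might look like this (the table name my_table and the join condition on id are assumptions for illustration):

```sql
UPDATE my_table t
SET    hstore_col = t.hstore_col
                 || hstore(array['new_key_1', 'new_key_2']
                         , array[foo.new_value_1, foo.new_value_2])
FROM   foo
WHERE  foo.id = t.id;  -- join condition assumed
```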
Best Answer
This can be done, very efficiently, too. Not in a single statement, though, since SQL demands to know the return type at call time. So you need two steps. The solution involves a number of advanced techniques ...
Assuming the same table as @Denver in his answer:
Solution 1: Simple SELECT
After I wrote the crosstab solution below, it struck me that a simple "brute force" solution is probably faster: basically the query @Denver already posted, built dynamically.
Step 1a: Generate query
The subquery

(SELECT id, hstore_col AS h FROM hstore_test)

is just there to get the column alias h for your hstore column.

Step 1b: Execute query
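The generating query could be sketched like this, using skeys() to collect all distinct keys (names are taken from the example table; the exact statement may differ):

```sql
-- Build the final SELECT as a string, one quoted column per distinct key
SELECT 'SELECT id, '
    || string_agg('h -> ' || quote_literal(key)
                  || ' AS ' || quote_ident(key), ', ' ORDER BY key)
    || ' FROM (SELECT id, hstore_col AS h FROM hstore_test) t ORDER BY id;'
FROM  (SELECT DISTINCT skeys(hstore_col) AS key FROM hstore_test) s;
```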
This generates a query of the form:
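With two keys key1 and key2 (placeholders for whatever keys your data contains), the generated statement would look something like:

```sql
SELECT id
     , h -> 'key1' AS key1   -- one column per key
     , h -> 'key2' AS key2
FROM  (SELECT id, hstore_col AS h FROM hstore_test) t
ORDER  BY id;
```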
Result:
Solution 2: crosstab()
For lots of keys this may perform better. Probably not. You'll have to test. Result is the same as for solution 1.
You need the additional extension tablefunc, which provides the crosstab() function. Read this first if you are not familiar with it:

Step 2a: Generate query
Note the nested levels of dollar-quoting.
I use this explicit form in the main query instead of the short CROSS JOIN in the auxiliary query to preserve rows with empty or NULL hstore values:
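To illustrate the difference between the two join forms (a sketch against the same example table):

```sql
-- Short form: the implicit CROSS JOIN to the set-returning each()
-- silently drops rows whose hstore is empty or NULL
SELECT id, key, value
FROM   hstore_test, each(hstore_col);

-- Explicit form: LEFT JOIN LATERAL ... ON true preserves those rows,
-- yielding NULL for key and value instead of dropping the row
SELECT id, key, value
FROM   hstore_test t
LEFT   JOIN LATERAL each(t.hstore_col) kv(key, value) ON true;
```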
Step 2b: Execute query
This generates a query of the form:
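The generated statement would take roughly this shape (key1 and key2 again stand in for your actual keys):

```sql
SELECT *
FROM   crosstab(
   $$SELECT id, key, value
     FROM   hstore_test t
     LEFT   JOIN LATERAL each(t.hstore_col) kv(key, value) ON true
     ORDER  BY 1$$                       -- source rows: (id, key, value)
 , $$VALUES ('key1'), ('key2')$$         -- fixed list of category keys
) AS ct (id int, key1 text, key2 text);  -- column definition list
```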
You may want to inspect it for plausibility before running it the first time. This approach should deliver good performance.
Notes
Both solutions work for any number of keys up to the physical limit of ~ 1600 columns in a Postgres table.
Both also work for keys of any shape or form, up to the maximum length for identifiers, which is 63 bytes by default.
Besides the hstore function each() that was already mentioned by s.m., I also use the related function skeys() to identify keys.

Be sure to quote column names correctly to avoid possible SQL injection attacks by way of maliciously formed key names. I take care of that with quote_literal() and quote_ident().
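A quick illustration of the two quoting functions:

```sql
-- quote_ident() double-quotes the identifier and escapes
-- any embedded double quotes, so a hostile key name cannot
-- break out of its column-name position
SELECT quote_ident('evil"key');

-- quote_literal() safely quotes a string for use as a literal,
-- doubling embedded single quotes
SELECT quote_literal('O''Reilly');
```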