You can simplify further:
SELECT date_trunc('month', c.created_at) AS year_month
     , p.reference
     , count(DISTINCT c.id) AS nb_crea
     , sum(o.amount_ati_cents) / 100 AS ca_euros
from creations c
join products p on p.id = c.product_id
join order_items i on i.creation_id = c.id
join bundles b on b.id = i.bundle_id
join orders o on o.id = b.order_id
where o.status NOT IN ('canceled', 'new')
and c.name like '% - Premium(%'
and p.reference like 'FLA_%'
GROUP BY year_month, reference
ORDER BY year_month DESC;
Or even:
SELECT to_char(c.created_at, 'YYYY-MM') AS year_month
...
GROUP BY 1, reference
ORDER BY 1 DESC;
See date_trunc() and to_char() in the manual.
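The two variants group the same way per month, but their output types differ: date_trunc() returns a timestamp (truncated to the first of the month), while to_char() returns text. A rough Python analogy of the two expressions:

```python
from datetime import datetime

ts = datetime(2023, 5, 17, 14, 30)  # example timestamp

# date_trunc('month', ts) keeps a timestamp, cut back to the month start:
trunc = ts.replace(day=1, hour=0, minute=0, second=0, microsecond=0)

# to_char(ts, 'YYYY-MM') yields a text label instead:
label = ts.strftime('%Y-%m')

print(trunc)  # 2023-05-01 00:00:00
print(label)  # 2023-05
```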
This is not valid JSON; strings require double quotes.
Anyway, the table-valued function json_each() lets you enumerate a dynamically-sized array:
SELECT * FROM json_each('[["c2","c1","clone"],["c1","n0","clone"],["n0","p0","offspring"],["p0","a2","offspring"],["a2","a1","offspring"],["a1",null,null]]');
key  value                    type   atom  id  parent  fullkey  path
---  -----------------------  -----  ----  --  ------  -------  ----
0    ["c2","c1","clone"]      array        1           $[0]     $
1    ["c1","n0","clone"]      array        5           $[1]     $
2    ["n0","p0","offspring"]  array        9           $[2]     $
3    ["p0","a2","offspring"]  array        13          $[3]     $
4    ["a2","a1","offspring"]  array        17          $[4]     $
5    ["a1",null,null]         array        21          $[5]     $
For extracting the values from an array with a known size, we can use json_extract():
SELECT json_extract(value, '$[0]'), json_extract(value, '$[1]'), json_extract(value, '$[2]')
FROM json_each('[["c2","c1","clone"],["c1","n0","clone"],["n0","p0","offspring"],["p0","a2","offspring"],["a2","a1","offspring"],["a1",null,null]]');
c2 c1 clone
c1 n0 clone
n0 p0 offspring
p0 a2 offspring
a2 a1 offspring
a1
This can then simply be plugged into an INSERT … SELECT … statement.
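The same pattern can be exercised from Python's built-in sqlite3 module; the edges table and its column names here are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE edges (child TEXT, parent TEXT, relation TEXT)")

pairs = ('[["c2","c1","clone"],["c1","n0","clone"],["n0","p0","offspring"],'
         '["p0","a2","offspring"],["a2","a1","offspring"],["a1",null,null]]')

# INSERT ... SELECT straight from json_each(): one table row per array element
con.execute(
    """
    INSERT INTO edges (child, parent, relation)
    SELECT json_extract(value, '$[0]'),
           json_extract(value, '$[1]'),
           json_extract(value, '$[2]')
    FROM json_each(?)
    """,
    (pairs,),
)

rows = con.execute("SELECT * FROM edges").fetchall()
print(rows)  # first row: ('c2', 'c1', 'clone'); JSON nulls arrive as None
```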
You can remove given keys with the - operator (the inverse of what you have in mind). But I know of no built-in function or operator doing what you ask for in current Postgres 13.
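For illustration of the - operator (my own example values, not from the question):

```sql
-- jsonb - text[] removes the listed keys (Postgres 10+)
SELECT '{"a":1, "b":2, "d":4, "e":5}'::jsonb - '{a,d,e}'::text[];
-- → {"b": 2}
```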
Remarkably, it's the other way round with SQL SELECT statements. There you get a positive list with SELECT a, d, e FROM tbl, but you cannot simply get "all columns except [a,d,e]" the way you can get "all keys except [a,d,e]" from a JSON object with the above operator. I would love to have the complementary feature for each.

I also can't think of an easy way to do this with the SQL/JSON path language (Postgres 12+).
Workaround for jsonb
You can create a simple function like:
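The function body is not shown in this excerpt. Based on the description that follows (jsonb_object_agg() over jsonb_each(), filtering with = ANY, declared STRICT and IMMUTABLE), it plausibly looks something like this; the function name is my placeholder:

```sql
CREATE OR REPLACE FUNCTION jsonb_pick(js jsonb, keys text[])
  RETURNS jsonb
  LANGUAGE sql IMMUTABLE STRICT AS
$func$
SELECT jsonb_object_agg(t.key, t.value)
FROM   jsonb_each(js) t
WHERE  t.key = ANY (keys)
$func$;

-- Hypothetical call:
-- SELECT jsonb_pick('{"a":1, "b":2, "d":4}'::jsonb, '{a,d,e}'::text[]);
-- → {"a": 1, "d": 4}
```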
db<>fiddle here
Call it with a jsonb value and a text array of keys; you get back a jsonb object reduced to exactly those keys (demonstrated in the fiddle).
You get NULL for NULL or empty input for either argument (as demonstrated in the fiddle). So I declared the function STRICT (a.k.a. RETURNS NULL ON NULL INPUT), even though the nested function jsonb_object_agg() is not STRICT. That could mess with function inlining, but since this one cannot be inlined anyway (it contains an aggregate function), we might as well.

IMMUTABLE is only kind of true. Keep reading.

Order of keys?
jsonb does not preserve the order of keys (keys are ordered in a deterministic fashion internally), so we need not bother about the order of input rows in the aggregation. Except when there can be duplicates: then the latest copy from the input prevails. Per the manual, duplicate object keys are resolved by keeping only the last value.

Meaning: if there can be duplicate keys in the input, the order of rows is significant. My function does not explicitly order rows, so the result would not strictly be IMMUTABLE. But the input comes directly from jsonb_each(), and there cannot be duplicate keys in jsonb (unlike json). Possible duplicates in the input array of keys are irrelevant to the = ANY construct. So duplicates can never happen: IMMUTABLE after all.

Again, explicit ordering would prohibit function inlining - if this function could be inlined to begin with.
Equivalent for json

For completeness: the same for json instead of jsonb.

The first variant does not try to preserve the given order of keys, and it eliminates possible duplicates in the input array of keys (but not possible duplicates from the json input!).

The second variant preserves the given order of keys (using WITH ORDINALITY) and keeps all possible duplicates. (If the same key appears 2x in the json input, and 3x in the input array of keys, you get it 6x in the result.)

I made the json functions STABLE because json_object_agg() is only STABLE, as opposed to jsonb_object_agg(), which is IMMUTABLE.
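Under the same caveat as above (body and name are my assumption, not the original), the order-preserving json variant could combine json_each() with unnest() ... WITH ORDINALITY and an ORDER BY inside the aggregate:

```sql
CREATE OR REPLACE FUNCTION json_pick_ordered(js json, keys text[])
  RETURNS json
  LANGUAGE sql STABLE STRICT AS
$func$
SELECT json_object_agg(t.key, t.value ORDER BY k.ord)
FROM   json_each(js) t
JOIN   unnest(keys) WITH ORDINALITY k(key, ord) ON k.key = t.key
$func$;
```

The join multiplies duplicates from both sides (hence 2x in the input times 3x in the key array yields 6x in the result), and ORDER BY k.ord makes the aggregate follow the order of the key array.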
You could use json_build_object('a', js->'a', 'd', js->'d', 'e', js->'e'), but that would include all keys, with a NULL value where a key is not found. Not exactly your request. And you couldn't tell the difference between a missing key and the same key with an actual NULL value.