Order of source rows irrelevant, source is array column
Your updated requirements are much simpler for two reasons:
- We don't need to establish any order among source rows.
- The array itself already establishes the order of its elements.
Simple table:
CREATE TABLE vals (j int[]);
Simple query:
SELECT ARRAY(SELECT elem FROM vals, unnest(j) elem) AS arr;
Order of source rows relevant, sources are integer columns
Not as simple. Based on this table:
CREATE TABLE vals (i int, j int);
There are various ways to preserve order. I think this should be among the fastest, using an ARRAY constructor:
SELECT ARRAY(
SELECT t.elem
FROM (SELECT i, j FROM vals ORDER BY i, j) v
, LATERAL (VALUES (v.i), (v.j)) t(elem)
) AS arr;
My first draft had ORDER BY i, j after the LATERAL join, which would not guarantee the order of the two column values within each row. I pulled the ORDER BY into a subselect to produce the correct order. This works, but it is not documented behavior. To be absolutely sure, add an explicit ORDER BY later:
SELECT ARRAY (
SELECT elem
FROM vals v, LATERAL (VALUES (1, v.i), (2, v.j)) t(ordr, elem)
ORDER BY v.i, v.j, ordr
) AS arr;
Another way, this time with array_agg():
SELECT array_agg(elem) AS arr
FROM (
SELECT i, j, 1 AS ordr, i AS elem FROM vals
UNION ALL
SELECT i, j, 2 AS ordr, j AS elem FROM vals
ORDER BY i, j, ordr
) sub;
The ARRAY constructor is faster; array_agg() is easier to integrate with multiple result columns.
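To illustrate that last point: array_agg() drops straight into a grouped query alongside other result columns. A hypothetical sketch, assuming vals gains an extra grouping column grp (not part of the original table):

```sql
-- Hypothetical: grp is an assumed column, not in the original table.
-- The aggregate-level ORDER BY keeps i before j within each source row.
SELECT v.grp
     , array_agg(t.elem ORDER BY v.i, v.j, t.ordr) AS arr
FROM   vals v
     , LATERAL (VALUES (1, v.i), (2, v.j)) t(ordr, elem)
GROUP  BY v.grp;
```

The same shape with an ARRAY constructor would need one correlated subquery per group instead.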
The query in your test case would actually work in Postgres 9.5 (currently beta, to be released in the course of 2015). Quoting the release notes for pg 9.5:
Allow array_agg() and ARRAY() to take arrays as inputs (Ali Akbar, Tom Lane)
However, that yields a 2-dimensional array, not your desired flat result. You can implement the same (but slower) aggregate function in older versions:
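The definition itself is not spelled out above; a minimal sketch of such a polymorphic aggregate, built on array_cat() (the name array_agg_mult is my own), using the first table above (j int[]):

```sql
-- Polymorphic aggregate that concatenates input arrays with array_cat().
-- INITCOND = '{}' starts from an empty array.
CREATE AGGREGATE array_agg_mult (anyarray) (
    SFUNC    = array_cat
  , STYPE    = anyarray
  , INITCOND = '{}'
);

-- Wrap each input in an outer ARRAY[] to stack rows into a 2-D array
-- (mimicking array_agg() in 9.5), or pass the 1-D arrays directly to flatten:
SELECT array_agg_mult(ARRAY[j]) AS arr_2d    -- 2-dimensional result
     , array_agg_mult(j)        AS arr_flat  -- flat concatenation
FROM   vals;
```

Being a plain SQL-level aggregate over array_cat(), it is slower than the built-in C implementation in 9.5, but it works on older versions.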
Something like this should work:
update the_table
set data = x.data
from (
select t.id, array_agg(d.x order by d.idx) as data
from the_table t, unnest(data) with ordinality as d(x, idx)
group by t.id
) x
where x.id = the_table.id;
Sample psql output:
postgres=> create table the_table (id integer primary key, data double precision[]);
CREATE TABLE
postgres=> insert into the_table values
postgres-> (1, '{{2.20751909662576},{2.20679071024511},{2.20615506273571},{2.2055910715332},{2.20507756148068},{2.20459435596551},{2.20412336646322},{2.20364958013081}}'),
postgres-> (2, '{{2.19780662274576},{2.19740415504621},{2.19705997866427},{2.19677947299938},{2.1965673849541},{2.19642825612809},{2.1963668334181}}');
INSERT 0 2
postgres=> select *
postgres-> from the_table;
id | data
----+----------------------------------------------------------------------------------------------------------------------------------------------------------
1 | {{2.20751909662576},{2.20679071024511},{2.20615506273571},{2.2055910715332},{2.20507756148068},{2.20459435596551},{2.20412336646322},{2.20364958013081}}
2 | {{2.19780662274576},{2.19740415504621},{2.19705997866427},{2.19677947299938},{2.1965673849541},{2.19642825612809},{2.1963668334181}}
(2 rows)
postgres=> update the_table
postgres-> set data = x.data
postgres-> from (
postgres(> select t.id, array_agg(d.x order by d.idx) as data
postgres(> from the_table t, unnest(data) with ordinality as d(x, idx)
postgres(> group by t.id
postgres(> ) x
postgres-> where x.id = the_table.id;
UPDATE 2
postgres=> select *
postgres-> from the_table;
id | data
----+------------------------------------------------------------------------------------------------------------------------------------------
1 | {2.20751909662576,2.20679071024511,2.20615506273571,2.2055910715332,2.20507756148068,2.20459435596551,2.20412336646322,2.20364958013081}
2 | {2.19780662274576,2.19740415504621,2.19705997866427,2.19677947299938,2.1965673849541,2.19642825612809,2.1963668334181}
(2 rows)
Best Answer
You need to unnest the JSON array with jsonb_array_elements. You can do this in a correlated subquery (see the DB Fiddle for a complete demo):
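The fiddle itself is not reproduced here; a minimal sketch of the pattern, with the table tbl, its jsonb column js, and the numeric cast all assumed (the actual schema may differ). The _text variant of the function is used so the unnested elements cast cleanly:

```sql
-- Hypothetical schema: tbl(id int, js jsonb), js holding a JSON array of numbers.
-- jsonb_array_elements_text() unnests to text rows;
-- WITH ORDINALITY preserves the original element order.
SELECT t.id
     , (SELECT array_agg(e.elem::numeric ORDER BY e.ord)
        FROM   jsonb_array_elements_text(t.js) WITH ORDINALITY AS e(elem, ord)
       ) AS arr
FROM   tbl t;
```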