Your additional calculated value:
(select count(t1.num) from T t1)
is a scalar subquery, which is a dynamic rather than static expression. As such it's treated the same as a column as far as the aggregate is concerned, and needs to be included in the group by clause to avoid the ORA-00937: not a single-group group function error.
However, Oracle does not allow subqueries in the group by clause, and trying to include the scalar subquery and/or the whole case expression:
group by (case (select count(*) cnt from t t1) when 0 then 1 else 0 end)
just results in an ORA-22818: subquery expressions not allowed here error.
The only ways around this are to either convert your scalar subquery to an aggregate value like so:
max(case (select count(*) cnt from t t1) when 0 then 1 else 0 end)
or
(case max((select count(*) cnt from t t1)) when 0 then 1 else 0 end)
or rewrite your query to move the unaggregated scalar subquery out of the aggregated query:
select (case (select count(*) cnt from t t1) when 0 then 1 else 0 end) * sum
from (select sum(t3.num) sum from t t3) t2;
or precompute your scalar subquery so it can be used in the group by clause:
select case t1.cnt when 0 then 1 else 0 end * sum(t2.num)
from t t2
, (select count(*) cnt from t) t1
group by case t1.cnt when 0 then 1 else 0 end;
1st case
You seem to forget the valid_during range. As your third case suggests, there can be multiple entries per (rec_id, val), so you must select the right one:
UPDATE master m
SET valid_on = f_array_sort(m.valid_on || u.valid_on) -- sorted array, see below
FROM updates u
WHERE m.rec_id = u.rec_id
AND m.valid_during @> u.valid_on -- additional check
AND m.val = u.val
AND NOT m.valid_on @> ARRAY[u.valid_on];
I assume the whole possible date range is always covered for each existing rec_id, and that valid_during shall not overlap per rec_id, or you'd have to do more.
After installing the additional module btree_gist, add an exclusion constraint to rule out overlapping date ranges if you don't have one yet:
ALTER TABLE master ADD CONSTRAINT master_rec_id_valid_during_excl -- pick any constraint name
EXCLUDE USING gist (rec_id WITH =, valid_during WITH &&);  -- disallow overlap
The GiST index this is implemented with is also a perfect match for the query.
2nd / 3rd case
Assuming that every date range starts with the smallest date in the (now sorted!) array: lower(m.valid_during) = m.valid_on[1]. I would enforce that with a CHECK constraint.
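A minimal sketch of such a constraint (the constraint name is my invention; it assumes valid_on is a sorted date array and valid_during a daterange, as above):

```sql
-- enforce: the range starts at the first (smallest) array element
ALTER TABLE master ADD CONSTRAINT master_valid_during_starts_at_valid_on_1
CHECK (lower(valid_during) = valid_on[1]);
```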
Here we need to create one or two new rows:
In the 2nd case it is enough to shrink the range of the old row and insert one new row.
In the 3rd case we update the old row with the left half of array and range, insert the new row, and finally insert another row with the right half of array and range.
Helper functions
To keep it simple I introduce a new constraint: every array is sorted. Use this helper function:
-- sort array
CREATE OR REPLACE FUNCTION f_array_sort(anyarray)
RETURNS anyarray LANGUAGE sql IMMUTABLE AS
$$SELECT ARRAY (SELECT unnest($1) ORDER BY 1)$$;
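A quick usage example (the input values are mine, not from the original):

```sql
SELECT f_array_sort('{2013-03-01,2013-01-01,2013-02-01}'::date[]);
-- → {2013-01-01,2013-02-01,2013-03-01}
```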
I don't need your helper function arraymin() any more, but it could be simplified to:
CREATE OR REPLACE FUNCTION f_array_min(anyarray)
RETURNS anyelement LANGUAGE sql IMMUTABLE AS
$$SELECT min(a) FROM unnest($1) a$$;
Two more to get the left and right half of an array split at a given element:
-- split left array at given element
CREATE OR REPLACE FUNCTION f_array_left(anyarray, anyelement)
RETURNS anyarray LANGUAGE sql IMMUTABLE AS
$$SELECT ARRAY (SELECT * FROM unnest($1) a WHERE a < $2 ORDER BY 1)$$;
-- split right array at given element
CREATE OR REPLACE FUNCTION f_array_right(anyarray, anyelement)
RETURNS anyarray LANGUAGE sql IMMUTABLE AS
$$SELECT ARRAY (SELECT * FROM unnest($1) a WHERE a >= $2 ORDER BY 1)$$;
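For illustration (example values are mine), splitting at 2013-02-01. Note the asymmetry: the left half excludes the split element (a < $2), the right half includes it (a >= $2):

```sql
SELECT f_array_left ('{2013-01-01,2013-02-01,2013-03-01}'::date[], '2013-02-01')  -- → {2013-01-01}
     , f_array_right('{2013-01-01,2013-02-01,2013-03-01}'::date[], '2013-02-01'); -- → {2013-02-01,2013-03-01}
```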
Query
This does all the rest:
WITH u AS ( -- identify candidates
SELECT m.id, rec_id, m.val, m.valid_on, m.valid_during
, u.val AS u_val, u.valid_on AS u_valid_on
FROM master m
JOIN updates u USING (rec_id)
WHERE m.val <> u.val
AND m.valid_during @> u.valid_on
FOR UPDATE -- lock for update
)
, upd1 AS ( -- case 2: no overlap, no split
UPDATE master m -- shrink old row
SET valid_during = daterange(lower(u.valid_during), u.u_valid_on)
FROM u
WHERE u.id = m.id
AND u.u_valid_on > m.valid_on[array_upper(m.valid_on, 1)]
RETURNING m.id
)
, ins1 AS ( -- insert new row
INSERT INTO master (rec_id, val, valid_on, valid_during)
SELECT u.rec_id, u.u_val, ARRAY[u.u_valid_on]
, daterange(u.u_valid_on, upper(u.valid_during))
FROM upd1
JOIN u USING (id)
)
, upd2 AS ( -- case 3: overlap, need to split row
UPDATE master m -- shrink to first half
SET valid_during = daterange(lower(u.valid_during), u.u_valid_on)
, valid_on = f_array_left(u.valid_on, u.u_valid_on)
FROM u
LEFT JOIN upd1 USING (id)
WHERE upd1.id IS NULL -- all others
AND u.id = m.id
RETURNING m.id, f_array_right(u.valid_on, u.u_valid_on) AS arr_right
)
INSERT INTO master (rec_id, val, valid_on, valid_during)
-- new row
SELECT u.rec_id, u.u_val, ARRAY[u.u_valid_on]
, daterange(u.u_valid_on, upd2.arr_right[1])
FROM upd2
JOIN u USING (id)
UNION ALL -- second half of old row
SELECT u.rec_id, u.val, upd2.arr_right
, daterange(upd2.arr_right[1], upper(u.valid_during))
FROM upd2
JOIN u USING (id);
SQL Fiddle.
Notes
You need to understand the concept of data-modifying CTEs (writeable CTEs), before you touch this. Judging from the code you provided, you know your way around Postgres.
FOR UPDATE is to avoid race conditions with concurrent write access. If you are the only user writing to the tables, you don't need it.
I took a piece of paper and drew a timeline so as not to get lost in all of this.
Each row is only updated / inserted once, and operations are simple and roughly optimized. No expensive window functions. This should perform well. Much faster than your previous approach in any case.
It would be a bit less confusing if you'd use distinct column names for u.valid_on and m.valid_on, which are related but different things.
I compute the right half of the split array in the RETURNING clause of CTE upd2: f_array_right(u.valid_on, u.u_valid_on) AS arr_right, because I need it several times in the next step. This is a (legal) trick to save one more CTE.
As for solutions that don't involve unnesting the master table: you have to unnest the array valid_on either way, to split it, at least as long as it's not sorted. Also, your helper function arraymin() is already unnesting it anyway.
Best Answer
Your idea would be a pain to enforce under concurrent load. Instead, just keep adding new rows (INSERT only). There are simple and fast queries to get the current row(s) for each cat.
Make sure that updated_at is current. I added a column default and a trigger for that. To rule out duplicate entries for the same cat and the same timestamp, add a UNIQUE constraint. Or, to be absolutely sure, make that:
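A sketch of such a constraint (the table name cat_values and column names cat_id, updated_at are my assumptions, not from the original):

```sql
-- one row per cat and timestamp
ALTER TABLE cat_values ADD CONSTRAINT cat_values_cat_id_updated_at_uni
UNIQUE (cat_id, updated_at);
```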
The default value normally takes care of inserts cheaply, but it can be overruled by explicitly inserting a value. The trigger overrules it no matter what.
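A sketch of such a default plus trigger (all names are my assumptions; EXECUTE FUNCTION requires Postgres 11+, use EXECUTE PROCEDURE in older versions):

```sql
ALTER TABLE cat_values ALTER COLUMN updated_at SET DEFAULT now();

CREATE OR REPLACE FUNCTION trg_cat_values_set_updated_at()
  RETURNS trigger LANGUAGE plpgsql AS
$$
BEGIN
   NEW.updated_at := now();  -- overrule any explicitly inserted value
   RETURN NEW;
END
$$;

CREATE TRIGGER cat_values_set_updated_at
BEFORE INSERT ON cat_values
FOR EACH ROW EXECUTE FUNCTION trg_cat_values_set_updated_at();
```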
To get the "current" row for a given cat:
This is extremely fast while you have a matching index:
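A sketch of such a query and matching index (table and column names are my assumptions):

```sql
-- latest row for one given cat
SELECT *
FROM   cat_values
WHERE  cat_id = 1
ORDER  BY updated_at DESC
LIMIT  1;

-- matching multicolumn index
CREATE INDEX cat_values_cat_id_updated_at_idx ON cat_values (cat_id, updated_at DESC);
```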
In modern Postgres versions you could use a unique covering index to replace the index and UNIQUE constraint.
To get the last three (live) rows, use the same query with LIMIT 3.
From time to time (as your db load and schedule permit/require) delete deprecated rows:
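One possible cleanup, keeping only the latest row per cat (names are my assumptions; adapt the condition if you want to keep more history):

```sql
DELETE FROM cat_values c
WHERE  EXISTS (
   SELECT FROM cat_values c2
   WHERE  c2.cat_id = c.cat_id
   AND    c2.updated_at > c.updated_at  -- a newer row exists
   );
```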
db<>fiddle here