The error message says relation "model_cd201707" does not exist - so I suppose there is no table of that name in the search_path? There is also a confusion of table names in your question: modelsample..., model_cd..., model..., tb_moldsummary. Something is not right there ...
That aside, after assigning the passed parameter values to the variables model_cd and processdate, you are not using them at all (yet).
Assuming your 2 passed parameters are supposed to be used to build the table name, it might work like this:
CREATE OR REPLACE FUNCTION trg_insert_data_func()
  RETURNS TRIGGER AS
$func$
BEGIN
   EXECUTE format(
      $x$INSERT INTO tb_moldsummary AS t
      SELECT $1.machine
           , $1.model
           , split_part($1.lot, '_', 1)
           , right($1.lot, position('_' IN lot) * -1)
           , COUNT(*)  -- never null in this query
           , $1.datetimestamp
      FROM   %I
      WHERE  lot = $1.lot
      AND    machine = $1.machine
      GROUP  BY machine, model, lot
      ON     CONFLICT ON CONSTRAINT tb_summary_unique  -- constraint has same name?
      DO UPDATE
      SET    machine = $1.machine
           , totalshots = t.totalshots + 1
           , datetimestamp = $1.datetimestamp
      $x$, TG_ARGV[0])
   USING NEW;

   RETURN NEW;
END
$func$ LANGUAGE plpgsql;
Trigger (passing a single concatenated parameter as table name):
DO
$$
BEGIN
   EXECUTE format(
      'CREATE TRIGGER insert_data_trigger
       AFTER INSERT ON %1$s
       FOR EACH ROW EXECUTE PROCEDURE trg_insert_data_func(%1$L)'
    , 'modelsample' || to_char(now(), 'YYYYMM'));
END
$$;
Note how I use %1$s for the (unquoted) table name in the SQL statement and %1$L for the parameter (quoted as string literal). %I is not needed in this case for a known legal identifier. See the manual on format() for details.
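For illustration: if now() falls in July 2017, the format() call in the DO block above produces this statement (the table name is unquoted where %1$s was used, and quoted as a string literal where %1$L was used):

```sql
CREATE TRIGGER insert_data_trigger
AFTER INSERT ON modelsample201707
FOR EACH ROW EXECUTE PROCEDURE trg_insert_data_func('modelsample201707')
```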
You may want to schema-qualify function and table names to be unambiguous. Like public.modelsample.
Be aware that the date component built into the trigger definition is immutable once created. Only the creating statement itself (and the trigger function) are "dynamic".
But that seems like needless complication. There is no need to pass the name of the table for which the trigger is called: some special variables are available automatically in plpgsql trigger functions, among others TG_TABLE_SCHEMA and TG_TABLE_NAME. So use dynamic SQL in the trigger function, but the trigger itself can be simple and static:
CREATE OR REPLACE FUNCTION trg_insert_data_func()
  RETURNS TRIGGER AS
$func$
BEGIN
   EXECUTE format(
      $x$INSERT INTO tb_moldsummary AS t
      SELECT $1.machine
           , $1.model
           , split_part($1.lot, '_', 1)
           , right($1.lot, position('_' IN lot) * -1)
           , COUNT(*)  -- never null in this query
           , $1.datetimestamp
      FROM   %I.%I  -- !!
      WHERE  lot = $1.lot
      AND    machine = $1.machine
      GROUP  BY machine, model, lot
      ON     CONFLICT ON CONSTRAINT tb_summary_unique  -- constraint has same name?
      DO UPDATE
      SET    machine = $1.machine
           , totalshots = t.totalshots + 1
           , datetimestamp = $1.datetimestamp
      $x$, TG_TABLE_SCHEMA, TG_TABLE_NAME)  -- !!
   USING NEW;

   RETURN NEW;
END
$func$ LANGUAGE plpgsql;
And no parameters in the trigger itself.
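With this variant the trigger definition is trivial - something like this (using the table name from the example above; substitute your actual table):

```sql
CREATE TRIGGER insert_data_trigger
AFTER INSERT ON modelsample201707
FOR EACH ROW EXECUTE PROCEDURE trg_insert_data_func();
```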
Took a bit of figuring out (I'm more used to PostgreSQL, where things are much easier!), but if you consult the fine manual here, under 12.16.2 Functions That Create JSON Values, there's the JSON_ARRAY function - though it's not much use really, at least in this case!
To answer the question "select and it return JSON", there are two ways of doing this, both rather painful!
You can either
use a "hack" - see the db-fiddle here,
or use one of the new MySQL supplied JSON functions here - which, ironically, appears to be even more of a hack than the hack itself! Only with MySQL! :-) (fiddle here).
Both answers use the MySQL GROUP_CONCAT function - this post helped. You might want to set the group_concat_max_len system variable to a bit more than its default (a paltry 1024)!
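For example, to raise the limit for the current session (the value here is just an illustration - pick one large enough for your expected result size):

```sql
SET SESSION group_concat_max_len = 1000000;
```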
The first query is, as you can imagine, messy (DDL and DML at the bottom of this answer):
SELECT CONCAT('[', better_result, ']') AS best_result
FROM
(
  SELECT GROUP_CONCAT('{', my_json, '}' SEPARATOR ',') AS better_result
  FROM
  (
    SELECT CONCAT
    (
      '"name_field":'   , '"', name_field   , '"', ','
      '"address_field":', '"', address_field, '"', ','
      '"contact_age":'  , contact_age
    ) AS my_json
    FROM contact
  ) AS more_json
) AS yet_more_json;
Result:
[{"name_field":"Mary","address_field":"address one","contact_age":25},{"name_field":"Fred","address_field":"address two","contact_age":35},{"name_field":"Bill","address_field":"address three","contact_age":47}]
which is correct, but, let's face it, a bit of a nightmare!
Then there's the MySQL JSON_ARRAY() approach (which is even messier - thanks, MySQL, for your (non-existent) implementation of the TRANSLATE() function!):
SELECT CONCAT
(
  '[', REPLACE
  (
    REPLACE
    (
      GROUP_CONCAT
      (
        JSON_ARRAY
        (
          'name_field:',    name_field,
          'address_field:', address_field,
          'age_field:',     contact_age
        ) SEPARATOR ','
      ), '[', '{'
    ), ']', '}'
  ), ']'
) AS best_result2
FROM contact;
Same result!
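As an aside: if your MySQL version is 5.7.22 or later, the aggregate function JSON_ARRAYAGG() combined with JSON_OBJECT() should produce a proper JSON result without any of the string surgery above - worth checking before resorting to either hack:

```sql
SELECT JSON_ARRAYAGG(
         JSON_OBJECT(
           'name_field',    name_field,
           'address_field', address_field,
           'contact_age',   contact_age
         )
       ) AS best_result3
FROM contact;
```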
==== TABLE CREATION and INSERT DDL and DML ============
CREATE TABLE contact
(
name_field VARCHAR (5) NOT NULL,
address_field VARCHAR (20) NOT NULL,
contact_age INTEGER NOT NULL
);
INSERT INTO contact
VALUES
('Mary', 'address one', 25),
('Fred', 'address two', 35),
('Bill', 'address three', 47);
Best Answer
Immediate problem: type json

There are no equality or inequality operators defined for the Postgres data type json. Hence row comparisons involving a json column are also bound to fail - with notable exceptions, see below. There are (imperfect!) workarounds, but don't bother: switch to jsonb, which allows these checks.

Fix function & trigger
You also cannot compare OLD and NEW for INSERT operations, where there is no OLD - only for UPDATE or DELETE. I suggest a column default for modified_at, or a separate trigger function and trigger BEFORE INSERT to overwrite any input with now() unconditionally. And this function & trigger for UPDATE:

db<>fiddle here (Postgres 9.6)
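As a rough sketch of that function & trigger (the names tbl and modified_at are placeholders - adapt to your schema; this assumes the json column has been converted to jsonb, so the row comparison is legal):

```sql
CREATE OR REPLACE FUNCTION trg_tbl_touch_modified_at()
  RETURNS trigger AS
$func$
BEGIN
   NEW.modified_at := now();  -- overwrite any input unconditionally
   RETURN NEW;
END
$func$ LANGUAGE plpgsql;

CREATE TRIGGER tbl_touch_modified_at
BEFORE UPDATE ON tbl
FOR EACH ROW
WHEN (OLD IS DISTINCT FROM NEW)  -- skip the trigger (and the write) for no-op updates
EXECUTE PROCEDURE trg_tbl_touch_modified_at();
```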
Nothing different in Postgres 13: db<>fiddle here
RETURN NULL; is optional. If the row has not changed at all (and since we are running the trigger function already anyway), you might as well skip the UPDATE for the row: that saves the considerable cost of writing a new row version in vain. If other triggers depend on it, or for other exotic corner cases, you may still want to go through with it; then RETURN NEW; - unconditionally at the end of the trigger function. RETURN OLD; would have no benefit after we have established that both row values are identical.

Row comparison with json column can still work?

Oddly, yes. Row comparison is processed left-to-right. For the expression NEW IS DISTINCT FROM OLD, Postgres can return true immediately on finding the first mismatch (before encountering the problem with json). This may or may not happen, depending on the code path. Quoting the manual on row-wise comparison: