Your trigger runs after the updates have occurred on the table. If you update the table again in the trigger, a new update is performed. See also the discussion about RECURSIVE_TRIGGERS.
If you want to run code before the update you will have to create an INSTEAD OF trigger.
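For illustration, here's a minimal sketch of that in T-SQL; the dbo.Jobs table and its JobId and Status columns are assumptions, not taken from your schema:

CREATE TRIGGER dbo.Jobs_InsteadOfUpdate
ON dbo.Jobs
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Run the "before" logic here, then apply the update yourself:
    -- an INSTEAD OF trigger replaces the original UPDATE statement,
    -- so nothing is written unless the trigger writes it.
    UPDATE j
       SET j.Status = i.Status
      FROM dbo.Jobs AS j
      JOIN inserted AS i ON i.JobId = j.JobId;
END;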
Whether having to do two updates instead of one becomes the bottleneck depends entirely on the efficiency of the update in your trigger, and of course on whether this UPDATE is already on the critical path. You probably have a primary key on the job id, the pages are hot in the buffer pool, and you can't have lock conflicts. So basically it comes down to a) how fast your log can accept writes and b) whether the version store (which serves the deleted and inserted pseudo-tables) can keep up.
If you ask me: such business logic status housekeeping belongs in the application, not in a trigger...
If you want to get right down to it, that's not the best approach from a theoretical perspective, because you're copying data around in your database that you really should be deriving.
CREATE ALGORITHM=MERGE VIEW products_with_packaging_info AS
SELECT p.*,
       pt.width      AS packaging_width,
       pt.height     AS packaging_height,
       pt.weight     AS packaging_weight,
       pt.case_count AS packaging_case_count
  FROM PRODUCTS p
  JOIN PACK_TYPES pt ON pt.id = p.packaging_type;
Done. SELECT queries against this view work exactly the same as queries against either table individually, as long as every product has a pack type. Queries against the view can still take advantage of the indexes on the base tables, and there's no overhead involved with copying the attributes from one table to another, which always has the potential for update anomalies.
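For example, a filter on a base-table column can still use that table's index through the view (the index on packaging_type and the literal value are assumptions):

SELECT *
  FROM products_with_packaging_info
 WHERE packaging_type = 3;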
You might even be surprised to find that the columns in the view can actually be updated as if it were a table, with updates propagating down into the base tables.
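A hedged sketch of what that looks like; the id column on PRODUCTS and the literal values are assumptions:

-- writes to PRODUCTS through the view
UPDATE products_with_packaging_info
   SET packaging_type = 3
 WHERE id = 42;

-- writes to PACK_TYPES through the view
UPDATE products_with_packaging_info
   SET packaging_weight = 1.25
 WHERE packaging_type = 3;

MySQL allows updates through a MERGE join view as long as any single statement modifies columns from only one of the base tables.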
I offer this suggestion because a well-designed database should be such that it is impossible to get two different answers to the same question. For example, if a PACK_TYPES row is changed because an error is found, how do its new values propagate backwards into products?
But if you really want to take the trigger approach, that looks something like this:
DELIMITER $$

DROP TRIGGER IF EXISTS PRODUCTS_bu $$

CREATE TRIGGER PRODUCTS_bu BEFORE UPDATE ON PRODUCTS FOR EACH ROW
BEGIN
  IF NOT (NEW.packaging_type <=> OLD.packaging_type) THEN
    BEGIN
      DECLARE my_width      INT DEFAULT NULL; -- using the
      DECLARE my_height     INT DEFAULT NULL; -- appropriate
      DECLARE my_weight     INT DEFAULT NULL; -- data types
      DECLARE my_case_count INT DEFAULT NULL; -- here

      SELECT width, height, weight, case_count
        FROM PACK_TYPES
       WHERE id = NEW.packaging_type
        INTO my_width, my_height, my_weight, my_case_count;

      SET NEW.width = my_width,
          NEW.height = my_height,
          NEW.weight = my_weight,
          NEW.case_count = my_case_count;
    END;
  END IF;
END $$

DELIMITER ;
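With that in place, a statement like the following (the id column and the literal values are hypothetical) causes the trigger to copy the matching PACK_TYPES attributes into the row before it is written:

UPDATE PRODUCTS
   SET packaging_type = 3
 WHERE id = 42;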
The <=> "spaceship" is the null-safe equality operator, which constrains "NOT [possibly null] = [possibly null]" to always be either TRUE or FALSE; this is needed because [possibly null] != [possibly null] will never be true if either expression is NULL. That is the case because, logically, "NOT (FALSE)" is "TRUE" while "NOT (NULL)" is "NULL."
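A quick illustration of the difference, with the results shown in comments:

SELECT NULL = NULL      AS equals,        -- NULL
       NULL <=> NULL    AS spaceship,     -- 1 (TRUE)
       NOT (NULL <=> 1) AS not_spaceship; -- 1 (TRUE)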
I could have declared the variables at the beginning and avoided the inner BEGIN/END, but it seems better to avoid that work until we know we actually need to execute the inner logic, which is skipped whenever packaging_type hasn't actually changed on a row for a given update. Within a block, declarations have to precede other statements, so delaying the declarations requires the addition of the inner BEGIN/END.
You would also want a similar trigger for BEFORE INSERT, which would be identical except that you'd remove the four lines starting with IF, BEGIN, END, and END IF from the body of the procedure, use a new trigger name, and change BEFORE UPDATE to BEFORE INSERT.
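That would come out looking something like this; a sketch derived from the trigger above, with PRODUCTS_bi as a suggested name:

DELIMITER $$

DROP TRIGGER IF EXISTS PRODUCTS_bi $$

CREATE TRIGGER PRODUCTS_bi BEFORE INSERT ON PRODUCTS FOR EACH ROW
BEGIN
  DECLARE my_width      INT DEFAULT NULL; -- again, using the
  DECLARE my_height     INT DEFAULT NULL; -- appropriate
  DECLARE my_weight     INT DEFAULT NULL; -- data types here
  DECLARE my_case_count INT DEFAULT NULL;

  SELECT width, height, weight, case_count
    FROM PACK_TYPES
   WHERE id = NEW.packaging_type
    INTO my_width, my_height, my_weight, my_case_count;

  SET NEW.width = my_width,
      NEW.height = my_height,
      NEW.weight = my_weight,
      NEW.case_count = my_case_count;
END $$

DELIMITER ;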
It's BEFORE, not AFTER, in both cases, because the trigger fires before the newly-inserted or newly-updated row is written to the database.
I'm not sure about the performance impact, but you can convert the row to jsonb, remove the field, and compare the resulting JSON objects.
So changing

WHEN (OLD.* IS DISTINCT FROM NEW.*)

to

WHEN (row_to_json(OLD)::jsonb - 'last_seen' IS DISTINCT FROM row_to_json(NEW)::jsonb - 'last_seen')

will do the job.

Also, if you want to modify NEW.updated, it should be a BEFORE UPDATE trigger, not an AFTER UPDATE trigger.
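Putting it together, here's a minimal sketch of the whole thing, assuming a sessions table with last_seen and updated columns and a trigger function named set_updated(); all of these names are hypothetical:

CREATE OR REPLACE FUNCTION set_updated() RETURNS trigger AS $$
BEGIN
    NEW.updated := now();  -- stamp the row before it is written
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sessions_set_updated
    BEFORE UPDATE ON sessions
    FOR EACH ROW
    -- fire only when something other than last_seen changed
    WHEN (row_to_json(OLD)::jsonb - 'last_seen'
          IS DISTINCT FROM
          row_to_json(NEW)::jsonb - 'last_seen')
    EXECUTE FUNCTION set_updated();  -- EXECUTE PROCEDURE on PostgreSQL < 11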