If you want to get right down to it, that's not the best approach from a theoretical perspective, because you're copying data around in your database that you really should be deriving.
CREATE ALGORITHM=MERGE VIEW products_with_packaging_info AS
SELECT p.*,
       pt.width AS packaging_width,
       pt.height AS packaging_height,
       pt.weight AS packaging_weight,
       pt.case_count AS packaging_case_count
  FROM PRODUCTS p
  JOIN PACK_TYPES pt ON pt.id = p.packaging_type;
Done. SELECT queries against this view work exactly the same as queries against either table individually, as long as every product has a pack type (use a LEFT JOIN in the view definition if some products might not). Queries against this view can still take advantage of the indexes on the base tables, and there's no overhead involved with copying the attributes from one table to another, which always has the potential for update anomalies.
You might even be surprised to find that the columns in the view can actually be updated as if it were a table, with updates propagating down into the base tables.
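For example, both of these work against the view as if it were a table (a sketch, using the column names defined above; the literal values are hypothetical):

```sql
-- Reads look just like reads against a single table:
SELECT packaging_width, packaging_weight
  FROM products_with_packaging_info
 WHERE packaging_type = 7;

-- With ALGORITHM=MERGE, updates that touch only one base table
-- propagate down; this writes to PACK_TYPES.case_count:
UPDATE products_with_packaging_info
   SET packaging_case_count = 24
 WHERE packaging_type = 7;
```

Note that MySQL only allows an update through a join view when all the modified columns belong to a single base table.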
I offer this suggestion because in a well-designed database it should be impossible to get two different answers to the same question. For example, if a PACK_TYPES row is changed because an error is found, how do its new values propagate backwards into the copies stored in PRODUCTS?
But if you really want to take the trigger approach, that looks something like this:
DELIMITER $$

DROP TRIGGER IF EXISTS PRODUCTS_bu $$

CREATE TRIGGER PRODUCTS_bu BEFORE UPDATE ON PRODUCTS FOR EACH ROW
BEGIN
  IF NOT (NEW.packaging_type <=> OLD.packaging_type) THEN
    BEGIN
      DECLARE my_width INT DEFAULT NULL;        -- using
      DECLARE my_height INT DEFAULT NULL;       -- the
      DECLARE my_weight INT DEFAULT NULL;       -- appropriate
      DECLARE my_case_count INT DEFAULT NULL;   -- data types here

      SELECT width, height, weight, case_count
        FROM PACK_TYPES
       WHERE id = NEW.packaging_type
        INTO my_width, my_height, my_weight, my_case_count;

      SET NEW.width = my_width,
          NEW.height = my_height,
          NEW.weight = my_weight,
          NEW.case_count = my_case_count;
    END;
  END IF;
END $$

DELIMITER ;
The <=> "spaceship" is the null-safe equality operator, which constrains "NOT [possibly null] = [possibly null]" to always be either TRUE or FALSE; this is needed because "[possibly null] != [possibly null]" will never be true if either expression is NULL. That is because, logically, "NOT (FALSE)" is "TRUE" while "NOT (NULL)" is still "NULL", so the IF body would never run.
I could have declared the variables at the beginning and avoided the inner BEGIN/END, but it seems optimal to avoid that work until we know we actually need to execute the inner logic, which is skipped whenever packaging_type hasn't actually changed on a row for a given UPDATE query. Within a block, declarations have to precede other statements, so delaying the declarations requires the addition of the inner BEGIN/END.
You would also want a similar trigger for BEFORE INSERT, which would be identical except that you'd remove the four lines starting with IF ... BEGIN ... END ... END IF from the body of the procedure, use a new trigger name, and change BEFORE UPDATE to BEFORE INSERT.
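Applying those changes, the insert trigger might look like this (a sketch; the trigger name PRODUCTS_bi is my own choice, and as above you'd use the appropriate data types):

```sql
DELIMITER $$

DROP TRIGGER IF EXISTS PRODUCTS_bi $$

CREATE TRIGGER PRODUCTS_bi BEFORE INSERT ON PRODUCTS FOR EACH ROW
BEGIN
  -- no IF needed: a new row always needs its packaging attributes filled in
  DECLARE my_width INT DEFAULT NULL;
  DECLARE my_height INT DEFAULT NULL;
  DECLARE my_weight INT DEFAULT NULL;
  DECLARE my_case_count INT DEFAULT NULL;

  SELECT width, height, weight, case_count
    FROM PACK_TYPES
   WHERE id = NEW.packaging_type
    INTO my_width, my_height, my_weight, my_case_count;

  SET NEW.width = my_width,
      NEW.height = my_height,
      NEW.weight = my_weight,
      NEW.case_count = my_case_count;
END $$

DELIMITER ;
```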
It's BEFORE, not AFTER, in both cases, because the trigger fires before the newly-inserted or newly-updated row is written to the database; only a BEFORE trigger is allowed to modify the NEW values.
According to your question, the sample tables are as follows:
Table: ITEMS

Field's Name    Type
------------    ------
Serialno        Number (PK)
Qty             Number

Table: ORDERS

Field's Name    Type
------------    ------
Orderno         Number (PK)
Serialno        Number (PK)(FK)
Qty             Number
The following trigger may solve your problem.
CREATE OR REPLACE TRIGGER check_qty
AFTER INSERT OR UPDATE ON ORDERS
FOR EACH ROW
DECLARE
  item_qty ITEMS.qty%TYPE;
BEGIN
  SELECT qty
    INTO item_qty
    FROM ITEMS
   WHERE serialno = :NEW.serialno;

  IF (:NEW.qty > item_qty) THEN
    RAISE_APPLICATION_ERROR(-20250, 'QTY is greater than stock qty!');
  END IF;
END;
/
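With the trigger in place, an order that exceeds the stock quantity is rejected. A quick demonstration (the data values here are made up):

```sql
INSERT INTO ITEMS (Serialno, Qty) VALUES (1001, 5);

-- Succeeds: ordered qty is within stock
INSERT INTO ORDERS (Orderno, Serialno, Qty) VALUES (1, 1001, 3);

-- Fails with ORA-20250: QTY is greater than stock qty!
INSERT INTO ORDERS (Orderno, Serialno, Qty) VALUES (2, 1001, 9);
```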
It could go something like this: grab the old data into a temp table, insert it into the history table, then delete from the live table. I'll leave the error handling as an exercise for the reader.
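A minimal sketch of those three steps; all table and column names here (live_table, history_table, the id key, and the archived_before cutoff) are assumptions to be replaced with your own schema:

```sql
-- 1. Grab the rows to archive into a temp table
CREATE TEMPORARY TABLE old_rows AS
  SELECT *
    FROM live_table
   WHERE created_at < '2015-01-01';  -- hypothetical cutoff

-- 2. Copy them into the history table
INSERT INTO history_table
  SELECT * FROM old_rows;

-- 3. Remove them from the live table
DELETE FROM live_table
 WHERE id IN (SELECT id FROM old_rows);

DROP TEMPORARY TABLE old_rows;
```

Ideally steps 2 and 3 run inside a single transaction so a failure partway through can't leave the row in both tables or in neither.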