Other than the row data of the single row that is available inside the body of a BEFORE trigger (which you can only examine or modify; you can't make the row cease to exist, short of throwing an exception from inside the trigger), you can't manipulate rows in a table from within a trigger defined against that same table in MySQL. However, it seems this isn't your actual objective:
> I have a system which, because of a bug, keeps using INSERT / UPDATE on this table. I need this table to be empty all the time, or for every INSERT / UPDATE against it to be cancelled.
The problem you described here sounds like a job for the BLACKHOLE storage engine, which should be built in to just about any MySQL distribution. If `SHOW ENGINES;` lists `BLACKHOLE`, then you have this available.
If you provision this table to be a blackhole table, it will behave as you describe -- inserts "succeed" (don't return an error) but anything you insert into a table like this immediately disappears. The table otherwise interacts with queries as if it were just always empty.
> The BLACKHOLE storage engine acts as a “black hole” that accepts data but throws it away and does not store it. Retrievals always return an empty result.

- `INSERT INTO` a blackhole table appears to succeed, but whatever you inserted is immediately and automatically discarded.
- `SELECT` from a blackhole table always returns 0 rows.
- `DELETE` from a blackhole table always deletes 0 rows.
- `UPDATE` on a blackhole table always has 0 rows affected.
Your fix will be to eliminate the trigger you've been working on, and do this:
ALTER TABLE player_viplist ENGINE=BLACKHOLE;
Warning: this advice is based on my interpretation of your needs, and issuing that query will discard any current data in the `player_viplist` table. Do this only if that's what you want.
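After the ALTER, a quick sanity check might look like this (the `player_id` column is hypothetical; substitute whatever columns your table actually has):

```sql
-- Reports "1 row affected", but the row is thrown away:
INSERT INTO player_viplist (player_id) VALUES (1);

-- Always returns 0, no matter what was inserted:
SELECT COUNT(*) FROM player_viplist;
```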
If you want to get right down to it, that's not the best approach from a theoretical perspective, because you're copying data around in your database that you really should be deriving.
CREATE ALGORITHM=MERGE VIEW products_with_packaging_info AS
SELECT p.*,
pt.width as packaging_width,
pt.height as packaging_height,
pt.weight as packaging_weight,
pt.case_count AS packaging_case_count
FROM PRODUCTS p
JOIN PACK_TYPES pt ON pt.id = p.packaging_type;
Done. `SELECT` queries against this view work exactly the same as queries against either table individually, as long as every product has a pack type. Queries against this view can still take advantage of the indexes on the base tables, and there's no overhead involved with copying the attributes from one table to another, which always has the potential for update anomalies.
You might even be surprised to find that the columns in the view can actually be updated as if it were a table, with updates propagating down into the base tables.
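For example, assuming PRODUCTS has `id` and `name` columns (hypothetical names, for illustration only), this statement modifies the underlying PRODUCTS row through the view:

```sql
UPDATE products_with_packaging_info
   SET name = 'Widget, Large'
 WHERE id = 42;
```

Note that with a joined view, MySQL only allows a single statement to update columns belonging to one of the base tables at a time.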
I offer this suggestion because a well-designed database should be such that it is impossible to get two different answers to the same question. For example, if a PACK_TYPES row is changed because an error is found, how do its new values propagate backwards into products?
But if you really want to take the trigger approach, that looks something like this:
DELIMITER $$

DROP TRIGGER IF EXISTS PRODUCTS_bu $$

CREATE TRIGGER PRODUCTS_bu BEFORE UPDATE ON PRODUCTS FOR EACH ROW
BEGIN
  IF NOT (NEW.packaging_type <=> OLD.packaging_type) THEN
    BEGIN
      DECLARE my_width INT DEFAULT NULL;      -- using
      DECLARE my_height INT DEFAULT NULL;     -- the
      DECLARE my_weight INT DEFAULT NULL;     -- appropriate
      DECLARE my_case_count INT DEFAULT NULL; -- data types here

      SELECT width, height, weight, case_count
        FROM PACK_TYPES
       WHERE id = NEW.packaging_type
        INTO my_width, my_height, my_weight, my_case_count;

      SET NEW.width = my_width,
          NEW.height = my_height,
          NEW.weight = my_weight,
          NEW.case_count = my_case_count;
    END;
  END IF;
END $$

DELIMITER ;
The `<=>` "spaceship" is the null-safe equality operator, which constrains "NOT [possibly null] = [possibly null]" to always be either TRUE or FALSE; this is needed because [possibly null] != [possibly null] will never be true if either expression is NULL. This is the case because, logically, "NOT (FALSE)" is "TRUE" while "NOT (NULL)" is "NULL."
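A quick illustration of the three-valued logic involved (the comment on each expression is the value MySQL returns):

```sql
SELECT 1 = NULL         AS eq,      -- NULL
       NOT (1 = NULL)   AS not_eq,  -- NULL, so an IF on it would never fire
       1 <=> NULL       AS ns_eq,   -- 0 (FALSE)
       NOT (1 <=> NULL) AS ns_neq;  -- 1 (TRUE)
```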
I could have declared the variables at the beginning and avoided the inner BEGIN/END, but it seems optimal to avoid that work until we know we actually need to execute the inner logic in the first place, which is skipped whenever `packaging_type` hasn't actually changed on a row for a given update query. Within a block, declarations have to precede other statements, so delaying the declarations requires the addition of the inner BEGIN/END.
You would also want a similar trigger for BEFORE INSERT, which would be identical except you'd remove the 4 lines starting with IF, BEGIN, END, and END IF from the body of the procedure, use a new trigger name, and change BEFORE UPDATE to BEFORE INSERT.
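Applying those edits, the insert trigger would look roughly like this (the name PRODUCTS_bi is my choice; again, use the appropriate data types for the variables):

```sql
DELIMITER $$

DROP TRIGGER IF EXISTS PRODUCTS_bi $$

CREATE TRIGGER PRODUCTS_bi BEFORE INSERT ON PRODUCTS FOR EACH ROW
BEGIN
  DECLARE my_width INT DEFAULT NULL;
  DECLARE my_height INT DEFAULT NULL;
  DECLARE my_weight INT DEFAULT NULL;
  DECLARE my_case_count INT DEFAULT NULL;

  SELECT width, height, weight, case_count
    FROM PACK_TYPES
   WHERE id = NEW.packaging_type
    INTO my_width, my_height, my_weight, my_case_count;

  SET NEW.width = my_width,
      NEW.height = my_height,
      NEW.weight = my_weight,
      NEW.case_count = my_case_count;
END $$

DELIMITER ;
```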
It's BEFORE, not AFTER, in both cases, because the trigger fires before the newly-inserted or newly-updated row is written to the database.
Best Answer
You can also change values using `NEW`. You just have to use a BEFORE trigger, not an AFTER trigger.
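For example (table and column names here are hypothetical), a BEFORE trigger can assign directly to `NEW`:

```sql
CREATE TRIGGER orders_bi BEFORE INSERT ON orders FOR EACH ROW
  SET NEW.created_at = NOW();
```

An AFTER trigger cannot do this, because by then the row has already been written.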