For the sake of simplicity, triggers are the way to go for implementing any kind of tracking of database changes. However, you need to be aware of what happens under the hood when you use triggers.
According to MySQL Stored Procedure Programming, page 256, under the heading "Trigger Overhead":
It is important to remember that, by necessity, triggers add overhead
to the DML statement to which they apply. The actual amount of overhead
will depend upon the nature of the trigger, but --- as all MySQL
triggers execute FOR EACH ROW --- the overhead can rapidly accumulate
for statements that process large numbers of rows. You should
therefore avoid placing any expensive SQL statements or procedural
code in triggers.
An expanded explanation of trigger overhead is given on pages 529-531. The concluding point from that section states the following:
The lesson here is this: since the trigger code will execute once
for every row affected by a DML statement, the trigger can easily
become the most significant factor in DML performance. Code inside the
trigger body needs to be as lightweight as possible and -- in
particular -- any SQL statements in the trigger should be supported by
indexes whenever possible.
Not mentioned in the book is another factor to consider when using triggers: when it comes to audit logging, be aware of what you log data into. I say this because, should you choose to log to a MyISAM table, each INSERT into that table acquires a full table lock for the duration of the INSERT. This can become a serious bottleneck in a high-traffic, high-transaction environment. Additionally, if the trigger fires on an InnoDB table but logs changes to a MyISAM table from within the trigger, it silently breaks ACID compliance (i.e., it reduces transactional blocks to autocommit behavior), and the logged writes cannot be rolled back.
When using triggers on InnoDB tables and logging changes, make sure that:
- The table you log to is also InnoDB
- You have autocommit turned off
- You set up START TRANSACTION ... COMMIT/ROLLBACK blocks properly
In this way, the audit logs benefit from COMMIT/ROLLBACK just as the main tables do.
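To make that concrete, here is a minimal sketch of the pattern; the products table and its id/price columns are hypothetical stand-ins:
-- Hypothetical audit table; ENGINE=InnoDB so its rows participate in transactions
CREATE TABLE products_audit (
    audit_id   BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    product_id INT NOT NULL,
    old_price  DECIMAL(10,2),
    new_price  DECIMAL(10,2),
    changed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB;

DELIMITER $$
CREATE TRIGGER products_au AFTER UPDATE ON products FOR EACH ROW
BEGIN
    INSERT INTO products_audit (product_id, old_price, new_price)
    VALUES (OLD.id, OLD.price, NEW.price);
END $$
DELIMITER ;

SET autocommit = 0;
START TRANSACTION;
UPDATE products SET price = price * 1.10 WHERE id = 42;
ROLLBACK;  -- rolls back the price change AND its audit row together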
As for using stored procedures instead, you would have to painstakingly call the procedure at every point of DML against the table being tracked, and it would be easy to miss logging a change somewhere in tens of thousands of lines of application code. Placing the logic in a trigger removes the need to find all those DML statements.
CAVEAT
Depending on how complex the trigger is, it can still be a bottleneck. If you want to reduce bottlenecks in audit logging, there is something you can do. However, it will require a little infrastructure change.
Using commodity hardware, create two more DB Servers
This will serve to reduce write I/O on the main database (MD) due to audit logging. Here is how you can accomplish it:
Step 01) Turn on binary logging in the main database.
Step 02) Using an inexpensive server, set up MySQL (same version as MD) with binary logging enabled. This will be DM. Set up replication from MD to DM.
Step 03) Using a second inexpensive server, set up MySQL (same version as MD) with binary logging disabled. Configure --replicate-do-table for each audit table. This will be AU. Set up replication from DM to AU.
Step 04) mysqldump the table structures from MD and load them into DM and AU.
Step 05) Convert all audit tables in MD to use the BLACKHOLE storage engine.
Step 06) Convert all tables in DM and AU to use the BLACKHOLE storage engine.
Step 07) Convert all audit tables in AU to use the MyISAM storage engine.
When done:
- DM will replicate from MD and record the changes in its binary log only
- With the --replicate-do-table filter on all the audit tables, AU will replicate from DM
What this does is store the audit data on a separate DB server while also reducing the write I/O degradation that MD would otherwise experience.
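As a rough sketch of Steps 05 through 07 (mydb and products_audit are placeholder names), the engine conversions and the AU filter look like this:
-- On MD: audit tables become BLACKHOLE, so their writes are discarded locally
-- but still recorded in MD's binary log for DM to replicate
ALTER TABLE mydb.products_audit ENGINE = BLACKHOLE;

-- On DM: every table is BLACKHOLE; DM merely relays events into its own binary log
ALTER TABLE mydb.products ENGINE = BLACKHOLE;
ALTER TABLE mydb.products_audit ENGINE = BLACKHOLE;

-- On AU: the audit tables are MyISAM and actually store the rows
ALTER TABLE mydb.products_audit ENGINE = MyISAM;

-- In AU's my.cnf, replicate only the audit tables from DM:
-- [mysqld]
-- replicate-do-table=mydb.products_audit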
If you want to get right down to it, that's not the best approach from a theoretical perspective, because you're copying data around in your database that you really should be deriving.
CREATE ALGORITHM=MERGE VIEW products_with_packaging_info AS
SELECT p.*,
       pt.width      AS packaging_width,
       pt.height     AS packaging_height,
       pt.weight     AS packaging_weight,
       pt.case_count AS packaging_case_count
  FROM PRODUCTS p
  JOIN PACK_TYPES pt ON pt.id = p.packaging_type;
Done. SELECT queries against this view work exactly the same as queries against either table individually, as long as every product has a pack type. Queries against this view can still take advantage of the indexes on the base tables, and there's no overhead from copying the attributes from one table to another, which always carries the potential for update anomalies.
You might even be surprised to find that the columns in the view can actually be updated as if it were a table, with updates propagating down into the base tables.
I offer this suggestion because a well-designed database should be such that it is impossible to get two different answers to the same question. For example, if a PACK_TYPES row is changed because an error is found, how do its new values propagate backwards into products?
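For instance, assuming the view above and a PRODUCTS table keyed on id (both hypothetical here), reads and writes go through the same object; note that MySQL lets an UPDATE against a join view modify columns of only one base table at a time:
-- Reads use the base tables' indexes, exactly like the underlying join
SELECT packaging_width, packaging_case_count
  FROM products_with_packaging_info
 WHERE id = 42;

-- Correcting a packaging attribute updates PACK_TYPES, so every product
-- sharing that pack type sees the fix at once, with no propagation needed
UPDATE products_with_packaging_info
   SET packaging_case_count = 24
 WHERE id = 42;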
But if you really want to take the trigger approach, that looks something like this:
DELIMITER $$

DROP TRIGGER IF EXISTS PRODUCTS_bu $$
CREATE TRIGGER PRODUCTS_bu BEFORE UPDATE ON PRODUCTS FOR EACH ROW
BEGIN
  IF NOT (NEW.packaging_type <=> OLD.packaging_type) THEN
    BEGIN
      DECLARE my_width INT DEFAULT NULL;      -- using
      DECLARE my_height INT DEFAULT NULL;     -- the
      DECLARE my_weight INT DEFAULT NULL;     -- appropriate
      DECLARE my_case_count INT DEFAULT NULL; -- data types here
      SELECT width, height, weight, case_count
        FROM PACK_TYPES
       WHERE id = NEW.packaging_type
        INTO my_width, my_height, my_weight, my_case_count;
      SET NEW.width = my_width,
          NEW.height = my_height,
          NEW.weight = my_weight,
          NEW.case_count = my_case_count;
    END;
  END IF;
END $$

DELIMITER ;
The <=> "spaceship" is the null-safe equality operator, which constrains "NOT [possibly null] = [possibly null]" to always be either TRUE or FALSE; this is needed because [possibly null] != [possibly null] will never be true if either expression is NULL. This is the case because, logically, "NOT (FALSE)" is "TRUE" while "NOT (NULL)" is "NULL."
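You can see the three-valued logic directly:
SELECT NULL =   NULL,      -- NULL (unknown), not TRUE
       NULL <=> NULL,      -- 1 (TRUE)
       1    <=> NULL,      -- 0 (FALSE)
       NOT (NULL = NULL);  -- NULL again, which is why a plain != test misses these rows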
I could have declared the variables at the beginning and avoided the inner BEGIN/END, but it seems optimal to avoid that work until we know we actually need to execute the inner logic, which is skipped whenever 'packaging_type' hasn't actually changed on a row for a given update query. Within a block, declarations have to precede other statements, so delaying the declarations requires the addition of the inner BEGIN/END.
You would also want a similar trigger for BEFORE INSERT, which would be identical except that you'd remove the 4 lines starting with IF ... BEGIN ... END ... END IF from the body of the procedure, use a new trigger name, and change BEFORE UPDATE to BEFORE INSERT.
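A sketch of that companion trigger, under the same caveat about choosing appropriate data types:
DELIMITER $$

DROP TRIGGER IF EXISTS PRODUCTS_bi $$
CREATE TRIGGER PRODUCTS_bi BEFORE INSERT ON PRODUCTS FOR EACH ROW
BEGIN
  DECLARE my_width INT DEFAULT NULL;
  DECLARE my_height INT DEFAULT NULL;
  DECLARE my_weight INT DEFAULT NULL;
  DECLARE my_case_count INT DEFAULT NULL;
  SELECT width, height, weight, case_count
    FROM PACK_TYPES
   WHERE id = NEW.packaging_type
    INTO my_width, my_height, my_weight, my_case_count;
  SET NEW.width = my_width,
      NEW.height = my_height,
      NEW.weight = my_weight,
      NEW.case_count = my_case_count;
END $$

DELIMITER ;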
It's BEFORE -- not AFTER -- in both cases, because the trigger fires BEFORE the newly-inserted or newly-updated row is written to the database.
Best Answer
Unfortunately, Dynamic SQL is not permitted in triggers. Nevertheless, what you are asking for is still possible; you will just have to apply some elbow grease.
I wrote an answer to the post Disable trigger for just one table. What I did was create a user variable @TRIGGER_DISABLED as a flag to control whether the code within the trigger was executed. In your particular case, you will need 9 flags, and you will need a BEFORE UPDATE trigger in addition to the AFTER UPDATE trigger, each gated on its flag. Just to be on the safe side, you should initialize these 9 user variables to 0.
That way, your session dictates the trigger's behavior.
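The code from that linked answer isn't reproduced here, but the gating pattern is straightforward; here is a minimal sketch with one flag (the table and columns are hypothetical):
DELIMITER $$
CREATE TRIGGER mytable_bu BEFORE UPDATE ON mytable FOR EACH ROW
BEGIN
  -- Run the real trigger logic only when this session has not raised the flag
  IF IFNULL(@TRIGGER_DISABLED, 0) = 0 THEN
    SET NEW.updated_at = NOW();  -- stand-in for the actual trigger body
  END IF;
END $$
DELIMITER ;

-- In a session that needs to bypass the trigger:
SET @TRIGGER_DISABLED = 1;
UPDATE mytable SET status = 'fixup' WHERE id = 1;  -- trigger body is skipped
SET @TRIGGER_DISABLED = 0;
Because user variables are scoped per session, one connection's flags never affect another's.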
Give it a Try !!!