MySQL – Performance of a Trigger vs. Stored Procedure in MySQL

MySQL, performance, stored-procedures, trigger

A post here on DBA.StackExchange (What are the best practices for triggers to maintain a revision number on records?) has spawned an interesting question (at least, interesting to me) regarding performance in MySQL.

The context is that we want to insert a record in a table for each row that is updated. Before the row is updated, we want to store off the previous values and then increment one of the columns (a "version" column).

If we do this inside a trigger, it works out nicely. In MySQL, triggers fire row by row, so it would be an easy solution: select the data currently in the table, insert it into the logging table, and then update the "version" column in the new data.
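As a minimal sketch, assuming a hypothetical table mytable (id, data, version) with a logging table mytable_log that mirrors its columns, the trigger could look like this:

    DELIMITER $$
    CREATE TRIGGER mytable_before_update
    BEFORE UPDATE ON mytable
    FOR EACH ROW
    BEGIN
        -- copy the pre-update row into the logging table
        INSERT INTO mytable_log (id, data, version, changed_at)
        VALUES (OLD.id, OLD.data, OLD.version, NOW());
        -- increment the "version" column on the row being written
        SET NEW.version = OLD.version + 1;
    END$$
    DELIMITER ;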

However, it's possible to move this logic to a stored procedure. If you do that, you perform the insert and then increment the "version" column in the table. The entire thing would be set-based.
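A sketch of the same logic as a stored procedure, using the same hypothetical tables (shown for a single id parameter for brevity):

    DELIMITER $$
    CREATE PROCEDURE update_mytable (IN p_id INT, IN p_data VARCHAR(255))
    BEGIN
        -- log the current row first ...
        INSERT INTO mytable_log (id, data, version, changed_at)
        SELECT id, data, version, NOW()
          FROM mytable
         WHERE id = p_id;

        -- ... then perform the set-based update, incrementing "version"
        UPDATE mytable
           SET data = p_data,
               version = version + 1
         WHERE id = p_id;
    END$$
    DELIMITER ;

The application would then issue CALL update_mytable(42, 'new value'); instead of a plain UPDATE.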

So, when it comes to performing this insert, would it be more performant to use the set-based stored-procedure approach or the trigger-based approach?

This question is for MySQL (since it has row-by-row triggers), although it could apply to other DBMSs with row-by-row triggers.

Best Answer

For the sake of simplicity, triggers are the way to go for implementing any kind of tracking of database changes. However, you need to be aware of what happens under the hood when you use triggers.

MySQL Stored Procedure Programming, page 256, under the heading "Trigger Overhead", says the following:

It is important to remember that, by necessity, triggers add overhead to the DML statement to which they apply. The actual amount of overhead will depend upon the nature of the trigger, but -- as all MySQL triggers execute FOR EACH ROW -- the overhead can rapidly accumulate for statements that process large numbers of rows. You should therefore avoid placing any expensive SQL statements or procedural code in triggers.

An expanded explanation of trigger overhead is given on pages 529-531. The concluding point from that section states the following:

The lesson here is this: since the trigger code will execute once for every row affected by a DML statement, the trigger can easily become the most significant factor in DML performance. Code inside the trigger body needs to be as lightweight as possible and -- in particular -- any SQL statements in the trigger should be supported by indexes whenever possible.

Not mentioned in the book is another factor to weigh when using triggers: when it comes to audit logging, be aware of what you log data into. I say this because should you choose to log to a MyISAM table, each INSERT into a MyISAM table holds a full table lock for the duration of the INSERT. This can become a serious bottleneck in a high-traffic, high-transaction environment. Additionally, if the trigger is against an InnoDB table and you log changes to a MyISAM table from within the trigger, this silently breaks ACID compliance (i.e., it reduces the transaction block to autocommit behavior for those writes), and the logged rows cannot be rolled back.

When using triggers on InnoDB tables and logging changes, make sure that:

  • The table you log to is also InnoDB
  • You have autocommit turned off
  • You set up START TRANSACTION...COMMIT/ROLLBACK blocks thoroughly

In this way, the audit logs benefit from COMMIT/ROLLBACK just as the main tables do.
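As a sketch, with the same hypothetical table names and both tables on InnoDB:

    SET autocommit = 0;
    START TRANSACTION;
    -- the BEFORE UPDATE trigger writes the old row into the InnoDB audit table
    UPDATE mytable SET data = 'new value' WHERE id = 42;
    COMMIT;  -- or ROLLBACK; the audit rows commit or roll back with the main rows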

Concerning stored procedures, you would have to painstakingly call the stored procedure at every point of DML against the table being tracked. One could easily miss logging changes in the face of tens of thousands of lines of application code. Placing such code in a trigger eliminates the need to hunt down all those DML statements.

CAVEAT

Depending on how complex the trigger is, it can still be a bottleneck. If you want to reduce bottlenecks in audit logging, there is something you can do. However, it will require a little infrastructure change.

Using commodity hardware, create two more DB servers

This will serve to reduce write I/O on the main database (MD) due to audit logging. Here is how you can accomplish it:

Step 01) Turn on binary logging in the main database.
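For example, in my.cnf on MD (values illustrative; a unique server-id is also required for replication):

    [mysqld]
    server-id = 1
    log-bin   = mysql-bin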

Step 02) Using an inexpensive server, set up MySQL (same version as MD) with binary logging enabled. This will be DM. Set up replication from MD to DM.
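A sketch of the DM side, with hypothetical host and credentials. Note that log-slave-updates is needed so that events replicated from MD land in DM's own binary log, where AU can read them:

    # my.cnf on DM
    [mysqld]
    server-id         = 2
    log-bin           = mysql-bin
    log-slave-updates = 1

    -- then, on DM, point replication at MD (log file/position are illustrative)
    CHANGE MASTER TO
        MASTER_HOST     = 'md.example.com',
        MASTER_USER     = 'repl',
        MASTER_PASSWORD = '...',
        MASTER_LOG_FILE = 'mysql-bin.000001',
        MASTER_LOG_POS  = 4;
    START SLAVE;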

Step 03) Using a second inexpensive server, set up MySQL (same version as MD) with binary logging disabled, and configure --replicate-do-table for each audit table. This will be AU. Set up replication from DM to AU.
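In my.cnf on AU (database and table names hypothetical):

    [mysqld]
    server-id          = 3
    replicate-do-table = mydb.mytable_log
    # add one replicate-do-table line per audit table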

Step 04) mysqldump the table structures from MD and load them into DM and AU.
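For example (names hypothetical), --no-data dumps only the CREATE TABLE statements:

    mysqldump --no-data -u root -p mydb > mydb_schema.sql
    mysql -u root -p mydb < mydb_schema.sql    # load this on DM, then on AU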

Step 05) Convert all audit tables in MD to use the BLACKHOLE storage engine

Step 06) Convert all tables in DM and AU to use the BLACKHOLE storage engine

Step 07) Convert all audit tables in AU to use the MyISAM storage engine
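Steps 05 through 07 are plain ALTER TABLE statements; a sketch using the same hypothetical names:

    -- Step 05, on MD: writes to audit tables reach the binary log but store nothing locally
    ALTER TABLE mydb.mytable_log ENGINE = BLACKHOLE;

    -- Step 06, on DM and AU: every table becomes a pass-through
    ALTER TABLE mydb.mytable     ENGINE = BLACKHOLE;
    ALTER TABLE mydb.mytable_log ENGINE = BLACKHOLE;

    -- Step 07, on AU only: turn the audit tables back into real MyISAM tables
    ALTER TABLE mydb.mytable_log ENGINE = MyISAM;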

When done:

  • DM will replicate from MD and record the changes in its binary log only (all of its tables are BLACKHOLE, so nothing is stored on DM itself)
  • With the --replicate-do-table filter on all audit tables, AU will replicate just the audit tables from DM

What this does is store the audit info on a separate DB server while reducing the write I/O degradation that MD would otherwise incur from audit logging.