While I don't know of a tool that reports this specifically, Percona Server does have a setting (log_slow_verbosity=full) that enables extended statistics in the slow query log, along with microsecond granularity. The data does not explicitly include CPU utilization, but it does provide a number of additional statistics about the internals of MySQL and InnoDB that can help you determine whether the problem is CPU-bound or the result of poor query optimization.
This is an example of the additional information in the slow log:
# Time: 120114 6:34:33
# User@Host: user[user] @ [10.10.10.10]
# Thread_id: 28313080 Schema: mydb Last_errno: 0 Killed: 0
# Query_time: 0.588882 Lock_time: 0.000068 Rows_sent: 3 Rows_examined: 183839 Rows_affected: 0 Rows_read: 100
# Bytes_sent: 121 Tmp_tables: 0 Tmp_disk_tables: 0 Tmp_table_sizes: 0
# InnoDB_trx_id: 9903E4DB1
# QC_Hit: No Full_scan: No Full_join: No Tmp_table: No Tmp_table_on_disk: No
# Filesort: No Filesort_on_disk: No Merge_passes: 0
# InnoDB_IO_r_ops: 0 InnoDB_IO_r_bytes: 0 InnoDB_IO_r_wait: 0.000000
# InnoDB_rec_lock_wait: 0.000000 InnoDB_queue_wait: 0.000000
# InnoDB_pages_distinct: 11359
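For reference, enabling this in Percona Server's my.cnf might look like the following; the log path and time threshold here are assumptions for illustration, not values from the output above:

```ini
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log  # assumed path
long_query_time     = 0.5     # assumed threshold, in seconds
log_slow_verbosity  = full    # Percona Server: extended slow-log statistics
```

With log_slow_verbosity=full, each logged query gains the extra # InnoDB_* and # Tmp_* lines shown in the example.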
Also, pt-query-digest from Percona Toolkit understands this additional information; it will summarize the slow query log for you and produce per-query output that looks like this:
# Query 1: 0.09 QPS, 0.07x concurrency, ID 0x20112A1CCDBDBA4E at byte 4337645
# Scores: Apdex = 1.00 [1.0], V/M = 0.01
# Query_time sparkline: | ^ |
# Time range: 2012-01-14 06:34:33 to 08:51:57
# Attribute pct total min max avg 95% stddev median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count 63 775
# Exec time 59 539s 522ms 855ms 695ms 816ms 87ms 640ms
# Lock time 18 60ms 43us 498us 76us 103us 36us 66us
# Rows sent 1 3.15k 0 38 4.17 12.54 4.90 2.90
# Rows examine 22 135.93M 179.38k 179.90k 179.60k 174.27k 0 174.27k
# Rows affecte 0 0 0 0 0 0 0 0
# Rows read 0 3.43k 0 118 4.53 20.43 11.93 0.99
# Bytes sent 1 103.07k 82 576 136.18 246.02 63.85 118.34
# Merge passes 0 0 0 0 0 0 0 0
# Tmp tables 0 0 0 0 0 0 0 0
# Tmp disk tbl 0 0 0 0 0 0 0 0
# Tmp tbl size 0 0 0 0 0 0 0 0
# Query size 5 222.51k 294 294 294 294 0 294
# InnoDB:
# IO r bytes 0 0 0 0 0 0 0 0
# IO r ops 0 0 0 0 0 0 0 0
# IO r wait 0 0 0 0 0 0 0 0
# pages distin 63 8.44M 10.66k 12.72k 11.15k 11.34k 357.67 10.80k
# queue wait 0 0 0 0 0 0 0 0
# rec lock wai 0 0 0 0 0 0 0 0
# String:
# Databases mydb
# Hosts
# InnoDB trxID 9903E4DB1 (1/0%), 9903E4E09 (1/0%)... 773 more
# Last errno 0
# Users user
# Query_time distribution
# 1us
# 10us
# 100us
# 1ms
# 10ms
# 100ms ################################################################
# 1s
# 10s+
# Tables
# SHOW TABLE STATUS FROM `mydb` LIKE 'table'\G
# SHOW CREATE TABLE `mydb`.`table`\G
# EXPLAIN /*!50100 PARTITIONS*/
...followed by the query itself.
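A typical invocation is simple; the log path below is an assumption:

```shell
# Digest the whole slow log; writes the per-query report to stdout
pt-query-digest /var/log/mysql/mysql-slow.log > slow-digest.txt

# Or restrict the report to recent entries
pt-query-digest --since '1h' /var/log/mysql/mysql-slow.log
```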
I have blogged a bit about using pt-query-digest and other tools for finding performance bottlenecks (and promised more blogging, but haven't gotten around to it yet).
Two more tools that may help you are pt-stalk and pt-collect. They let you set a threshold on a running server variable, for example a jump in the number of concurrently running threads (Threads_running), and then collect a wealth of data about the running system, including CPU, I/O, MySQL statistics, the running processlist, and so on, giving you the ability to perform a thorough post-mortem.
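A pt-stalk invocation matching that description might look like this; the threshold value and destination directory are assumptions, though the options themselves are real pt-stalk options:

```shell
# Trigger when Threads_running stays above 25 for 5 consecutive checks,
# then gather system and MySQL samples into the --dest directory
pt-stalk --variable Threads_running --threshold 25 --cycles 5 \
         --dest /var/lib/pt-stalk
```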
The additional statistics are only available in Percona Server, but these tools will all work with MySQL GA.
Best Answer
This is MUCH more simply done with a FIFO. In the MySQL data directory, create a named pipe for the slow query log, and in my.cnf point the slow query log at that pipe. Then configure your syslog/syslog-ng to read the FIFO pipe and do with it as it will.
In my case, I pipe it across the network to a centralized server, but only the error logs and slow query logs. In situations where you also want to keep a local copy, just set it to output to both table and file as described above.
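Pieced together, the FIFO approach might look like the sketch below; every path, the remote host, and the syslog-ng names are assumptions reconstructed from the description, not the author's exact configuration:

```shell
# 1) Create a named pipe that MySQL will treat as the slow query log
FIFO=/var/lib/mysql/mysql-slow.fifo     # assumed location (datadir)
mkfifo "$FIFO"
chown mysql:mysql "$FIFO"

# 2) my.cnf: point the slow log at the pipe
#      [mysqld]
#      slow_query_log      = 1
#      slow_query_log_file = /var/lib/mysql/mysql-slow.fifo

# 3) syslog-ng: read the pipe and forward it (names and host assumed)
#      source s_mysql_slow { pipe("/var/lib/mysql/mysql-slow.fifo"); };
#      destination d_remote { tcp("logs.example.com" port(514)); };
#      log { source(s_mysql_slow); destination(d_remote); };
```

Note that something must be reading the FIFO whenever MySQL writes to it, so the syslog-ng side should be running before the slow log is enabled.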