For the table mydb.mytable, run this query:
SELECT update_time
FROM information_schema.tables
WHERE table_schema='mydb'
AND table_name='mytable';
If you want to know what tables have changed in the last 5 minutes, run this:
SELECT table_schema,table_name,update_time
FROM information_schema.tables
WHERE update_time > (NOW() - INTERVAL 5 MINUTE);
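One caveat: depending on the MySQL version and tablespace layout, update_time may be NULL for InnoDB tables, so it can be worth excluding NULLs explicitly (a minor variation on the query above):

```sql
SELECT table_schema, table_name, update_time
FROM information_schema.tables
WHERE update_time IS NOT NULL
AND update_time > (NOW() - INTERVAL 5 MINUTE);
```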
Give it a Try !!!
UPDATE 2011-12-21 20:04 EDT
My employer (a DB/Web hosting company) has a client with 112,000 InnoDB tables. It is very difficult to read INFORMATION_SCHEMA.TABLES during peak hours, so I have an alternate suggestion:
If you have innodb_file_per_table enabled and all the InnoDB tables are stored in .ibd
files, there is a way to ascertain the time of the last update (up to the minute).
For the table mydb.mytable, do the following in the operating system:
$ cd /var/lib/mysql/mydb
$ ls -l mytable.ibd | awk '{print $6,$7,$8}'
This timestamp is from the OS. You can't go wrong on this one.
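If parsing ls columns feels fragile (column positions shift with locales and ls versions), stat(1) reads the same modification time directly. A minimal sketch, assuming GNU coreutils; the .ibd path in the example is illustrative:

```shell
#!/bin/sh
# Sketch: read a table's last write time straight from its .ibd file,
# using stat(1) instead of parsing ls output. Assumes GNU coreutils.
ibd_age_seconds() {
    # seconds since the file was last modified
    mtime=$(stat -c %Y "$1")
    now=$(date +%s)
    echo $(( now - mtime ))
}
# Example (path is illustrative):
#   ibd_age_seconds /var/lib/mysql/mydb/mytable.ibd
```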
UPDATE 2011-12-21 22:04 EDT
[mysqld]
innodb_max_dirty_pages_pct=0
Add this to my.cnf and restart mysqld; dirty pages will then be flushed from the buffer pool to disk almost immediately, keeping the .ibd file timestamps close to the actual last write.
To avoid restarting, just run
mysql> SET GLOBAL innodb_max_dirty_pages_pct=0;
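To confirm the change took effect and watch the dirty pages drain, the standard variable and status counters can be polled (ordinary SHOW syntax, nothing version-specific):

```sql
SHOW GLOBAL VARIABLES LIKE 'innodb_max_dirty_pages_pct';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
```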
UPDATE 2013-06-27 07:15 EDT
When it comes to retrieving the date and time for a file, ls has the --time-style
option:
$ cd /var/lib/mysql/mydb
$ ls -l --time-style="+%s" mytable.ibd | awk '{print $6}'
You can compare the timestamp of the file against UNIX_TIMESTAMP(NOW()).
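The same idea extends to a whole schema: since the timestamps come from the filesystem, find(1) can reproduce the "changed in the last 5 minutes" query without touching the server at all. A sketch, assuming GNU findutils and innodb_file_per_table; the datadir path in the example is illustrative:

```shell
#!/bin/sh
# Sketch: the filesystem analogue of the 5-minute INFORMATION_SCHEMA query.
# Lists every .ibd under a directory that was written to in the last 5 minutes.
recently_updated_ibd() {
    find "$1" -name '*.ibd' -mmin -5
}
# Example (path is illustrative):
#   recently_updated_ibd /var/lib/mysql/mydb
```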
- Here are some tools suggested by a MySQL Performance Blog post:
http://www.mysqlperformanceblog.com/tools/
pt-query-digest can be useful for reviewing queries as well.
It can review a slow log, saving the results to the test.query_review table on a MySQL server running on host1. See --review for more on reviewing queries:
pt-query-digest --review h=host1,D=test,t=query_review /path/to/slow.log
You can also use SHOW PROFILES to review query performance:
http://dev.mysql.com/doc/refman/5.0/en/show-profiles.html
SET PROFILING = 1;
SET profiling_history_size = 0;  -- setting the size to 0 discards any previously kept profiles
SET profiling_history_size = 15; -- then keep up to 15 profiles again
<Your own query>;
SHOW PROFILES;
SELECT state, ROUND(SUM(duration),5) AS `duration (summed) in sec`
FROM information_schema.profiling
WHERE query_id = 11
GROUP BY state
ORDER BY `duration (summed) in sec` DESC;
SET PROFILING = 0;
When EXPLAIN is used with the EXTENDED keyword, the output includes a filtered column not otherwise displayed. This column indicates the estimated percentage of table rows that will be filtered by the table condition. In addition, the statement produces extra information that can be viewed by issuing a SHOW WARNINGS statement following the EXPLAIN statement. The Message value in SHOW WARNINGS output displays how the optimizer qualifies table and column names in the SELECT statement, what the SELECT looks like after the application of rewriting and optimization rules, and possibly other notes about the optimization process.
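As a sketch of that workflow, using the example table from the top of this answer:

```sql
EXPLAIN EXTENDED
SELECT update_time
FROM information_schema.tables
WHERE table_schema = 'mydb'
  AND table_name = 'mytable';

SHOW WARNINGS;
```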
Best Answer
You can use pt-query-digest --processlist
While there are four different ways to use it, IMHO the best is to poll the processlist and collect metrics every hour. This method does not require a slow query log.
I have suggested this before (See mysql 5.7 general_log table has user_host but the file does not)
I have promoted this idea many times over the years.
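A sketch of that hourly polling invocation (the host, run time, and iteration count are placeholders; check the pt-query-digest manual for the full DSN syntax):

```shell
# Poll SHOW PROCESSLIST on host1, producing a report every hour for a day.
# All values are illustrative.
pt-query-digest --processlist h=host1 \
    --run-time 1h \
    --iterations 24
```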