Is there a way to fetch the number of reads/writes made to the entire DB (MySQL) in a given period, e.g. the number of reads/writes in the last hour?
MySQL – How to Monitor Reads/Writes to the Database
MySQL, mysql-5.5
Related Solutions
Method 1
If you are using Percona Server or MariaDB (>= 5.2), you can simply set the userstat/userstat_running variable to enable a bunch of new INFORMATION_SCHEMA tables, including one called TABLE_STATISTICS, that provide exactly this information.
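On MariaDB, for instance, the plugin can be switched on at runtime (the variable is named userstat there; older Percona Server releases call it userstat_running, so check your version's documentation):

mysql> SET GLOBAL userstat = 1;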
For example:
mysql> SELECT TABLE_NAME, ROWS_READ, ROWS_CHANGED, ROWS_CHANGED_X_INDEXES FROM TABLE_STATISTICS ORDER BY ROWS_CHANGED DESC LIMIT 5;
+-------------------+------------+--------------+------------------------+
| TABLE_NAME        | ROWS_READ  | ROWS_CHANGED | ROWS_CHANGED_X_INDEXES |
+-------------------+------------+--------------+------------------------+
| user              |   21122527 |      5989231 |               23956924 |
| audit             |       1208 |      5020929 |               20083716 |
| sometemp          |   13995426 |      3182150 |                9546450 |
| creditcards       |    3566482 |      2998976 |               11995904 |
| order             | 2147483647 |      2662606 |               53252120 |
+-------------------+------------+--------------+------------------------+
ROWS_CHANGED corresponds to the most written-to tables, and ROWS_READ to the most read-from. You should also look at INDEX_STATISTICS to find your most and least used indexes, as sketched below.
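For example, a query along these lines (columns as documented for the MariaDB INDEX_STATISTICS table) surfaces the hottest indexes:

mysql> SELECT TABLE_SCHEMA, TABLE_NAME, INDEX_NAME, ROWS_READ FROM INFORMATION_SCHEMA.INDEX_STATISTICS ORDER BY ROWS_READ DESC LIMIT 5;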
See also the MariaDB user statistics documentation.
Method 2
If you are not using Percona Server, you could use pt-query-digest to capture a sample of your queries and then filter for only the INSERTs/UPDATEs/DELETEs. That would look something like this:
mysql> SELECT @@GLOBAL.slow_query_log_file;
+-------------------------------+
| @@GLOBAL.slow_query_log_file  |
+-------------------------------+
| /var/logs/mysql/slowquery.log |
+-------------------------------+
1 row in set (0.00 sec)
mysql> SET GLOBAL slow_query_log_file='/tmp/allqueries.log';
mysql> SELECT @@GLOBAL.long_query_time;
+--------------------------+
| @@GLOBAL.long_query_time |
+--------------------------+
|                 0.250000 |
+--------------------------+
1 row in set (0.00 sec)
mysql> SET GLOBAL long_query_time = 0;
mysql> FLUSH LOGS;
mysql> SELECT SLEEP(600); SET GLOBAL long_query_time = 0.25; SET GLOBAL slow_query_log_file='/var/logs/mysql/slowquery.log'; FLUSH LOGS;
Now you have a file, /tmp/allqueries.log, that contains every query executed on your server for ~10 minutes.
Next, analyze it with pt-query-digest to get the most frequently written to tables:
pt-query-digest /tmp/allqueries.log --group-by=distill --filter '$event->{arg} =~ m/^(update|delete|insert)/i' --limit 5 > /tmp/writes.txt
If you examine /tmp/writes.txt, you will see a section near the top that looks like this:
# Profile
# Rank Query ID Response time Calls R/Call Apdx V/M   Item
# ==== ======== ============= ===== ====== ==== ===== ====================
#    1 0x        0.0558 26.8%   282 0.0002 1.00  0.00 INSERT UPDATE user
#    2 0x        0.0448 21.5%   246 0.0002 1.00  0.00 UPDATE audit
#    3 0x        0.0228 10.9%    11 0.0021 1.00  0.00 UPDATE sometemp
#    4 0x        0.0108  5.2%    16 0.0007 1.00  0.00 UPDATE creditcards
#    5 0x        0.0103  4.9%    43 0.0002 1.00  0.00 UPDATE order
Roughly, these are your most written-to tables for the duration of the sample you chose. To get the most read-from tables (roughly), you can change the --filter parameter to --filter '$event->{arg} =~ m/^select/i' and you will see similar output.
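For example, the read-side run would look like this (writing to a hypothetical /tmp/reads.txt):

pt-query-digest /tmp/allqueries.log --group-by=distill --filter '$event->{arg} =~ m/^select/i' --limit 5 > /tmp/reads.txt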
If you are only interested in writes, you can pass a binary log into pt-query-digest and get similar results:
mysqlbinlog mysql-bin.000511 | pt-query-digest --type=binlog --group-by=distill > /tmp/writes.txt
You can also get the same data with tcpdump and pt-query-digest --type=tcpdump.
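A sketch of that capture, following the invocation suggested in the pt-query-digest documentation (the interface, packet count, and file names are placeholders to adjust):

tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > /tmp/mysql.tcp.txt
pt-query-digest --type tcpdump /tmp/mysql.tcp.txt > /tmp/tcpwrites.txt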
That said, assuming that you are using InnoDB tables, I highly doubt that you will see much performance benefit from doing this at all. Because of the way data is buffered in the InnoDB log and then written to disk, I wouldn't expect much or any performance gain from moving individual tables around like this. You might see some benefit from moving the InnoDB log files themselves to a separate, faster disk to separate the log reads/writes from the tablespace reads/writes, but even that is questionable. Investing in fast, high-quality RAID arrays with a battery-backed cache (or, better yet, SSDs) will be a better use of your resources.
All Data is InnoDB
This is what will give you an exact point-in-time snapshot of the data:
mysqldump -uuser -ppass --single-transaction --routines --triggers --all-databases > backup_db.sql
--single-transaction
produces a checkpoint that allows the dump to capture all data as of the checkpoint while still receiving incoming changes. Those incoming changes do not become part of the dump, which ensures the same point in time for all tables.
--routines
dumps all stored procedures and stored functions.
--triggers
dumps all triggers for each table that has them.
All Data is MyISAM or Mix of InnoDB/MyISAM
You will have to impose a global read lock, perform the mysqldump, and then release the global lock:
# Hold a global read lock in a background session for up to 24 hours
mysql -uuser -ppass -Ae"FLUSH TABLES WITH READ LOCK; SELECT SLEEP(86400)" &
sleep 5
# Find the process ID of the lock-holding session and prepare a KILL for it
mysql -uuser -ppass -ANe"SHOW PROCESSLIST" | grep "SELECT SLEEP(86400)" > /tmp/proclist.txt
SLEEP_ID=`cat /tmp/proclist.txt | awk '{print $1}'`
echo "KILL ${SLEEP_ID};" > /tmp/kill_sleep.sql
# Dump everything while the global read lock is held
mysqldump -uuser -ppass --single-transaction --routines --triggers --all-databases > backup_db.sql
# Kill the sleeping session to release the global read lock
mysql -uuser -ppass -A < /tmp/kill_sleep.sql
Give it a Try !!!
UPDATE 2012-06-22 08:12 EDT
Since you have <50MB of total data, I have another option. Instead of launching a SLEEP command into the background to hold the global read lock for 86400 seconds (that's 24 hours) just to get its process ID and kill it from outside, let's try setting a 5-second timeout inside mysql rather than in the OS:
SLEEP_TIMEOUT=5
SQLSTMT="FLUSH TABLES WITH READ LOCK; SELECT SLEEP(${SLEEP_TIMEOUT})"
# Hold the global read lock in the background for just ${SLEEP_TIMEOUT} seconds
mysql -uuser -ppass -Ae"${SQLSTMT}" &
# A <50MB dump completes well within the lock window
mysqldump -uuser -ppass --single-transaction --routines --triggers --all-databases > backup_db.sql
This is a cleaner and simpler approach for very small databases.
Best Answer
In the table GLOBAL_STATUS in information_schema, use the counter Com_select for reads, and sum the counters Com_delete, Com_insert, Com_update, and Com_replace for writes. These values are cumulative counters, so you should subtract the values taken at the two ends of an interval:
If t1 = t0 + 60 seconds, then Com_select at t1 minus Com_select at t0 gives the reads in that minute.
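A minimal sketch of that delta, done entirely in SQL (the @reads_t0 user variable and the 60-second window are illustrative; on MySQL 5.5 these counters live in information_schema.GLOBAL_STATUS, while 5.7+ moved them to performance_schema.global_status):

mysql> SELECT VARIABLE_VALUE INTO @reads_t0 FROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Com_select';
mysql> SELECT SLEEP(60);
mysql> SELECT VARIABLE_VALUE - @reads_t0 AS reads_last_minute FROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Com_select';

Sum Com_delete, Com_insert, Com_update, and Com_replace the same way for writes. Alternatively, mysqladmin -uuser -ppass extended-status -r -i 60 prints the per-interval deltas of every counter for you.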