Your first query is asking for 20,000 rows. What is being victimized here?
Your InnoDB Buffer Pool and Log Files.
Because of
SELECT * FROM history WHERE log LIKE '%a%' ORDER BY log ASC LIMIT 0, 20000;
all the data pages for the history table are hogging InnoDB resources.
Since you are using Amazon RDS, you cannot resize the InnoDB Log Files. They are always 128M under all RDS MySQL models. Your buffer pool may or may not be full (you have 5882511360 set for it, which is 5610M, or 5.4785G). Naturally, a bigger model means more IOPS.
You can run this to see how full the buffer pool is:
SELECT FORMAT(A.num * 100.0 / B.num,2) BufferPoolFullPct FROM
(SELECT variable_value num FROM information_schema.global_status
WHERE variable_name = 'Innodb_buffer_pool_pages_data') A,
(SELECT variable_value num FROM information_schema.global_status
WHERE variable_name = 'Innodb_buffer_pool_pages_total') B;
Another aspect I see is the log field (TEXT). You are asking for an ORDER BY on this big column.
You need to change the first query to throttle the SELECT. In other words, fetch smaller chunks of history.
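For example, the chunked fetch could be scripted the same way as the OPTIMIZE script below: generate one small SELECT per chunk and run them one at a time. The chunk size of 1000 and the /tmp path here are my own assumptions, not anything from your setup:

```shell
# Emit the 20,000-row query as twenty 1,000-row chunks (chunk size is an
# assumption -- tune it for your workload). Feed the generated file to the
# mysql client statement by statement instead of asking for all rows at once.
CHUNK=1000
TOTAL=20000
OFFSET=0
while [ "${OFFSET}" -lt "${TOTAL}" ]
do
    echo "SELECT * FROM history WHERE log LIKE '%a%' ORDER BY log ASC LIMIT ${OFFSET}, ${CHUNK};"
    OFFSET=$((OFFSET + CHUNK))
done > /tmp/ChunkedHistorySelects.sql
```

Each chunk touches far fewer pages at a time, so the buffer pool and log files are not monopolized by one giant result set.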
If you cannot change the data or the query, there is nothing you can configure except to shift to m1.xlarge (which comes with 11922309120 as the Buffer Pool Size = 11370M = 11.1035G), although the log file size would still be stuck at 128M.
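The buffer pool arithmetic above is easy to double-check with a throwaway snippet (1048576 is bytes per megabyte; the byte counts are the ones quoted in this answer):

```shell
# Byte values quoted above for the two RDS models.
LARGE_POOL=5882511360     # m1.large buffer pool
XLARGE_POOL=11922309120   # m1.xlarge buffer pool

# Convert bytes to megabytes (1M = 1048576 bytes).
echo "m1.large  buffer pool: $((LARGE_POOL / 1048576))M"   # 5610M
echo "m1.xlarge buffer pool: $((XLARGE_POOL / 1048576))M"  # 11370M
```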
Infrastructure aside, it is possible for SELECTs to block INSERTs, UPDATEs, and DELETEs in InnoDB.
I wrote about this before
Write a script that will create a script of OPTIMIZE TABLE statements:
# Build one OPTIMIZE TABLE statement per table in the signs database,
# smallest tables first, and save them to a .sql file for review.
MYSQL_CONN="-hAmazonRDSDNSName -u... -p..."
SQL="SELECT CONCAT('OPTIMIZE TABLE ',table_schema,'.',table_name,';')"
SQL="${SQL} OptimizeTableSQL FROM information_schema.tables WHERE table_schema = 'signs'"
SQL="${SQL} ORDER BY (data_length+index_length)"
mysql ${MYSQL_CONN} -AN -e"${SQL}" > /root/OptimizeAllTables.sql
less /root/OptimizeAllTables.sql
When you are satisfied that the script will do the job, then do this:
mysql ${MYSQL_CONN} < /root/OptimizeAllTables.sql
Give it a Try !!!
CAVEAT
Since the default for innodb_file_per_table is ON, each InnoDB table being optimized will actually have its .ibd file shrunk. Any MyISAM tables (which I hope you don't have) will have their .MYD and .MYI files shrunk as well.
UPDATE 2012-09-18 18:19 EDT
I just tried the following:
- Created Instance1 with 5G Disk
- Modified Instance1 to 10G Disk
- Modified Instance1 to 5G Disk (error message: Requested storage size (5) cannot be less then the current storage size (10))
- Created Instance2 with 5G
- Loaded Instance1 with 2MB data
- Created Snapshot of Instance1 (same size : 10G)
If I restore the snapshot, it creates a third Instance. A DB Snapshot must be the whole Server Instance, not a dump of any kind.
You will have to do the following messy procedure:
- mysqldump the data from the 20GB Instance
- Create a New 10G Instance
- load the mysqldump into the new 10GB instance
- verify all the data you want is present
- destroy the old 20GB Instance
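A hedged sketch of steps 1 and 3 (the DNS names, credentials, and dump options are placeholders/assumptions -- adjust for your instances; the database name signs comes from earlier in this answer). The commands are printed for review rather than executed:

```shell
# Placeholders -- substitute your real endpoints and credentials.
MYSQL_OLD_CONN="-hOld20GBInstanceDNSName -u... -p..."
MYSQL_NEW_CONN="-hNew10GBInstanceDNSName -u... -p..."

# Dump with a consistent InnoDB snapshot, including routines and triggers.
DUMP_CMD="mysqldump ${MYSQL_OLD_CONN} --single-transaction --routines --triggers signs"
LOAD_CMD="mysql ${MYSQL_NEW_CONN} signs"

# Print the commands so you can review them before running anything.
echo "${DUMP_CMD} > /root/signs.sql"
echo "${LOAD_CMD} < /root/signs.sql"
```

Remember to CREATE DATABASE signs on the new instance before loading the dump into it.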
At step 4, please run this Big Storage Engine Space Query on both the 20GB and 10GB instances
SELECT IFNULL(B.engine,'Total') "Storage Engine",
CONCAT(LPAD(REPLACE(FORMAT(B.DSize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Data Size", CONCAT(LPAD(REPLACE(
FORMAT(B.ISize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Index Size", CONCAT(LPAD(REPLACE(
FORMAT(B.TSize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Table Size" FROM
(SELECT engine,SUM(data_length) DSize,SUM(index_length) ISize,
SUM(data_length+index_length) TSize FROM
information_schema.tables WHERE table_schema NOT IN
('mysql','information_schema','performance_schema') AND
engine IS NOT NULL GROUP BY engine WITH ROLLUP) B,
(SELECT 3 pw) A ORDER BY TSize;
You will see how much wasted space disappeared when migrating the mysqldump to the new 10GB Instance. This will definitely defragment all InnoDB tables. You may need to run the script I gave you above (/root/OptimizeAllTables.sql) to run OPTIMIZE TABLE on all the tables in the signs database once a week to keep the .ibd files as small as possible. You can also run the Big Storage Engine Space Query to monitor when it starts approaching 10GB.
SHOW GLOBAL STATUS;
Then compute these (whether using MyISAM or InnoDB):
- Qcache_lowmem_prunes / Uptime -- How often the QC is being pruned. More than 15/sec says there is a lot of overhead in having the QC on.
- Qcache_not_cached / Uptime -- Cache attempts failed (per second). >40 is probably bad.
- Qcache_not_cached / (Qcache_hits + Com_select + Qcache_not_cached) -- Percent of SELECTs that are not cached in the QC. >30% means the QC is not very useful.
- Qcache_hits / Qcache_inserts -- Hit-to-insert ratio. >10 is desirable.
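To make the thresholds concrete, here is a worked example with made-up sample counters (every number below is invented for illustration -- plug in your own SHOW GLOBAL STATUS values):

```shell
# Invented sample counters from a hypothetical server with one day of uptime.
UPTIME=86400
QCACHE_LOWMEM_PRUNES=2000000
QCACHE_NOT_CACHED=1000000
QCACHE_HITS=5000000
QCACHE_INSERTS=400000
COM_SELECT=3000000

# The four ratios, as integers (shell arithmetic truncates the remainder).
echo "Prunes/sec      : $((QCACHE_LOWMEM_PRUNES / UPTIME))"   # 23 -> over 15, the QC is costing you
echo "Not cached/sec  : $((QCACHE_NOT_CACHED / UPTIME))"      # 11 -> under 40, acceptable
echo "Not cached pct  : $((100 * QCACHE_NOT_CACHED / (QCACHE_HITS + COM_SELECT + QCACHE_NOT_CACHED)))%"  # 11% -> under 30%
echo "Hits per insert : $((QCACHE_HITS / QCACHE_INSERTS))"    # 12 -> over 10, desirable
```

With these sample numbers, only the prune rate crosses its threshold, which would suggest the Query Cache is being churned faster than it helps.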