Your first query is asking for 20,000 rows. What is being victimized here? Your InnoDB Buffer Pool and Log Files.
Because of
SELECT * FROM history WHERE log LIKE '%a%' ORDER BY log ASC LIMIT 0, 20000;
all the data pages for the history table are hogging InnoDB resources.
Since you are using Amazon RDS, you cannot resize the InnoDB log files. They are always 128M across all RDS MySQL instance sizes. Your buffer pool may or may not be full (you have 5882511360 set for it, which is 5610M, or about 5.48G). Naturally, a bigger instance means more IOPS.
You can run this to see how full the buffer pool is:
SELECT FORMAT(A.num * 100.0 / B.num,2) BufferPoolFullPct FROM
(SELECT variable_value num FROM information_schema.global_status
WHERE variable_name = 'Innodb_buffer_pool_pages_data') A,
(SELECT variable_value num FROM information_schema.global_status
WHERE variable_name = 'Innodb_buffer_pool_pages_total') B;
Another aspect I see is the log field (TEXT). You are asking for an ORDER BY on this big column.
You need to change the first query to throttle the SELECT. In other words, fetch smaller chunks of history.
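As a sketch of what "smaller chunks" could look like (this assumes history has an indexed AUTO_INCREMENT id column, which is a guess, and that ordering by id instead of the TEXT column log is acceptable):

```sql
-- Hypothetical keyset pagination: fetch 1000 rows per batch,
-- remembering the highest id seen in the previous batch.
SELECT id, log
FROM history
WHERE log LIKE '%a%'
  AND id > 0            -- replace 0 with the last id from the previous batch
ORDER BY id ASC
LIMIT 1000;
```

Each batch touches far fewer pages at a time, and sorting on the indexed id avoids the filesort that ORDER BY on a TEXT column forces.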
If you cannot change the data or the query, there is nothing you can configure except to shift to m1.xlarge (which comes with a buffer pool size of 11922309120 = 11370M, or about 11.1G), although the log file size would still be stuck at 128M.
Infrastructure aside, it is possible for SELECTs to block INSERTs, UPDATEs, and DELETEs in InnoDB.
I wrote about this before
I hate the checking permissions issue. You may have to disable key checks before the DROP DATABASE:
SET unique_checks = 0;
SET foreign_key_checks = 0;
SET GLOBAL innodb_stats_on_metadata = 0;
DROP DATABASE db_madeintouch;
SET GLOBAL innodb_stats_on_metadata = 1;
SET foreign_key_checks = 1;
SET unique_checks = 1;
UPDATE 2013-04-15 18:04 EDT
I just noticed you have innodb_file_per_table OFF. What gives?
- You currently have all the InnoDB data and the corresponding indexes sitting in a single file.
- Any CREATE TABLE statement must make data dictionary updates and look for space (small but annoying in this instance).
- Internal fragmentation of ibdata1.
- Dropping a table means scanning ibdata1 for the table's data and index pages so they can be locked and freed. With data and index pages possibly fragmented, this costs spindles, seek time, and latency.
- See Pictorial Representation of ibdata1 to see everything that goes into ibdata1
Recommendation : Remove all Data and Index Pages from ibdata1
This will give ibdata1 a breather to handle just data dictionary and MVCC management. In addition, ibdata1 will stay rather lean and mean and can be read more quickly.
You will need to perform the InnoDB Infrastructure Cleanup. I wrote out all the steps back on October 29, 2010 in StackOverflow.
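In broad strokes, the cleanup amounts to something like the following (a sketch, not the exact steps from that post; paths and credentials are placeholders, and this requires filesystem access to the server, so it does not apply on RDS):

```shell
# 1. Dump all user databases (placeholder output path).
mysqldump --all-databases --routines --triggers > /root/alldata.sql

# 2. Drop all user databases, then shut MySQL down.

# 3. In my.cnf, enable file-per-table before restarting:
#    [mysqld]
#    innodb_file_per_table = 1

# 4. Delete the old shared tablespace and log files (placeholder datadir);
#    they are recreated small on the next startup.
rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1

# 5. Restart MySQL and reload the dump; every table now gets its own .ibd file.
mysql < /root/alldata.sql
```

After this, ibdata1 holds only the data dictionary and MVCC bookkeeping, which is the "lean and mean" state described above.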
UPDATE 2013-04-22 08:10 EDT
Three suggestions
SUGGESTION 1 : I just noticed something else. You are using an ancient version of MySQL (5.0.45). You should think about upgrading to MySQL 5.6.11, as it performs significantly faster than MySQL 5.5 and way faster than MySQL 5.0.
SUGGESTION 2 : You should also go ahead and implement the InnoDB Infrastructure Cleanup.
SUGGESTION 3 : You should also check the disk itself. If the data is sitting on a RAID10 set, one of the disks may have an issue. Check the disk controller's battery as well, because a failing battery can disable write caching and degrade read performance.
Best Answer
mysqldump output (.sql files), which is what that module uses, is probably the least efficient way to import a database (the only less efficient way I can think of is importing and committing each row one at a time). If you want to speed up the import process, you should change the method.
There are several things that you can do in the MySQL configuration that will speed up the import, but you will be missing the most important one: parallel import. If you can import the same file in 4-8 threads, you will not get linear performance growth, but it will scale without problems. Split the file into different tables/partitions and you will get an almost linear speedup. With specialized tools like mydumper you can get a 5-10x speedup. With raw backup tools you could get an almost unbounded speedup.
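A sketch of the mydumper approach (host, credentials, database name, and directory are placeholders):

```shell
# Dump with 4 parallel threads into a directory of per-table files.
mydumper -h dbhost -u admin -p secret -B mydb -t 4 -o /backups/mydb

# Reload the same directory with 4 parallel threads.
myloader -h dbhost -u admin -p secret -B mydb -t 4 -d /backups/mydb
```

Because mydumper writes each table to its own file, myloader can load several tables concurrently, which is where the parallel speedup comes from.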
Regarding configuration, which is probably the only thing you are interested in, you can set:

- innodb_flush_log_at_trx_commit = 2. This is the most important suggestion. It makes writes inconsistent with the disk, but provides a great improvement if you are not using a hardware write cache. It depends on how slow your disk is, but it can give you a 10x speedup. Not safe for normal operation; many people use it for unimportant data.
- innodb_doublewrite = 0. Disables corruption checks. Very unsafe, and likely to corrupt your database in case of a crash. Can give you around a 2x speedup in your case. Never run MySQL in this mode except for the import.
- checksums = none (the syntax may have changed in MariaDB 10). Disables page checksums. It won't be very useful, as your CPU is not the bottleneck.
- performance_schema = off. Not sure of the default in MariaDB 10; it can probably be disabled for the import without problems, but it won't give you a large speedup.
- innodb_buffer_pool_size and innodb_log_file_size: they are already large on your setup (way larger than your data), so no need to change them in your case.

There are other things, but they require changing the file, like creating the secondary indexes afterwards.