I hate the "checking permissions" issue.
You may have to disable key checks before the DROP DATABASE
SET unique_checks = 0;
SET foreign_key_checks = 0;
SET GLOBAL innodb_stats_on_metadata = 0;
DROP DATABASE db_madeintouch;
SET GLOBAL innodb_stats_on_metadata = 1;
SET foreign_key_checks = 1;
SET unique_checks = 1;
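Before dropping, it may also be worth checking whether tables in other schemas still reference db_madeintouch through foreign keys, since those references are part of what the server has to verify. A quick look via information_schema (a sketch; adjust the schema name to your case):

SELECT table_schema, table_name, constraint_name
FROM information_schema.KEY_COLUMN_USAGE
WHERE referenced_table_schema = 'db_madeintouch'
  AND table_schema <> 'db_madeintouch';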
UPDATE 2013-04-15 18:04 EDT
I just noticed you have innodb_file_per_table OFF. What gives?
- You currently have all the InnoDB data and the corresponding index sitting in a single file.
- Any CREATE TABLE statement must make data dictionary updates and look for space (small but annoying in this instance)
- Internal Fragmentation of ibdata1
- Dropping a table means scanning the table's data and index pages so they can be locked and freed. With those pages possibly fragmented across ibdata1, this costs spindle movement, seek time, and latency.
- See Pictorial Representation of ibdata1 to see everything that goes into ibdata1
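If you want to confirm the setting and gauge how much of that single file is live data, a quick check (the SUM is approximate; with innodb_file_per_table OFF, virtually all InnoDB data and index pages live inside ibdata1):

SHOW VARIABLES LIKE 'innodb_file_per_table';
SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024) AS innodb_mb
FROM information_schema.tables
WHERE engine = 'InnoDB';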
Recommendation: Remove all Data and Index Pages from ibdata1
This will give ibdata1 a breather to handle just data dictionary and MVCC management. In addition, ibdata1 will stay rather lean and mean and can be read more quickly.
You will need to perform the InnoDB Infrastructure Cleanup. I wrote out all the steps back on October 29, 2010 on StackOverflow.
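In outline, the cleanup is a dump-and-reload; the full procedure is in the linked answer. A rough sketch (the dump path, datadir, and service name are assumptions for illustration):

mysqldump --all-databases --routines --triggers > /root/alldata.sql
mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"
service mysql stop
# add to my.cnf under [mysqld]:
#   innodb_file_per_table = 1
rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1
service mysql start
mysql < /root/alldata.sql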
UPDATE 2013-04-22 08:10 EDT
Three suggestions
SUGGESTION 1: I just noticed something else. You are using an ancient version of MySQL (5.0.45). You should think about upgrading to MySQL 5.6.11, as it performs significantly faster than MySQL 5.5 and way faster than MySQL 5.0.
SUGGESTION 2: You should also go ahead and implement the InnoDB Infrastructure Cleanup.
SUGGESTION 3: You should also check the disk itself. If the data is sitting on a RAID10 set, one of the disks may have an issue. Check the disk controller's battery as well, because a failing battery can force the controller out of write-back caching and degrade disk performance.
...even surpassing its theoretical maximum possible allocation.
[OK] Maximum possible memory usage: 7.3G (46% of installed RAM)
There is not actually a way to calculate maximum possible memory usage for MySQL, because there is no cap on the memory it can request from the system.
The calculation done by mysqltuner.pl is only an estimate, based on a formula that doesn't take into account all possible variables, because if all possible variables were taken into account, the answer would always be "infinite." It's unfortunate that it's labeled this way.
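For what it's worth, the estimate is roughly of this form -- a sketch of the idea, not mysqltuner's exact code:

max_memory ~= global buffers + per-thread buffers * max_connections
            = (innodb_buffer_pool_size + key_buffer_size
               + innodb_log_buffer_size + query_cache_size)
              + max_connections * (read_buffer_size + read_rnd_buffer_size
               + sort_buffer_size + join_buffer_size + thread_stack)

The per-thread term assumes every connection allocates every buffer at its full size, which real workloads rarely do -- and conversely, nothing stops a single query from allocating far more.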
Here is my theory on what's contributing to your excessive memory usage:
thread_cache_size = 128
Given that your max_connections is set to 200, the value of 128 for thread_cache_size seems far too high. Here's what makes me think this might be contributing to your problem:
When a thread is no longer needed, the memory allocated to it is released and returned to the system unless the thread goes back into the thread cache. In that case, the memory remains allocated.
http://dev.mysql.com/doc/refman/5.6/en/memory-use.html
If your workload causes even an occasional client thread to require a large amount of memory, those threads may be holding onto that memory, then going back to the pool and sitting around, continuing to hold on to memory they don't technically "need" any more, on the premise that holding on to the memory is less costly than releasing it if you're likely to need it again.
I think it's worth a try to do the following, after first making a note of how much memory MySQL is using at the moment.
Note how many threads are currently cached:
mysql> show status like 'Threads_cached';
+----------------+-------+
| Variable_name  | Value |
+----------------+-------+
| Threads_cached | 9     |
+----------------+-------+
1 row in set (0.00 sec)
Next, disable the thread cache.
mysql> SET GLOBAL thread_cache_size = 0;
This disables the thread cache, but the cached threads will stay in the pool until they're used one more time. Disconnect from the server, then reconnect and repeat.
mysql> show status like 'Threads_cached';
Continue disconnecting, reconnecting, and checking until the counter reaches 0.
Then, see how much memory MySQL is holding.
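On Linux, the resident set size of the mysqld process is a reasonable proxy; this one-liner assumes a fairly standard install:

ps -o rss= -p $(pidof mysqld)   # resident memory, in KB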
You may see a decrease, possibly significant, and then again you may not. I tested this on one of my systems, which had 9 threads in the cache. Once those threads had all been cleared out of the cache, the total memory held by MySQL did decrease... not by much, but it does illustrate that threads in the cache do release at least some memory when they are destroyed.
If you see a significant decrease, you may have found your problem. If you don't, then there's one more thing that needs to happen, and how quickly it can happen depends on your environment.
If the theory holds that the other threads -- the ones currently servicing active client connections -- have significant memory allocated to them, either from recent work in their current client session or from memory-hungry work done by an earlier connection before they languished in the pool, then you won't see the full reduction in memory consumption until those threads are allowed to die and be destroyed. Presumably your application doesn't hold connections forever, but how long it takes to know for sure will depend on whether you can cycle your application (dropping and reconnecting the client connections) or have to wait for them to be dropped and reconnected over time on their own.
But... it seems like a worthwhile test. You should not see a substantial performance penalty from setting thread_cache_size to 0. Fortunately, thread_cache_size is a dynamic variable, so you can freely change it while the server is running.
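If the experiment turns up nothing, you can restore the original value on the fly, too:

mysql> SET GLOBAL thread_cache_size = 128;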
(adding a second answer -- to address the VARIABLES and STATUS)
Observations:
The More Important Issues
table_open_cache = 200K, yet 156 tables are opened per second, and all of them are misses/overflows from the cache. You must redesign your schema to have a civilized number of tables. (The table-count query in the sketch after this list shows how many you actually have.)
Are there really 403 ROLLBACKs per second? If so, think about how to avoid some of them; that's probably more than 10% of the transactions.
innodb_lock_wait_timeout = 300 -- Assuming you really need transactions to hang around waiting for 5 minutes, this could be consuming RAM.
tmp_table_size = 8M, together with 51 tmp tables created per second, may be leading to quite a few GB of transient MEMORY tables. Look through the slowlog to find which queries need tmp tables and see if they can be rewritten.
Decrease long_query_time to 2 (now 10), turn on the slowlog, wait a day, run pt-query-digest. Then study the worst few queries -- improving them will speed up the entire system and probably decrease RAM usage. (The settings are shown in the sketch after this list.)
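For the slowlog, both settings can be changed at runtime (mirror them in my.cnf so they survive a restart), and a quick count of user tables puts the table_open_cache issue in perspective -- a sketch:

SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 2;
SELECT COUNT(*) AS user_tables
FROM information_schema.tables
WHERE table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');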
Details and other observations
( Innodb_buffer_pool_reads ) = 329,533,959 / 1823180 = 180 /sec -- InnoDB buffer_pool I/O read rate -- check innodb_buffer_pool_size
( Innodb_buffer_pool_pages_flushed ) = 589,768,119 / 1823180 = 323 /sec -- Writes (flushes) -- check innodb_buffer_pool_size
( innodb_buffer_pool_size / _ram ) = 16,106,127,360 / 53248M = 28.8% -- % of RAM used for InnoDB buffer_pool
( Opened_tables ) = 285,544,112 / 1823180 = 156 /sec -- Frequency of opening Tables -- increase table_open_cache
( table_open_cache ) = 200,000 -- Number of table descriptors to cache -- Several hundred is usually good.
( Table_open_cache_overflows ) = 285,343,673 / 1823180 = 156 /sec -- May need to increase table_open_cache
( Table_open_cache_misses ) = 285,544,112 / 1823180 = 156 /sec -- May need to increase table_open_cache
( (Innodb_buffer_pool_reads + Innodb_buffer_pool_pages_flushed) ) = ((329533959 + 589768119) ) / 1823180 = 504 /sec -- InnoDB I/O -- Increase innodb_buffer_pool_size?
( innodb_log_buffer_size ) = 128M -- Suggest 2MB-64MB, and at least as big as biggest blob set in transactions. -- Adjust innodb_log_buffer_size.
( Innodb_log_writes ) = 167,769,519 / 1823180 = 92 /sec
( Innodb_os_log_written / (Uptime / 3600) / innodb_log_files_in_group / innodb_log_file_size ) = 332,945,398,784 / (1823180 / 3600) / 2 / 100M = 3.13 -- Ratio of hourly redo-log volume to log file size (see the minutes-between-rotations line below)
( Uptime / 60 * innodb_log_file_size / Innodb_os_log_written ) = 1,823,180 / 60 * 100M / 332945398784 = 9.57 -- Minutes between InnoDB log rotations. Beginning with 5.6.8, innodb_log_file_size can be changed with just a clean restart (the server rebuilds the log files automatically); be sure to also change my.cnf. -- (The recommendation of 60 minutes between rotations is somewhat arbitrary.) Adjust innodb_log_file_size.
( Com_rollback ) = 735,486,482 / 1823180 = 403 /sec -- ROLLBACKs in InnoDB. -- An excessive frequency of rollbacks may indicate inefficient app logic.
( innodb_lock_wait_timeout ) = 300 -- Two battling InnoDB transactions, but not a deadlock -- one will wait this long (seconds) in hopes of getting the desired locks. -- Fix the cause of timeouts rather than increasing this value.
( Innodb_dblwr_writes ) = 14,574,293 / 1823180 = 8 /sec -- "Doublewrite buffer" writes to disk. "Doublewrites" are a reliability feature. Some newer versions / configurations don't need them. -- (Symptom of other issues)
( local_infile ) = ON -- local_infile = ON is a potential security issue
( Questions ) = 7,836,091,851 / 1823180 = 4298 /sec -- Queries (outside SP) -- "qps" -- >2000 may be stressing server
( Queries ) = 8,262,090,685 / 1823180 = 4531 /sec -- Queries (including inside SP) -- >3000 may be stressing server
( Created_tmp_tables ) = 92,537,439 / 1823180 = 51 /sec -- Frequency of creating "temp" tables as part of complex SELECTs.
( Created_tmp_disk_tables ) = 6,158,440 / 1823180 = 3.4 /sec -- Frequency of creating disk "temp" tables as part of complex SELECTs -- increase tmp_table_size and max_heap_table_size. Check the rules for temp tables being able to use MEMORY instead of MyISAM. It may be possible to make a minor schema or query change to avoid MyISAM. Better indexes and reformulation of queries can also reduce the need for tmp tables.
( Handler_read_rnd_next ) = 266,467,437,770 / 1823180 = 146155 /sec -- High if lots of table scans -- possibly inadequate keys
( Com_rollback / Com_commit ) = 735,486,482 / 741864016 = 99.1% -- Rollback : Commit ratio -- Rollbacks are costly; change app logic
( Com_show_variables ) = 6,666,028 / 1823180 = 3.7 /sec -- SHOW VARIABLES ... -- Why are you requesting the VARIABLES so often?
( Select_scan ) = 315,076,005 / 1823180 = 172 /sec -- full table scans -- Add indexes / optimize queries (unless they are tiny tables)
( Select_scan / Com_select ) = 315,076,005 / 3888296954 = 8.1% -- % of selects doing full table scan. (May be fooled by Stored Routines.) -- Add indexes / optimize queries
( Com_insert + Com_delete + Com_delete_multi + Com_replace + Com_update + Com_update_multi ) = (613230844 + 1238003 + 0 + 30479625 + 71282659 + 0) / 1823180 = 392 /sec -- writes/sec -- 50 writes/sec + log flushes will probably max out I/O write capacity of normal drives
( ( Com_stmt_prepare - Com_stmt_close ) / ( Com_stmt_prepare + Com_stmt_close ) ) = ( 23 - 22 ) / ( 23 + 22 ) = 2.2% -- Are you closing your prepared statements? -- Add Closes.
( slow_query_log ) = OFF -- Whether to log slow queries. (5.1.12)
( long_query_time ) = 10.000000 = 10 -- Cutoff (Seconds) for defining a "slow" query. -- Suggest 2
( Com_change_db ) = 31,947,189 / 1823180 = 18 /sec -- Probably comes from USE statements. -- Consider connecting with DB, using db.tbl syntax, eliminating spurious USE statements, etc.
( max_connect_errors ) = 10,000 -- A small protection against hackers. -- Perhaps no more than 200.
( Connections ) = 21,121,749 / 1823180 = 12 /sec -- Connections -- Increase wait_timeout; use pooling?
( Threads_running - 1 ) = 22 - 1 = 21 -- Active threads (concurrency when data collected) -- Optimize queries and/or schema
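Pulling the my.cnf-level items above into one place -- a sketch, with sizes that are judgment calls based on the arithmetic above (note that innodb_log_file_size only takes effect after a clean restart; before 5.6.8 you must also remove the old ib_logfile* first):

[mysqld]
innodb_log_buffer_size = 64M     # down from 128M
innodb_log_file_size   = 512M    # ~50 min between rotations at the current write rate (vs ~9.6 now)
slow_query_log         = ON
long_query_time        = 2
max_connect_errors     = 200     # down from 10,000
local_infile           = OFF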
You have the Query Cache half-off. You should set both query_cache_type = OFF and query_cache_size = 0. There is (according to a rumor) a 'bug' in the QC code that leaves some code on unless you turn off both of those settings.
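At runtime, that is (put the same two lines in my.cnf so they survive a restart):

mysql> SET GLOBAL query_cache_type = OFF;
mysql> SET GLOBAL query_cache_size = 0;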
39 issues flagged, out of 125 computed Variables/Status/Expressions. Settings not mentioned are mostly OK.