I hate the "checking permissions" issue.
You may have to disable key checks before running the DROP DATABASE:
SET unique_checks = 0;
SET foreign_key_checks = 0;
SET GLOBAL innodb_stats_on_metadata = 0;
DROP DATABASE db_madeintouch;
SET GLOBAL innodb_stats_on_metadata = 1;
SET foreign_key_checks = 1;
SET unique_checks = 1;
UPDATE 2013-04-15 18:04 EDT
I just noticed you have innodb_file_per_table OFF. What gives?
- You currently have all the InnoDB data and the corresponding index sitting in a single file.
- Any CREATE TABLE statement must make data dictionary updates and look for space (small but annoying in this instance)
- Internal Fragmentation of ibdata1
- Dropping a table means scanning the table's data and index pages inside ibdata1 so they can be locked and marked free. With those pages possibly fragmented, this costs spindle movement, seek time, and rotational latency.
- See Pictorial Representation of ibdata1 to see everything that goes into ibdata1
Recommendation: Remove all Data and Index Pages from ibdata1
This will give ibdata1 a breather to handle just data dictionary and MVCC management. In addition, ibdata1 will stay rather lean and mean and can be read more quickly.
You will need to perform the InnoDB Infrastructure Cleanup. I wrote out all the steps back on October 29, 2010 in StackOverflow.
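The core of that cleanup is a my.cnf change plus a dump-and-reload. A minimal sketch of the settings involved (the exact steps are in the referenced answer; the flush method line is optional and the comment summarizes, not replaces, the full procedure):

```ini
[mysqld]
innodb_file_per_table            # new tables get their own .ibd file
innodb_flush_method = O_DIRECT   # optional: bypass the OS cache for InnoDB I/O
# After mysqldump-ing all databases: shut down mysqld, delete ibdata1 and the
# ib_logfile* files, restart (they are recreated small), then reload the dump.
```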
UPDATE 2013-04-22 08:10 EDT
Three suggestions
SUGGESTION 1 : I just noticed something else. You are using an ancient version of MySQL (5.0.45). You should think about upgrading to MySQL 5.6.11, as it performs significantly faster than MySQL 5.5 and way faster than MySQL 5.0.
SUGGESTION 2 : You should also go ahead and implement the InnoDB Infrastructure Cleanup.
SUGGESTION 3 : You should also check the disk itself. If the data is sitting on a RAID10 set, one of the disks may have an issue. Check the disk controller's battery as well, because a failing battery can disable the controller's caching and hurt disk performance.
Observations:
Version: 5.5.41-0ubuntu0.14.04.1
64 GB of RAM
Uptime = 3d 03:40:44
You are not running on Windows.
Running 64-bit version
You appear to be running entirely (or mostly) InnoDB.
The More Important Issues
Increase innodb_buffer_pool_size to at least 1G (not more than 45G).
Sounds like most of your 64GB is unused? Or do you expect a huge growth in data?
Turn off the Query Cache; it does not seem to be useful:
query_cache_size = 0
query_cache_type = OFF
Set long_query_time = 2 and turn on the slowlog.
After a day, run pt-query-digest to find the worst queries.
Then work on optimizing them. (Composite indexes, reformulating queries, etc. Ask for help if needed.)
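Concretely, that slowlog setup is a few lines in my.cnf (the log path here is just an example):

```ini
[mysqld]
slow_query_log      = ON
slow_query_log_file = /var/log/mysql/slow.log   # example path
long_query_time     = 2
```

Then, after a day of traffic, run `pt-query-digest /var/log/mysql/slow.log` (pt-query-digest is part of Percona Toolkit) and start with the top few queries in its report.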
Supporting details and other suggestions
( innodb_buffer_pool_size / _ram ) = 128M / 65536M = 0.20% -- % of RAM used for InnoDB buffer_pool
( (key_buffer_size / 0.20 + innodb_buffer_pool_size / 0.70) / _ram ) = (16M / 0.20 + 128M / 0.70) / 65536M = 0.40% -- Most of available ram should be made available for caching.
-- http://mysql.rjweb.org/doc.php/memory
[!!] InnoDB buffer pool / data size: 128.0M/873.0M
So innodb_buffer_pool_size should be at least 1G.
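On a 5.5 server, innodb_buffer_pool_size is not dynamic (online resizing only arrived in 5.7.5), so the change goes in my.cnf and takes effect after a restart; 1G is simply a value comfortably above the current 873M of InnoDB data, not a tuned recommendation:

```ini
[mysqld]
innodb_buffer_pool_size = 1G   # >= current InnoDB data size (873M); restart required on 5.5
```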
[!!] InnoDB Write buffer efficiency: 0.00% (0 hits/ 1 total)
0 out of 1 -- bogus to mark it "!!"
( Innodb_pages_written / Innodb_buffer_pool_write_requests ) = 2,068,247 / 5,340,758 = 38.7% -- Write requests that had to hit disk
-- Check innodb_buffer_pool_size
( innodb_file_per_table ) = OFF -- Put each file in its own tablespace
-- (Mildly recommended, especially for large tables)
( Qcache_hits / Qcache_inserts ) = 1,765,289 / 2,509,742 = 0.703 -- Hit to insert ratio -- high is good
-- Consider turning off the query cache.
( (query_cache_size - Qcache_free_memory) / Qcache_queries_in_cache / query_alloc_block_size ) = (16M - 14960424) / 1477 / 8192 = 0.15 -- query_alloc_block_size vs formula
-- Adjust query_alloc_block_size
[!!] Query cache prunes per day: 15154
( Created_tmp_disk_tables / (Created_tmp_disk_tables + Created_tmp_tables) ) = 3,090 / (3090 + 4930) = 38.5% -- Percent of temp tables that spilled to disk
-- maybe increase tmp_table_size and max_heap_table_size; avoid blobs, etc.
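Both of those variables are dynamic, so you can experiment without a restart; 32M below is an illustrative value, not a recommendation for this server:

```sql
-- Keep the two equal: the smaller of them is the effective in-memory limit
-- for implicit temporary tables.
SET GLOBAL tmp_table_size      = 32 * 1024 * 1024;
SET GLOBAL max_heap_table_size = 32 * 1024 * 1024;
```

Remember to mirror any value you settle on in my.cnf so it survives a restart.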
( Select_scan ) = 459,980 / 272,444 = 1.7 /sec -- full table scans
-- Add indexes / optimize queries (unless they are tiny tables)
( Select_scan / Com_select ) = 459,980 / 2,518,409 = 18.3% -- % of selects doing full table scan. (May be fooled by Stored Routines.)
-- Add indexes / optimize queries
( binlog_format ) = STATEMENT -- STATEMENT/ROW/MIXED. ROW is preferred; it may become the default.
( log_slow_queries ) = OFF -- Whether to log slow queries. (Before 5.1.29, 5.6.1)
( slow_query_log ) = OFF -- Whether to log slow queries. (5.1.12)
( long_query_time ) = 10.000000 = 10 -- Cutoff (Seconds) for defining a "slow" query.
-- Suggest 2
( Connections ) = 2,305,560 / 272,444 = 8.5 /sec -- Connections
-- Increase wait_timeout; use pooling?
( Max_used_connections ) = 66 -- How many simultaneous connections you had (highwater mark).
[!!] Highest connection usage: 100% (152/151)
Those disagree; perhaps you restarted?
( open_files_limit ) = 1,024 -- ulimit -n
-- To allow more files, change ulimit or /etc/security/limits.conf or in sysctl.conf (kern.maxfiles & kern.maxfilesperproc) or something else (OS dependent)
-- If you get more tables, raising this will be important.
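On Ubuntu, raising this limit means changing both the OS-level limit and, if desired, the matching server variable; the user name and the value 8192 below are assumptions for illustration:

```ini
# /etc/security/limits.conf -- assuming mysqld runs as the "mysql" user
mysql  soft  nofile  8192
mysql  hard  nofile  8192

# my.cnf
[mysqld]
open_files_limit = 8192
```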
( Opened_tables ) = 43/hour
[!!] Table cache hit rate: 6% (400 open / 5K opened)
These disagree; perhaps there is no problem here.
[!!] Key buffer used: 18.3% (3M used / 16M cache)
[!!] Read Key buffer hit rate: 91.7% (743 cached / 62 reads)
Not a problem since there is virtually no MyISAM usage and key_buffer_size is only a tiny percentage of RAM.
19 issues flagged, out of 133 computed Variables/Status/Expressions
My take on Tuner's recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance -- waste of time!
Enable the slow query log to troubleshoot bad queries -- yes
Reduce or eliminate persistent connections to reduce connection usage -- ok
Adjust your join queries to always utilize indexes -- yes (use slowlog to find them)
When making adjustments, make tmp_table_size/max_heap_table_size equal -- ok
Reduce your SELECT DISTINCT queries which have no LIMIT clause -- yawn
Increase table_open_cache gradually to avoid file descriptor limits -- ok, but may not be critical
Read this before increasing table_open_cache over 64: http://bit.ly/1mi7c4C
Beware that the open_files_limit (1024) variable should be greater than table_open_cache (400) -- noted; it is
max_connections (> 151) -- No; figure out why there are so many connections
wait_timeout (16M) -- probably not important
join_buffer_size (> 128.0K, or always use indexes with joins) -- might help, might not
tmp_table_size (> 16M) -- first look for other ways to improve queries
max_heap_table_size (> 16M)
table_open_cache (> 400)
innodb_buffer_pool_size (>= 872M) if possible. -- yes
Best Answer
There may be various causes for this difference, but the first thing that comes to mind is that the second configuration is tuned to use less memory (e.g. innodb_buffer_pool_size, tmp_table_size). It may therefore need more disk I/O to process the same amount of data. The CPU has to wait for disk operations to complete before it can proceed to the next chunk of data, so it becomes underutilized when there are lots of them.
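A quick way to check this is to pull the memory-related settings from both servers side by side and diff the output; the variable list below is just the obvious suspects, not exhaustive:

```sql
-- Run on each server and compare the results.
SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('innodb_buffer_pool_size',
                        'tmp_table_size',
                        'max_heap_table_size',
                        'sort_buffer_size',
                        'join_buffer_size');
```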
You may try to confirm this conjecture by setting up disk performance monitoring, roughly as described in https://blogs.technet.microsoft.com/askcore/2012/03/16/windows-performance-monitor-disk-counters-explained/ .
And I would say 100% utilization of one CPU core is a normal thing for bulk operations in MySQL. The engine does not have much parallelization, so it generally cannot use more than one core per session. Lower utilization during bulk operations (e.g. a mass insert) means there is a non-CPU bottleneck that needs addressing (because it makes the same operation take longer). You may try to figure out whether that is the case by comparing how long the test script takes to complete in the two environments.