I experimented with this and found something very interesting.
Running Fedora release 20 (Heisenbug), kernel 3.11.10-301.fc20.x86_64, 2 GB RAM (I know!), and a 2-processor Intel Centrino.
If you enable the Performance Schema (P_S), the times seem to drop dramatically. Why this is, I simply don't know: the P_S is for monitoring, and should not change anything. What I added to my my.cnf is at the end.
Could you test your 3 scenarios (CREATE, TRUNCATE & DELETE) running your server with the new my.cnf? If you obtain results similar to mine, then we can investigate further.
Strangely, as far as I can see, the client reports a time of approx. 0.3 seconds for all of my experiments, yet the profiling results look very different. I'm still puzzling through this one.
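For reference, here is how such profiles are normally collected (a sketch; the table definition is illustrative — your real cache_bootstrap DDL will differ). Note that the `show profile for CREATE TABLE ...` form in my transcripts below is shorthand; the documented syntax is SHOW PROFILE [FOR QUERY n]:

```sql
-- Enable per-session profiling (deprecated in favour of P_S, but still available)
SET profiling = 1;

-- Run the statement under test
CREATE TABLE cache_bootstrap (id INT PRIMARY KEY) ENGINE=InnoDB;

-- List the profiled statements, then show the stage breakdown for one of them
SHOW PROFILES;
SHOW PROFILE FOR QUERY 1;
```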
Below are my results just for the CREATE TABLE statement, before and after enabling the Performance Schema.
Before enabling P_S:
mysql> show profile for CREATE TABLE cache_bootstrap;
+----------------------+----------+
| Status | Duration |
+----------------------+----------+
| starting | 0.000290 |
| checking permissions | 0.000021 |
| Opening tables | 0.000141 |
| creating table | 0.365769 | <<====
| After create | 0.000037 |
| query end | 0.000023 |
| closing tables | 0.000018 |
| freeing items | 0.000114 |
| cleaning up | 0.000030 |
+----------------------+----------+
9 rows in set, 1 warning (0.00 sec)
After enabling P_S:
mysql> show profile for CREATE TABLE cache_bootstrap;
+----------------------+----------+
| Status | Duration |
+----------------------+----------+
| starting | 0.000188 |
| checking permissions | 0.000131 |
| query end | 0.000015 |
| closing tables | 0.000020 |
| freeing items | 0.000068 |
| cleaning up | 0.000052 |
+----------------------+----------+
6 rows in set, 1 warning (0.00 sec)
MariaDB
Before enabling P_S:
Query OK, 0 rows affected (0.33 sec)
MariaDB [test]> show profile for CREATE TABLE cache_bootstrap;
+----------------------+----------+
| Status | Duration |
+----------------------+----------+
| starting | 0.000321 |
| checking permissions | 0.000024 |
| Opening tables | 0.000073 |
| After opening tables | 0.000017 |
| System lock | 0.000011 |
| Table lock | 0.000019 |
| After opening tables | 0.000056 |
| creating table | 0.328769 | <<======
| After create | 0.000043 |
| query end | 0.000026 |
| closing tables | 0.000022 |
| freeing items | 0.000043 |
| updating status | 0.000050 |
| cleaning up | 0.000039 |
+----------------------+----------+
14 rows in set (0.00 sec)
After enabling P_S on MariaDB:
MariaDB [test]> show profile for CREATE TABLE cache_bootstrap;
+----------------------+----------+
| Status | Duration |
+----------------------+----------+
| starting | 0.000311 |
| checking permissions | 0.000024 |
| Opening tables | 0.000044 |
| After opening tables | 0.000038 |
| System lock | 0.000014 |
| Table lock | 0.000012 |
| After opening tables | 0.000347 |
| After create | 0.000015 |
| query end | 0.000014 |
| closing tables | 0.000021 |
| freeing items | 0.000014 |
| updating status | 0.000026 |
| cleaning up | 0.000034 |
+----------------------+----------+
13 rows in set (0.00 sec)
To be added to the [mysqld] section of my.cnf:
[mysqld]
performance_schema
performance_schema_instrument = '%=on'
performance_schema_consumer_events_stages_current = ON
performance_schema_consumer_events_stages_history = ON
performance_schema_consumer_events_stages_history_long = ON
performance_schema_consumer_events_statements_history = ON
performance_schema_consumer_events_statements_history_long = ON
performance_schema_consumer_events_waits_current = ON
performance_schema_consumer_events_waits_history = ON
performance_schema_consumer_events_waits_history_long = ON
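After restarting the server with these settings, you can verify that the instruments and consumers actually took effect (a sketch; these are the standard P_S setup tables):

```sql
-- Confirm the Performance Schema is on
SHOW VARIABLES LIKE 'performance_schema';

-- Consumers should match the my.cnf entries above
SELECT NAME, ENABLED FROM performance_schema.setup_consumers;

-- With '%=on', essentially all instruments should be enabled
SELECT COUNT(*) FROM performance_schema.setup_instruments WHERE ENABLED = 'YES';
```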
Observations:
Version: 5.5.41-0ubuntu0.14.04.1
64 GB of RAM
Uptime = 3d 03:40:44
You are not running on Windows.
Running 64-bit version
You appear to be running entirely (or mostly) InnoDB.
The More Important Issues
Increase innodb_buffer_pool_size to at least 1G (but not more than 45G).
Sounds like most of your 64GB is unused? Or do you expect a huge growth in data?
Turn off the Query Cache; it does not seem to be useful:
query_cache_size = 0
query_cache_type = OFF
Set long_query_time = 2 and turn on the slowlog.
After a day, run pt-query-digest to find the worst queries.
Then work on optimizing them. (Composite indexes, reformulating queries, etc. Ask for help if needed.)
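Put into config form, the changes above amount to something like the following sketch for the [mysqld] section (the 8G figure and the log path are illustrations only — size the buffer pool to your data plus expected growth, not blindly to RAM):

```ini
[mysqld]
# Buffer pool: at least 1G for ~873M of data; leave headroom for the OS
innodb_buffer_pool_size = 8G

# Query cache off
query_cache_size = 0
query_cache_type = OFF

# Slow query log with a 2-second cutoff, for pt-query-digest later
slow_query_log      = ON
slow_query_log_file = /var/log/mysql/slow.log   # path is an assumption
long_query_time     = 2
```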
Supporting details and other suggestions
( innodb_buffer_pool_size / _ram ) = 128M / 65536M = 0.20% -- % of RAM used for InnoDB buffer_pool
( (key_buffer_size / 0.20 + innodb_buffer_pool_size / 0.70) / _ram ) = (16M / 0.20 + 128M / 0.70) / 65536M = 0.40% -- Most of available ram should be made available for caching.
-- http://mysql.rjweb.org/doc.php/memory
[!!] InnoDB buffer pool / data size: 128.0M/873.0M
So innodb_buffer_pool_size should be at least 1G.
[!!] InnoDB Write buffer efficiency: 0.00% (0 hits/ 1 total)
0 out of 1 -- bogus to mark it "!!"
( Innodb_pages_written / Innodb_buffer_pool_write_requests ) = 2,068,247 / 5,340,758 = 38.7% -- Write requests that had to hit disk
-- Check innodb_buffer_pool_size
( innodb_file_per_table ) = OFF -- Put each file in its own tablespace
-- (Mildly recommended, especially for large tables)
( Qcache_hits / Qcache_inserts ) = 1,765,289 / 2,509,742 = 0.703 -- Hit to insert ratio -- high is good
-- Consider turning off the query cache.
( (query_cache_size - Qcache_free_memory) / Qcache_queries_in_cache / query_alloc_block_size ) = (16M - 14960424) / 1477 / 8192 = 0.15 -- query_alloc_block_size vs formula
-- Adjust query_alloc_block_size
[!!] Query cache prunes per day: 15154
( Created_tmp_disk_tables / (Created_tmp_disk_tables + Created_tmp_tables) ) = 3,090 / (3090 + 4930) = 38.5% -- Percent of temp tables that spilled to disk
-- maybe increase tmp_table_size and max_heap_table_size; avoid blobs, etc.
( Select_scan ) = 459,980 / 272,444 = 1.7 /sec -- full table scans
-- Add indexes / optimize queries (unless they are tiny tables)
( Select_scan / Com_select ) = 459,980 / 2,518,409 = 18.3% -- % of selects doing full table scan. (May be fooled by Stored Routines.)
-- Add indexes / optimize queries
( binlog_format ) = STATEMENT -- STATEMENT/ROW/MIXED. ROW is preferred; it may become the default.
( log_slow_queries ) = OFF -- Whether to log slow queries. (Before 5.1.29, 5.6.1)
( slow_query_log ) = OFF -- Whether to log slow queries. (5.1.12)
( long_query_time ) = 10.000000 = 10 -- Cutoff (Seconds) for defining a "slow" query.
-- Suggest 2
( Connections ) = 2,305,560 / 272,444 = 8.5 /sec -- Connections
-- Increase wait_timeout; use pooling?
( Max_used_connections ) = 66 -- How many simultaneous connections you had (highwater mark).
[!!] Highest connection usage: 100% (152/151)
Those disagree; perhaps you restarted?
( open_files_limit ) = 1,024 -- ulimit -n
-- To allow more files, change ulimit or /etc/security/limits.conf or in sysctl.conf (kern.maxfiles & kern.maxfilesperproc) or something else (OS dependent)
-- If you get more tables, raising this will be important.
( Opened_tables ) = 43/hour
[!!] Table cache hit rate: 6% (400 open / 5K opened)
These disagree; perhaps there is no problem here.
[!!] Key buffer used: 18.3% (3M used / 16M cache)
[!!] Read Key buffer hit rate: 91.7% (743 cached / 62 reads)
Not a problem since there is virtually no MyISAM usage and key_buffer_size is only a tiny percentage of RAM.
19 issues flagged, out of 133 computed Variables/Status/Expressions
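If you want to re-derive the 873.0M data-size figure used for the buffer-pool recommendation, a common sketch is to sum InnoDB data and index sizes from information_schema:

```sql
-- Approximate total InnoDB data + index size, in MB
SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024) AS innodb_mb
FROM information_schema.tables
WHERE engine = 'InnoDB';
```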
My take on Tuner's recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance -- waste of time!
Enable the slow query log to troubleshoot bad queries -- yes
Reduce or eliminate persistent connections to reduce connection usage -- ok
Adjust your join queries to always utilize indexes -- yes (use slowlog to find them)
When making adjustments, make tmp_table_size/max_heap_table_size equal -- ok
Reduce your SELECT DISTINCT queries which have no LIMIT clause -- yawn
Increase table_open_cache gradually to avoid file descriptor limits -- ok, but may not be critical
Read this before increasing table_open_cache over 64: http://bit.ly/1mi7c4C
Beware that the open_files_limit (1024) variable should be greater than table_open_cache (400) -- noted; it is
max_connections (> 151) -- No; figure out why there are so many connections
wait_timeout ( 16M) -- probably not important
join_buffer_size (> 128.0K, or always use indexes with joins) -- might help, might not
tmp_table_size (> 16M) -- first look for other ways to improve queries
max_heap_table_size (> 16M)
table_open_cache (> 400)
innodb_buffer_pool_size (>= 872M) if possible. -- yes
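Before adjusting any of those, it is worth checking the current values and the connection high-water mark yourself (a sketch; SHOW ... WHERE filters on the result-set columns):

```sql
SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('max_connections', 'wait_timeout', 'join_buffer_size',
   'tmp_table_size', 'max_heap_table_size', 'table_open_cache');

-- High-water mark for simultaneous connections; compare against max_connections
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
```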
Best Answer
Anyway, I fixed the problem via
where
192.168.0.15
is the host machine's IP. Then I connected via
It seems to be working, though I'm still concerned that some features may have problems.