Observations:
Version: 5.5.41-0ubuntu0.14.04.1
64 GB of RAM
Uptime = 3d 03:40:44
You are not running on Windows.
Running 64-bit version
You appear to be running entirely (or mostly) InnoDB.
The More Important Issues
Increase innodb_buffer_pool_size to at least 1G (not more than 45G).
Sounds like most of your 64GB is unused? Or do you expect a huge growth in data?
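In my.cnf that would look something like the fragment below. (A sketch; on 5.5 the buffer pool cannot be resized dynamically, so a restart is needed, and the exact value is yours to choose between 1G and ~45G.)

```ini
[mysqld]
# At least covers the current ~873M of InnoDB data; raise it as data
# grows, but leave headroom for the OS and other processes.
innodb_buffer_pool_size = 1G
```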
Turn off the Query Cache; it does not seem to be useful:
query_cache_size = 0
query_cache_type = OFF
Set long_query_time = 2
and turn on the slowlog.
After a day, run pt-query-digest
to find the worst queries.
Then work on optimizing them. (Composite indexes, reformulating queries, etc. Ask for help if needed.)
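The slow-log steps above can be sketched as follows. (Assumes shell access to the server and that pt-query-digest from Percona Toolkit is installed; the slow-log path is illustrative, so check @@slow_query_log_file on your system.)

```shell
# Turn on the slow log and lower the threshold without a restart.
# Also put slow_query_log=ON and long_query_time=2 in my.cnf so the
# settings survive the next restart.
mysql -e "SET GLOBAL slow_query_log = ON;"
mysql -e "SET GLOBAL long_query_time = 2;"

# After a day of traffic, summarize the worst queries.
pt-query-digest /var/lib/mysql/mysql-slow.log > digest.txt
```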
Supporting details and other suggestions
( innodb_buffer_pool_size / _ram ) = 128M / 65536M = 0.20% -- % of RAM used for InnoDB buffer_pool
( (key_buffer_size / 0.20 + innodb_buffer_pool_size / 0.70) / _ram ) = (16M / 0.20 + 128M / 0.70) / 65536M = 0.40% -- Most of available ram should be made available for caching.
-- http://mysql.rjweb.org/doc.php/memory
[!!] InnoDB buffer pool / data size: 128.0M/873.0M
So innodb_buffer_pool_size should be at least 1G.
[!!] InnoDB Write buffer efficiency: 0.00% (0 hits/ 1 total)
0 out of 1 -- bogus to mark it "!!"
( Innodb_pages_written / Innodb_buffer_pool_write_requests ) = 2,068,247 / 5340758 = 38.7% -- Write requests that had to hit disk
-- Check innodb_buffer_pool_size
( innodb_file_per_table ) = OFF -- Put each file in its own tablespace
-- (Mildly recommended, especially for large tables)
( Qcache_hits / Qcache_inserts ) = 1,765,289 / 2,509,742 = 0.703 -- Hit to insert ratio -- high is good
-- Consider turning off the query cache.
( (query_cache_size - Qcache_free_memory) / Qcache_queries_in_cache / query_alloc_block_size ) = (16M - 14960424) / 1477 / 8192 = 0.15 -- query_alloc_block_size vs formula
-- Adjust query_alloc_block_size
[!!] Query cache prunes per day: 15154
( Created_tmp_disk_tables / (Created_tmp_disk_tables + Created_tmp_tables) ) = 3,090 / (3090 + 4930) = 38.5% -- Percent of temp tables that spilled to disk
-- maybe increase tmp_table_size and max_heap_table_size; avoid blobs, etc.
( Select_scan ) = 459,980 / 272444 = 1.7 /sec -- full table scans
-- Add indexes / optimize queries (unless they are tiny tables)
( Select_scan / Com_select ) = 459,980 / 2518409 = 18.3% -- % of selects doing full table scan. (May be fooled by Stored Routines.)
-- Add indexes / optimize queries
( binlog_format ) = STATEMENT -- STATEMENT/ROW/MIXED. ROW is preferred; it may become the default.
( log_slow_queries ) = OFF -- Whether to log slow queries. (Before 5.1.29, 5.6.1)
( slow_query_log ) = OFF -- Whether to log slow queries. (5.1.12)
( long_query_time ) = 10.000000 = 10 -- Cutoff (Seconds) for defining a "slow" query.
-- Suggest 2
( Connections ) = 2,305,560 / 272,444 = 8.5 /sec -- Connections
-- Increase wait_timeout; use pooling?
( Max_used_connections ) = 66 -- How many simultaneous connections you had (highwater mark).
[!!] Highest connection usage: 100% (152/151)
Those disagree; perhaps you restarted?
( open_files_limit ) = 1,024 -- ulimit -n
-- To allow more files, change ulimit or /etc/security/limits.conf or in sysctl.conf (kern.maxfiles & kern.maxfilesperproc) or something else (OS dependent)
-- If you get more tables, raising this will be important.
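On Ubuntu 14.04, one way to raise the limit is via /etc/security/limits.conf. (A sketch; the values are illustrative, and the right mechanism depends on how mysqld is started — with mysqld_safe you can instead set open_files_limit in my.cnf.)

```shell
# Raise the mysql user's file-descriptor limit (illustrative values),
# then restart so mysqld picks it up.
printf 'mysql soft nofile 8192\nmysql hard nofile 8192\n' \
    | sudo tee -a /etc/security/limits.conf
sudo service mysql restart
```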
( Opened_tables ) = 43/hour
[!!] Table cache hit rate: 6% (400 open / 5K opened)
These disagree; perhaps there is no problem here.
[!!] Key buffer used: 18.3% (3M used / 16M cache)
[!!] Read Key buffer hit rate: 91.7% (743 cached / 62 reads)
Not a problem since there is virtually no MyISAM usage and key_buffer_size is only a tiny percentage of RAM.
19 issues flagged, out of 133 computed Variables/Status/Expressions
My take on Tuner's recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance -- waste of time!
Enable the slow query log to troubleshoot bad queries -- yes
Reduce or eliminate persistent connections to reduce connection usage -- ok
Adjust your join queries to always utilize indexes -- yes (use slowlog to find them)
When making adjustments, make tmp_table_size/max_heap_table_size equal -- ok
Reduce your SELECT DISTINCT queries which have no LIMIT clause -- yawn
Increase table_open_cache gradually to avoid file descriptor limits -- ok, but may not be critical
Read this before increasing table_open_cache over 64: http://bit.ly/1mi7c4C
Beware that open_files_limit (1024) should be greater than table_open_cache (400) -- noted; it is
max_connections (> 151) -- No; figure out why there are so many connections
wait_timeout ( 16M) -- probably not important
join_buffer_size (> 128.0K, or always use indexes with joins) -- might help, might not
tmp_table_size (> 16M) -- first look for other ways to improve queries
max_heap_table_size (> 16M)
table_open_cache (> 400)
innodb_buffer_pool_size (>= 872M) if possible. -- yes
Best Answer
I can see 3 different reasons why MariaDB might reset this variable. In order of most to least likely explanation:
1. The table_open_cache variable is being set to 4895 in one of the option files read at startup.
2. Something inside MariaDB itself is adjusting table_open_cache at startup. (Unlikely!)
3. Another piece of software (such as cPanel) might be changing the variable dynamically, after MariaDB has started.
To see which options the MariaDB server will get from the option files, run:
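For example (assuming the server binary is named mysqld, as on a 5.5-era install):

```shell
# Print the options mysqld would pick up from the option files.
mysqld --print-defaults
```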
To find all the places where the MariaDB server is looking for .cnf files, run:
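For example:

```shell
# List the default option files and the order they are read in.
mysqld --help --verbose 2>/dev/null | grep -A1 'Default options'
```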
You can then grep each of these files to see if you find any instances of table_open_cache or table-open-cache.
For reference, you should be able to see the current value of the variable with:
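For instance (assuming you can connect with a user that has the usual privileges):

```shell
# Show the value the running server is actually using.
mysql -e "SHOW GLOBAL VARIABLES LIKE 'table_open_cache';"
```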
And you may also want to consult this page in the MariaDB Knowledge Base: Optimizing table_open_cache.