This would depend upon the queries you issue (suppose the large table is mydb.logtable).
ASPECT #1
For starters, think about the MyISAM storage engine. It only caches index pages in the MyISAM key cache (sized by key_buffer_size). If any query against mydb.logtable reads a significant number of index pages from /var/lib/mysql/mydb/logtable.MYI, that query may purge index pages for other tables out of the key cache. This would force other queries against those tables to reread their index pages from disk.
ASPECT #2
Any query against a large MyISAM table that is infrequently accessed may run against stale index statistics, possibly resulting in full table scans down the road. During off hours, you should set up a crontab job that runs this SQL command:
ANALYZE TABLE mydb.logtable;
This will rebuild the index statistics.
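As a sketch, the crontab entry might look like the following (the schedule, binary path, and the assumption that credentials live in a [client] section of ~/.my.cnf are all placeholders to adjust for your setup):

```shell
# Rebuild index statistics nightly at 03:30 (hypothetical schedule)
30 3 * * * /usr/bin/mysql -e "ANALYZE TABLE mydb.logtable;"
```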
ASPECT #3
Regardless of storage engine (or even RDBMS), any query that requests more than roughly 5% of a table's rows can cause the MySQL query optimizer to stop using indexes and do a full table scan. For example, if you query one year of data and the table contains less than 20 years of log info, that year is more than 5% of the table, so a full table scan is very probable.
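You can check whether this is happening with EXPLAIN: a type of ALL with key NULL means a full table scan. A hedged sketch (the log_date column is an assumption about your schema):

```sql
EXPLAIN SELECT * FROM mydb.logtable
WHERE log_date >= '2012-01-01'
  AND log_date <  '2013-01-01';
-- In the output, type: ALL and key: NULL indicate a full table scan,
-- even if an index on log_date exists.
```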
ASPECT #4
If the cardinality of an index is very low, a certain key combination could result in a query not using any index and doing a full table scan.
For a demonstration of low cardinality affecting query performance and EXPLAIN plan generation, see my Nov 13, 2012 post Must an index cover all selected columns for it to be used for ORDER BY?
SUGGESTION
You should schedule some form of log rotation. Perhaps this:
SET @tb1 = 'mydb.logtable';
SET @tb2 = CONCAT(@tb1,'_',date_format(now(),'%Y%m%d_%H%i%s'));
SELECT CONCAT('ALTER TABLE ',@tb1,' RENAME ',@tb2) INTO @sql_1;
SELECT CONCAT('CREATE TABLE ',@tb1,' LIKE ',@tb2) INTO @sql_2;
PREPARE s FROM @sql_1; EXECUTE s; DEALLOCATE PREPARE s;
PREPARE s FROM @sql_2; EXECUTE s; DEALLOCATE PREPARE s;
This will keep the live log table lean and mean, while the renamed table archives everything written up to the moment of the rotation.
Your max_connect_errors is way too low. You have it at 1000. That means after 1000 consecutive interrupted connection attempts from a given host, that host can no longer connect to MySQL.
Your status variable Aborted_connects should be the dead giveaway if it climbs fast.
When you can no longer connect to MySQL, even with a bunch of open DB connections, you would have to execute FLUSH HOSTS; which resets the count to 0.
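A quick way to watch the counter and clear a blocked host, sketched with standard commands:

```sql
-- Watch for a fast-climbing Aborted_connects
SHOW GLOBAL STATUS LIKE 'Aborted_connects';

-- If a host has been blocked, clear the per-host error counters
FLUSH HOSTS;
```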
I would raise max_connect_errors to 1000000000 (1 billion).
I would also consider lowering the following:
[mysqld]
interactive_timeout = 30
wait_timeout = 30
Best Answer
Unused columns do carry a cost, but rarely one that most people will notice or care about in systems with well-written, targeted queries. If you are curious about testing performance and contributing in a meaningful way to the WordPress project, you might want to consider doing something like this:
Make a workable backup of your WordPress database. For example:
Note: This assumes you would create backups via a command-line. If you prefer GUIs, make a backup in your preferred way.
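A minimal command-line sketch, assuming mysqldump is available and the database is named wordpress (the user, database name, and file name are placeholders; --single-transaction only gives a consistent snapshot for InnoDB tables):

```shell
# Dump the whole WordPress database to a timestamped file (hypothetical names)
mysqldump -u wpuser -p --single-transaction wordpress > wordpress_backup_$(date +%Y%m%d).sql
```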
Drop the columns that you think you can live without:
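For example, dropping a column looks like this; the table and column names below are placeholders for illustration, not a recommendation of what is safe to remove:

```sql
-- Placeholder names: verify against your own schema and backup first
ALTER TABLE wp_posts DROP COLUMN ping_status;
```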
See what breaks over the next few days. If WordPress is unhappy about something, it is not shy about telling the world that something is wrong.
Then, if nothing bad happens...
Begin modifying the WordPress API to handle those fields differently (if at all) and tweak the database to be more performant.
Working on a large project like WordPress can bring a great deal of satisfaction, particularly if any of your modifications are adopted by the core project.