If the tablespace's free space falls below either of these levels, a warning or critical alert is raised. Change these limits to whatever values (in bytes) you need. If you wish to disable this monitoring completely, set both values to 0:
[mysql_innodb]
env.warning 0
env.critical 0
Restart munin:
/etc/init.d/munin-node restart
su munin -c /usr/bin/munin-cron
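For example, to warn below roughly 2 GB of free tablespace and go critical below roughly 1 GB, the plugin's environment block might look like the fragment below. Note the file path and the threshold values here are illustrative assumptions (the plugin-conf.d filename varies by distribution); the values are in bytes:

```
# e.g. in /etc/munin/plugin-conf.d/munin-node -- path varies by distribution
[mysql_innodb]
env.warning 2147483648
env.critical 1073741824
```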
I'm not looking for alternative ways to do the updates. Having things
like a tmp table [will] lock all the rows that are being updated until
they all finish (which could be hours), which won't work for me. They
MUST be in these awful loops.
I disagree.
The strength of an RDBMS is in performing set operations like "update all these rows plz". Given this, your intuition should tell you that these "awful loops" are not the best way to go except under very rare circumstances.
Let's take a look at your current update logic and understand what it's doing.
First off, the set autocommit=0 line in your script is unnecessary. Because you explicitly open a transaction immediately after it with START TRANSACTION, autocommit is automatically disabled until you end the transaction with COMMIT or ROLLBACK.
Now for the meat of the logic: you've wrapped all the individual updates inside the loop in one big transaction. If your intention behind the iterative updates was to reduce locking and increase concurrency, the wrapping transaction defeats that intention. MySQL must maintain locks on every row it updates until the transaction commits, so that it can roll them all back at once if the transaction fails or is cancelled. Furthermore, instead of knowing in advance that it is about to lock a whole range of rows (which would let MySQL issue locks at the appropriate granularity), the engine is forced to issue a large number of row-level locks in rapid succession. Given that you are updating 1 million rows, this is a massive burden on the engine.
I propose two solutions:
Turn autocommit on and remove the transaction wrapper. MySQL will then be able to release every row lock right after it finishes updating the row. It is still forced to issue and release a massive number of locks in a short period of time, so I doubt this will be an appropriate fix for you. Furthermore, if some error occurs halfway through the loop, nothing will be rolled back since the work is not transaction-bound.
Batch your updates in a temp table. You mentioned and then dismissed this solution, but I bet it will perform best. Have you already tried it? I would first test the full million-row update. If that takes too long then batch the work into progressively smaller chunks until you've found the sweet spot: the batches are big enough to get the total work done quickly, but no individual batch blocks other processes for too long. This is a common technique DBAs use when they have to modify a large number of rows during live operations. Remember, since your goal is to maximize your concurrency, keep autocommit on and don't wrap any of this work into a massive transaction so MySQL releases its locks as soon as possible.
Notice that as the batches become progressively smaller, this solution eventually approximates the first one. That is why I am confident this solution will perform better: When the database engine can group its work into chunks, it flies.
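The batching approach in the second solution can be sketched generically. Here is a minimal, runnable Python demonstration using sqlite3 as a stand-in for MySQL; the orders table, status column, and batch size are all hypothetical, but the pattern (short, independently committed batches instead of one giant transaction) is the same:

```python
import sqlite3

# Hypothetical schema: a table of rows whose status column needs a
# mass update. sqlite3 stands in for MySQL so the sketch is
# self-contained; the batching pattern itself is engine-agnostic.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders (status) VALUES (?)",
                 [("pending",)] * 10_000)
conn.commit()

BATCH = 1_000  # tuning knob: large enough for throughput, small enough
               # that no single batch holds its locks for long

total = 0
while True:
    # Each batch is its own short transaction, so locks are released
    # at every commit instead of accumulating for the whole job.
    cur = conn.execute(
        "UPDATE orders SET status = 'done' WHERE id IN "
        "(SELECT id FROM orders WHERE status = 'pending' LIMIT ?)",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break
    total += cur.rowcount

print(total)  # 10000
```

Shrinking BATCH toward 1 reproduces the first solution (one commit per row); growing it toward the table size reproduces the single giant transaction. The sweet spot lies in between.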
Since the InnoDB storage engine appears operational at the time mysql is restarted, you should take the time to clean up the InnoDB infrastructure:
mysqldump the entire database instance
reconfigure innodb from scratch
Start over with a fresh ibdata1 file
Empty Undo Space
Empty Rollback Segments
Empty Data Dictionary
Empty List of Tablespace IDs
Store Each InnoDB table in its own tablespace
Configure Buffer Pool to be 4 times the Log File Size
Perform Better Flushing of the Buffer Pool
reload mysql data
Add this to my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
#innodb_force_recovery = 4
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
innodb_log_file_size=256M
innodb_file_per_table
innodb_buffer_pool_size=1G
innodb_flush_method=O_DIRECT
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
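As a quick sanity check, the values above follow the "buffer pool = 4 times the log file size" guideline from the checklist; this ratio is a rule of thumb, not a hard requirement:

```python
# Sanity check of the ratio between the two my.cnf settings above:
# innodb_buffer_pool_size=1G vs innodb_log_file_size=256M
log_file_size = 256 * 1024 ** 2      # 256M in bytes
buffer_pool_size = 1 * 1024 ** 3     # 1G in bytes

ratio = buffer_pool_size / log_file_size
print(ratio)  # 4.0
```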
Run these commands to dump everything, rebuild MySQL's data directory from scratch, and reload the data:
service mysql restart --skip-networking --skip-grant-tables
mysqldump --all-databases --routines --triggers > /root/MySQLData.sql
service mysql stop
mv /var/lib/mysql /var/lib/mysql_old
mkdir /var/lib/mysql
mkdir /var/lib/mysql/mysql
cp /var/lib/mysql_old/mysql/* /var/lib/mysql/mysql/.
chown -R mysql:mysql /var/lib/mysql
service mysql start --skip-networking --skip-grant-tables
mysql < /root/MySQLData.sql
service mysql restart
I wouldn't think this is why your site is very slow. You can change the alert levels by editing the munin plugin configuration file; in that file you should see the env.warning and env.critical settings shown at the top of this answer.