Surprisingly, that's not gibberish.
That indeed appears at the top of the output whenever you run mysqlbinlog against a binary log generated by MySQL 5.1 or MySQL 5.5. You will not see that gibberish in binary logs from MySQL 5.0 and earlier.
This is why the starting position for replication from an empty binary log is:
- 107 for MySQL 5.5
- 106 for MySQL 5.1
- 98 for MySQL 5.0 and back
This is good to remember if you run MySQL Replication where the Master is MySQL 5.1 and the Slave is MySQL 5.0, because that combination can present a really big headache.
Replication from a 5.0 Master to a 5.1 Slave works fine, but not the other way around. (According to the MySQL documentation, replicating from a newer Master to an older Slave is generally not supported for three reasons: 1) binary log format, 2) row-based replication, 3) SQL incompatibilities.)
Anyway, run mysqlbinlog on the offending binary log on the master. If the resulting dump produces gibberish in the middle of the output (which I have seen a couple of times in my DBA career), you may have to skip to position 98 (MySQL 5.0), 106 (MySQL 5.1), or 107 (MySQL 5.5) of the master's next binary log and start replicating from there. The painful part is recovering whatever the slave missed: you may need to use the Maatkit tools mk-table-checksum and mk-table-sync to reload master changes not on the slave (if you want to be a hero), or, even worse, mysqldump the master, reload the slave, and start replication totally over (if you don't want to be a hero).
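For example, on a MySQL 5.5 master, pointing the slave at the start of the next binary log would look something like this (the file name mysql-bin.000002 is just a stand-in for whatever the master's next log actually is):
STOP SLAVE;
CHANGE MASTER TO
MASTER_LOG_FILE='mysql-bin.000002',
MASTER_LOG_POS=107;
START SLAVE;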
If the mysqlbinlog output of the master's log is completely readable after the gibberish at the top, it is possible the master's binary log is fine but the relay log on the slave is corrupt (due to transmission/CRC errors). If that's the case, just rebuild the relay logs by issuing the CHANGE MASTER TO command as follows:
STOP SLAVE;
CHANGE MASTER TO
MASTER_HOST='< master-host ip or DNS >',
MASTER_PORT=3306,
MASTER_USER='< username >',
MASTER_PASSWORD='< password >',
MASTER_LOG_FILE='< MMMM >',
MASTER_LOG_POS=< PPPP >;
START SLAVE;
Where
- MMMM is the Master's binary log file that was last processed on the Slave
- PPPP is the position within that file that was last processed on the Slave
You can get MMMM and PPPP by doing SHOW SLAVE STATUS\G and using:
- Relay_Master_Log_File for MMMM
- Exec_Master_Log_Pos for PPPP
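Here is an abbreviated example of what that looks like (the file name and position shown are illustrative):
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
...
        Relay_Master_Log_File: mysql-bin.000042
...
          Exec_Master_Log_Pos: 4857
...
In this example you would use MASTER_LOG_FILE='mysql-bin.000042' and MASTER_LOG_POS=4857.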
Try it out and let me know !!!
BTW, running the CHANGE MASTER TO command erases the slave's current relay logs and starts fresh.
...even surpassing its theoretical maximum possible allocation.
[OK] Maximum possible memory usage: 7.3G (46% of installed RAM)
There is not actually a way to calculate maximum possible memory usage for MySQL, because there is no cap on the memory it can request from the system.
The calculation done by mysqltuner.pl is only an estimate, based on a formula that doesn't take into account all possible variables, because if all possible variables were taken into account, the answer would always be "infinite." It's unfortunate that it's labeled this way.
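Roughly speaking, estimates of this kind add the global buffers to max_connections times the per-thread buffers. Here is a sketch of that arithmetic as a query; this is only an approximation in the same spirit, not mysqltuner.pl's exact formula, and it assumes InnoDB is enabled:
SELECT ( @@key_buffer_size + @@innodb_buffer_pool_size + @@query_cache_size
       + @@max_connections * ( @@sort_buffer_size + @@read_buffer_size
       + @@read_rnd_buffer_size + @@join_buffer_size + @@thread_stack )
       ) / 1024 / 1024 / 1024 AS estimated_max_gb;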
Here is my theory on what's contributing to your excessive memory usage:
thread_cache_size = 128
Given that your max_connections is set to 200, the value of 128 for thread_cache_size seems far too high. Here's what makes me think this might be contributing to your problem:
When a thread is no longer needed, the memory allocated to it is released and returned to the system unless the thread goes back into the thread cache. In that case, the memory remains allocated.
http://dev.mysql.com/doc/refman/5.6/en/memory-use.html
If your workload causes even an occasional client thread to require a large amount of memory, those threads may hold onto that memory, then go back into the pool and sit there, continuing to hold memory they don't technically "need" any more, on the premise that keeping the memory is less costly than releasing and reallocating it if the thread is likely to need it again.
I think it's worth a try to do the following, after first making a note of how much memory MySQL is using at the moment.
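On Linux, for instance, you can snapshot the resident and virtual size of mysqld before and after the experiment (pidof and these ps options assume a fairly typical Linux install):
$ ps -o rss,vsz -p $(pidof mysqld)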
Note how many threads are currently cached:
mysql> show status like 'Threads_cached';
+----------------+-------+
| Variable_name  | Value |
+----------------+-------+
| Threads_cached | 9     |
+----------------+-------+
1 row in set (0.00 sec)
Next, disable the thread cache.
mysql> SET GLOBAL thread_cache_size = 0;
This disables the thread cache, but the cached threads will stay in the pool until they're used one more time. Disconnect from the server, then reconnect and repeat.
mysql> show status like 'Threads_cached';
Continue disconnecting, reconnecting, and checking until the counter reaches 0.
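Since each invocation of the mysql command-line client opens a fresh connection, a quick way to burn through the cached threads is to repeat a one-liner like this from the shell (assuming your credentials are in ~/.my.cnf):
$ mysql -e "SHOW STATUS LIKE 'Threads_cached'"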
Then, see how much memory MySQL is holding.
You may see a decrease, possibly significant, and then again you may not. I tested this on one of my systems, which had 9 threads in the cache. Once those threads had all been cleared out of the cache, the total memory held by MySQL did decrease... not by much, but it does illustrate that threads in the cache do release at least some memory when they are destroyed.
If you see a significant decrease, you may have found your problem. If you don't, then there's one more thing that needs to happen, and how quickly it can happen depends on your environment.
If the theory holds that the other threads -- the ones currently servicing active client connections -- have significant memory allocated to them, either because of recent work in their current client session or because of memory-hungry work done by a previous connection before the thread sat idle in the pool, then you won't see the full reduction in memory consumption until those threads are allowed to die and be destroyed. Presumably your application doesn't hold its connections forever, but how long it takes to know for sure will depend on whether you have the option of cycling your application (dropping and reconnecting the client connections) or whether you'll have to wait for them to be dropped and reconnected over time on their own.
But... it seems like a worthwhile test. You should not see a substantial performance penalty from setting thread_cache_size to 0. Fortunately, thread_cache_size is a dynamic variable, so you can freely change it with the server running.
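And if the experiment doesn't pan out, putting the original value back is just as easy:
mysql> SET GLOBAL thread_cache_size = 128;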
The table locks on MyISAM can be a killer, and migrating to InnoDB is probably one of the best things you can do to continue to improve scalability. Of course, your change to innodb_buffer_pool_size won't impact tables that aren't InnoDB.
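If you do migrate, the conversion itself is a straightforward ALTER per table. A minimal sketch, where mydb.mytable is a placeholder for your own schema and table names:
-- Find the tables still using MyISAM (the mysql system schema must stay MyISAM on these versions):
SELECT table_schema, table_name
FROM information_schema.tables
WHERE engine = 'MyISAM'
AND table_schema NOT IN ('mysql', 'information_schema');
-- Convert one table at a time; this rebuilds the table and locks it for the duration:
ALTER TABLE mydb.mytable ENGINE=InnoDB;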
One problem, however, is that the version of InnoDB in MySQL 5.0 is still quite primitive compared to later releases, especially when it comes to multicore machines and internal scaling. This is discussed at length in High Performance MySQL, which was last updated a little over a year ago but is still quite useful, even though MySQL 5.6 is discussed in a mix of present and future tense due to its release status at the time. I don't have any affiliation with the book or its authors; I just think it's a great reference. If you don't have it, I'd recommend it, because it goes into a lot of detail about what to do, what not to do, and why.
But, if the system under consideration were my system, I'd be planning to upgrade to MySQL 5.5, at a minimum, and possibly MySQL 5.6, because of the significant improvements in the internals of InnoDB as well as elsewhere.
Looking at your config, the query cache is always something to consider when you're looking at performance issues. It could be that a larger cache would help (perhaps 128M to 256M), but it's also possible that a smaller or disabled query cache would be beneficial, since it represents a global choke-point that every SELECT query has to pass through. The appropriate setting is almost entirely workload-specific.
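Both of these settings are dynamic, so if you want to experiment on the running server, something along these lines works (the sizes are just the ones mentioned above):
-- try a larger cache:
mysql> SET GLOBAL query_cache_size = 128 * 1024 * 1024;
-- or disable it entirely, taking the choke-point out of the picture:
mysql> SET GLOBAL query_cache_size = 0;
mysql> SET GLOBAL query_cache_type = 0;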
Nothing jumps out at me in your configuration as being particularly sub-optimal, but I would add that if you've been tempted to use any of the tuning "scripts" you find online... try to resist that temptation. "Tune" only what you have a specific reason to tune, and only one parameter at a time. After you successfully upgrade, try to remove as many of the customized values as possible (except innodb_buffer_pool_size) and let the behavior of the new version with its default values dictate what needs to be tweaked.
The officially recommended path when upgrading across versions is always to do a full mysqldump from the old installation and restore it on the new one, though it is possible to do a "binary" upgrade, where you simply start the new version's code against the old version's datadir and run mysql_upgrade. The official path would be to go from 5.0 to 5.1, and then from 5.1 to 5.5.
http://dev.mysql.com/doc/refman/5.1/en/upgrading-from-previous-series.html
http://dev.mysql.com/doc/refman/5.5/en/upgrading-from-previous-series.html
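For what it's worth, a sketch of both approaches (file names and credentials are placeholders, and you'd repeat this once per version hop):
# official path: dump everything from the old server, restore into the new one
$ mysqldump --all-databases --routines --triggers > full_dump.sql
$ mysql < full_dump.sql
# "binary" upgrade: start the new mysqld against the old datadir, then run
$ mysql_upgrade -u root -p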
There's a lot to digest, but the bottom line is that 5.0 is at end-of-life and hasn't had so much as a bugfix release in over a year... and the newer versions represent a substantially improved product that should not require major changes in your application.