Surprisingly, that's not gibberish.
That indeed appears at the top of the output whenever you run mysqlbinlog against a binary log generated by MySQL 5.1 or MySQL 5.5. You will not see that gibberish in binary logs from MySQL 5.0 and back.
This is why the start point for replication from an empty binary log is
- 107 for MySQL 5.5
- 106 for MySQL 5.1
- 98 for MySQL 5.0 and back
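If you want to see where those numbers come from, look at the first event of a binary log (a quick sketch; mysql-bin.000001 is a placeholder name):
SHOW BINLOG EVENTS IN 'mysql-bin.000001' LIMIT 1;
The one row returned is the format description header event, and its End_log_pos column is exactly the starting position listed above for your version.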
This is good to remember if you run MySQL Replication where the Master is MySQL 5.1 and the Slave is MySQL 5.0. That combination can present a really big headache.
Replication from a Master using 5.0 to a Slave using 5.1 works fine, but not the other way around. (According to the MySQL Documentation, replicating from a newer Master to an older Slave is generally not supported for three reasons: 1) binary log format, 2) row-based replication, 3) SQL incompatibilities.)
Anyway, run mysqlbinlog against the offending binary log on the master. If the resulting dump produces gibberish in the middle of the output (which I have seen a couple of times in my DBA career), you may have to skip to position 98 (MySQL 5.0), 106 (MySQL 5.1), or 107 (MySQL 5.5) of the master's next binary log and start replicating from there. The bad news :( is that the slave then misses whatever the corrupt section contained, so you may need the Maatkit tools mk-table-checksum and mk-table-sync to find and reload the master changes missing on the slave (if you want to be a hero); even worse, you may have to mysqldump the master, reload the slave, and start replication totally over (if you don't want to be a hero; a rough sketch of that follows).
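Here is a sketch of the non-hero path (the dump file name is a placeholder, and it assumes the slave already has the master host and replication credentials configured):
mysqldump -u... -p... --all-databases --single-transaction --master-data=1 --routines --triggers > fulldump.sql
mysql -u... -p... -e"STOP SLAVE"
mysql -u... -p... < fulldump.sql
mysql -u... -p... -e"START SLAVE"
Run the mysqldump on the master and the other three commands on the slave. Because --master-data=1 embeds an uncommented CHANGE MASTER TO with the correct MASTER_LOG_FILE and MASTER_LOG_POS into the dump, you do not have to work out the coordinates by hand.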
If the mysqlbinlog dump of the master's log is completely readable after the gibberish at the top, the master's binary log is probably fine and the relay log on the slave may be corrupt instead (due to transmission/CRC errors). If that's the case, just rebuild the relay logs by issuing the CHANGE MASTER TO command as follows:
STOP SLAVE;
CHANGE MASTER TO
MASTER_HOST='< master-host ip or DNS >',
MASTER_PORT=3306,
MASTER_USER='< username >',
MASTER_PASSWORD='< password >',
MASTER_LOG_FILE='< MMMM >',
MASTER_LOG_POS=< PPPP >;
START SLAVE;
Where
- MMMM is the last binary log file from the Master that the Slave processed
- PPPP is the last position within that file that the Slave executed
You can get MMMM and PPPP by running SHOW SLAVE STATUS\G and using
- Relay_Master_Log_File for MMMM
- Exec_Master_Log_Pos for PPPP
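For example, in a hypothetical SHOW SLAVE STATUS\G excerpt (the file name and position here are made up):
Relay_Master_Log_File: mysql-bin.000147
Exec_Master_Log_Pos: 314159
you would then use MASTER_LOG_FILE='mysql-bin.000147' and MASTER_LOG_POS=314159 in the CHANGE MASTER TO above.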
Try it out and let me know !!!
BTW, running the CHANGE MASTER TO command erases the slave's current relay logs and starts fresh ones.
It is sort of yes and no. Why would I say both?
There is still transactional data embedded in the old ib_logfiles, working in conjunction with the ibdata1 file.
What you should have done is this:
# have InnoDB do a full purge and insert buffer merge on the next shutdown
mysql -u... -p... -ANe"SET GLOBAL innodb_fast_shutdown = 0"
/etc/init.d/mysql.server stop
# move both redo log files out of the way
mv ib_logfile0 ib_logfile0.OLD
mv ib_logfile1 ib_logfile1.OLD
/etc/init.d/mysql.server start
If you did not know about disabling innodb_fast_shutdown, then your steps should have been:
/etc/init.d/mysql.server stop
/etc/init.d/mysql.server start
/etc/init.d/mysql.server stop
mv ib_logfile0 ib_logfile0.OLD
mv ib_logfile1 ib_logfile1.OLD
/etc/init.d/mysql.server start
Either of these sequences makes sure every pending write to tables and indexes has been fully applied before the redo logs are moved aside. Just looking at the error log, I think you dodged a bullet in this instance. In the future, please follow this protocol.
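If you want to confirm the shutdown was clean before moving the logs aside, check the error log (a sketch; the error log path varies by platform):
tail -n 50 /var/log/mysql/error.log | grep -i shutdown
A clean shutdown ends with a line like 'InnoDB: Shutdown completed; log sequence number ...'.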
If you are not sure either way, you can put back the old log files, restore the old innodb_log_file_size setting, and restart mysql. Then start the process over again as I prescribed above (see the sketch below).
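A minimal sketch of that rollback, assuming you renamed the files with a .OLD suffix as above:
/etc/init.d/mysql.server stop
mv ib_logfile0.OLD ib_logfile0
mv ib_logfile1.OLD ib_logfile1
# put the old innodb_log_file_size back in my.cnf before starting
/etc/init.d/mysql.server start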
Best Answer
This is probably not cause for concern.
MySQL writes a new header to the file each time the logs are flushed. Presumably this is just in case you rotated the log file, so the new file will have a header... but it doesn't actually check whether it's a new file or not. The server does not have to restart to write this entry, so it doesn't mean the server is necessarily restarting.
The rdsadmin user you see in the processlist appears to be the supervisory connection that Amazon uses to monitor and manage each instance. Something -- presumably that connection -- periodically rotates the log files, most likely with some variant of FLUSH LOGS. It sounds like the flush occurs more often than the rotate, which would exactly explain what you're seeing.
You can check the server's actual uptime, for example with:
SHOW GLOBAL STATUS LIKE 'Uptime';
This will give you the actual uptime of the instance in seconds. If that value is high, this is just the server writing a new header when the logs are being flushed to disk.