Surprisingly, that's not gibberish.
That indeed appears at the top of the dump whenever you run mysqlbinlog against a binary log generated by MySQL 5.1 or MySQL 5.5. You will not see that gibberish in binary logs from MySQL 5.0 and earlier.
This is why the start point for replication from an empty binary log is
- 107 for MySQL 5.5
- 106 for MySQL 5.1
- 98 for MySQL 5.0 and back
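If you want to verify the start point on your own server, the End_log_pos of the very first event in the binary log (the format description event) is that number. A quick check, assuming your first binary log is named mysql-bin.000001:
SHOW BINLOG EVENTS IN 'mysql-bin.000001' LIMIT 1\G
On MySQL 5.5 the Event_type should be Format_desc and End_log_pos should read 107; on 5.1 it reads 106, and on 5.0 it reads 98.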
This is good to remember if you run MySQL Replication where the Master is MySQL 5.1 and the Slave is MySQL 5.0. That setup can present a really big headache.
Replication from a 5.0 Master to a 5.1 Slave works fine, but not the other way around. (According to the MySQL Documentation, replicating from a newer Master to an older Slave is generally not supported for three reasons: 1) Binary Log Format, 2) Row-Based Replication, 3) SQL Incompatibility.)
Anyway, run mysqlbinlog against the offending binary log on the Master. If the resulting dump produces gibberish in the middle of the dump (which I have seen a couple of times in my DBA career), you may have to skip to position 98 (MySQL 5.0), 106 (MySQL 5.1), or 107 (MySQL 5.5) of the Master's next binary log and start replicating from there. SOB :( In that case you may need the Maatkit tools mk-table-checksum and mk-table-sync to reload the Master changes missing on the Slave (if you want to be a hero); even worse, you may have to mysqldump the Master, reload the Slave, and start replication totally over (if you don't want to be a hero).
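For the hero route, the Maatkit pair would look roughly like this; the hostnames, user, and password below are placeholders, and you should check the mk-table-checksum and mk-table-sync documentation for the exact options in your version:
# checksum the tables on the Master and the Slave to find the ones that differ
mk-table-checksum h=master.mydomain.com,u=checker,p=XXXX h=slave.mydomain.com,u=checker,p=XXXX
# resync the Slave from its own Master for the tables that differ
mk-table-sync --execute --sync-to-master h=slave.mydomain.com,u=checker,p=XXXX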
If the mysqlbinlog dump of the Master's binary log is completely readable after the gibberish at the top, it is possible the Master's binary log is fine but the relay log on the Slave is corrupt (due to transmission/CRC errors). If that's the case, just reload the relay logs by issuing the CHANGE MASTER TO command as follows:
STOP SLAVE;
CHANGE MASTER TO
MASTER_HOST='< master-host ip or DNS >',
MASTER_PORT=3306,
MASTER_USER='< username >',
MASTER_PASSWORD='< password >',
MASTER_LOG_FILE='< MMMM >',
MASTER_LOG_POS=< PPPP >;
START SLAVE;
Where
- MMMM is the Master's binary log file whose events were last processed on the Slave
- PPPP is the position within that file of the last event processed on the Slave
You can get MMMM and PPPP by doing SHOW SLAVE STATUS\G
and using
- Relay_Master_Log_File for MMMM
- Exec_Master_Log_Pos for PPPP
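For example, in the SHOW SLAVE STATUS\G output (the file name and position below are made up; yours will differ):
Relay_Master_Log_File: mysql-bin.000238   <== use this for MMMM
  Exec_Master_Log_Pos: 107                <== use this for PPPP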
Try it out and let me know !!!
BTW running the CHANGE MASTER TO command erases the Slave's current relay logs and starts fresh.
ASPECT #1
The first thing that caught my eye was this line
InnoDB: Error: trying to load index PRIMARY for table /
This indicates you have a table using the InnoDB Storage Engine
What is interesting about InnoDB is the way a PRIMARY KEY is stored. It is stored in a structure called the gen_clust_index, more commonly known as the Clustered Index.
My immediate guess is that a certain PRIMARY KEY entry is too big
Please consider some articles on the good, the bad, and the ugly of using long PRIMARY KEYs:
then see if <DB Hidden>.<Table Hidden> needs to be redesigned.
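If you want to size up the PRIMARY KEY yourself first, something like the following shows which columns it covers and how wide they are; mydb and mytable are placeholders for the real schema and table names:
-- columns that make up the PRIMARY KEY, in order
SELECT column_name, seq_in_index
FROM information_schema.STATISTICS
WHERE table_schema = 'mydb'
  AND table_name   = 'mytable'
  AND index_name   = 'PRIMARY'
ORDER BY seq_in_index;
-- data types (and therefore widths) of those columns
SELECT column_name, column_type
FROM information_schema.COLUMNS
WHERE table_schema = 'mydb'
  AND table_name   = 'mytable';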
ASPECT #2
In terms of your conjecture concerning a parallel truncate table, that sounds kind of dangerous. Why? InnoDB performs TRUNCATE TABLE as DDL, not DML. I have written about this before.
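A quick way to convince yourself of that, against a throwaway table rather than the real one (test.throwaway is just a placeholder), is that the truncate commits implicitly and survives a rollback:
START TRANSACTION;
TRUNCATE TABLE test.throwaway;       -- DDL: causes an implicit commit
ROLLBACK;                            -- has no effect; the rows are already gone
SELECT COUNT(*) FROM test.throwaway; -- returns 0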
ASPECT #3
Some tuning suggestions
Please add the following to my.ini
[mysqld]
max_allowed_packet=1G
innodb_fast_shutdown=0
Start mysql. In another session, run
tail -f <errorlogfile>
and watch InnoDB Crash Recovery.
Once mysql is fully started back up and InnoDB Crash Recovery has completed, try to shut mysql down immediately. You may need to resize your InnoDB Transaction Logs.
Sorry for these wild suggestions, but I am flying blind here.
Please post the following in the question:
- your entire my.cnf
- how much RAM is on board
UPDATE 2012-12-05 12:09 EDT
Please do the following:
STEP 01) Add these changes to my.cnf
[mysqld]
max_allowed_packet=1G
innodb_fast_shutdown=0
innodb_thread_concurrency=0
STEP 02) service mysql restart
to make sure mysql comes up
STEP 03) You need to resize ib_logfile0 and ib_logfile1 (24M might be too small)
service mysql stop
cd /var/lib/mysql
mv ib_logfile0 ib_logfile0.bak
mv ib_logfile1 ib_logfile1.bak
STEP 04) Add these changes to my.cnf
[mysqld]
innodb_log_file_size=512M
innodb_log_buffer_size=8M
STEP 05) service mysql start
mysqld will recreate ib_logfile0 and ib_logfile1 at 512M each
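To double-check that the new files came back at the right size (path assumes the default datadir of /var/lib/mysql, as in STEP 03):
ls -lh /var/lib/mysql/ib_logfile*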
Now, try and see what happens....
UPDATE 2012-12-05 12:18 EDT
In the meantime, please read my ServerFault post on the MySQL packet and its sizing implications for innodb_log_file_size and innodb_log_buffer_size, which I learned about from someone else's ServerFault post.
UPDATE 2012-12-05 14:28 EDT
I edited all references to customer tables out of this question.
The root cause was a damaged page in ibdata1, with data and index pages mixed inside. I helped Andrew migrate the data out and recreate ibdata1 with innodb_file_per_table, and Andrew reloaded the data.
Best Answer
PERSPECTIVE #1
When you update by Primary Key only in InnoDB, there is a rare but possible occasion when the clustered index (aka gen_clust_index) can get locked.
I once answered three posts from one individual on this subject
Please read through these carefully. The poster of those questions found his own workaround based on seeing the InnoDB Clustered Index Locking behavior. Sadly, he did not post what the workaround was.
In addition, when you see the queries running slow, log into mysql and run
SHOW ENGINE INNODB STATUS\G
and start looking for locks in the Clustered Index.
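If you are on MySQL 5.5 (or 5.1 with the InnoDB Plugin), you can also query the INFORMATION_SCHEMA lock tables directly; a rough sketch of seeing who is waiting on whom:
SELECT w.requesting_trx_id, r.trx_mysql_thread_id AS waiting_thread, r.trx_query AS waiting_query,
       w.blocking_trx_id,   b.trx_mysql_thread_id AS blocking_thread, b.trx_query AS blocking_query
FROM information_schema.INNODB_LOCK_WAITS w
INNER JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
INNER JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;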
PERSPECTIVE #2
I see you commented out innodb_log_file_size, which leaves it at the 5M default. Since innodb_buffer_pool_size is set to 1G, innodb_log_file_size needs to be 256M. Click here to see how to go about setting innodb_log_file_size to 256M.
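A minimal my.cnf sketch for that change; keep in mind that on MySQL 5.5 and earlier you must shut mysql down cleanly and move the old ib_logfile0 and ib_logfile1 out of the way before restarting, or InnoDB will complain that the existing log files are a different size:
[mysqld]
innodb_log_file_size=256M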
PERSPECTIVE #3
I see you are not using innodb_file_per_table. You may want to use it so that the one table with a million rows sits in its own tablespace and its updates are confined to that file. Click here to see how to Clean Up the InnoDB infrastructure to use innodb_file_per_table.
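A minimal sketch of that change; mydb.mytable is a placeholder for the real table name. Note that tables already living inside ibdata1 stay there until you rebuild them:
[mysqld]
innodb_file_per_table=1
then, for each table you want moved into its own .ibd file:
ALTER TABLE mydb.mytable ENGINE=InnoDB;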