This was a pretty rare situation, and I hope to never end up there again, staring at an InnoDB "log sequence number is in the future!" error. Because of my particular circumstances, rebuilding/restoring my server's data was a last resort. Some of the shortcuts suggested for that were good ideas, but in the end I decided to just keep improving my Perl script to play this silly game and churn through as many gigs per hour as I could. What the heck, it's a good system stress test.
Remember: the goal is to increase a single counter ("log sequence number") which is stored somewhere in the headers of ib_logfile0 and ib_logfile1. This is to fake out InnoDB so it will ignore an apparent time warp and get on with life. But no one knows how to edit that number. Or if they do know, no one's talking.
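If you're curious, you can at least stare at the raw header bytes. This is purely exploratory -- the exact offsets of the checkpoint/LSN fields vary by InnoDB version, and the path assumes the default datadir -- so this shows, not edits, anything:

# Dump the start of the log file header; the LSN lives somewhere in here.
hexdump -C /var/lib/mysql/ib_logfile0 | head -n 32

hexdump -C prints offsets, hex, and ASCII side by side, which makes it easy to compare the two log files by eye.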
Here's my final product. YMMV, but using MySQL's REPEAT() function to generate the data internally is highly efficient: the server expands a tiny pattern into megabytes of row data, so almost nothing crosses the client connection.
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $table = shift or die "usage: $0 <table>\n";

# Edit "junk" (DB name), user, and pass to suit.
my $dbh = DBI->connect("DBI:mysql:junk:host=localhost", "user", "pass",
                       { RaiseError => 1 });

$dbh->do("DROP TABLE IF EXISTS $table");
# MEDIUMTEXT: plain TEXT tops out at 64 KB and would silently truncate 10 MB rows.
$dbh->do("CREATE TABLE $table (str MEDIUMTEXT) ENGINE=INNODB");

# REPEAT() builds the 10 MB value server-side, so almost nothing crosses the wire.
my $sth = $dbh->prepare("INSERT INTO $table (str) VALUES (REPEAT(?,1000000))");
$sth->execute('0123456789') for 1 .. 50;   # 50 x 10 MB

# Clear the table for the next pass; the writes already advanced the LSN.
$dbh->do("DELETE FROM $table");
$dbh->disconnect;
My suggested recipe:
- Create a 'junk' database.
- Save the above Perl script as junk.pl and make it executable.
- Run junk.pl data1, junk.pl data2, junk.pl data3, etc. all at once -- one per CPU core to start. Open multiple shells and wrap each run in a Bash loop (or use the launcher sketched after the loop below):
while true; do date; junk.pl dataX; done
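If you'd rather not juggle shells by hand, a throwaway launcher can spawn one loop per core. A sketch, assuming junk.pl is executable in the current directory and nproc is available:

#!/bin/bash
# Spawn one endless insert loop per CPU core, each hammering its own table.
for i in $(seq 1 "$(nproc)"); do
    ( while true; do date; ./junk.pl "data$i"; done ) &
done
wait   # Ctrl-C tears the whole thing down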
Watch your LSN grow, perhaps in another loop:
silly# echo "SHOW INNODB STATUS \G" | mysql -p'xxxxxx' | grep '^Log seq'
Log sequence number 124 3871092821
silly# echo "SHOW INNODB STATUS \G" | mysql -p'xxxxxx' | grep '^Log seq'
Log sequence number 124 4209892586
silly# echo "SHOW INNODB STATUS \G" | mysql -p'xxxxxx' | grep '^Log seq'
Log sequence number 125 85212387
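If re-typing that gets old, watch can poll it for you (same command, just wrapped; the usual caveats about passwords on the command line apply):

# Re-run the status check every 5 seconds.
watch -n5 "mysql -p'xxxxxx' -e 'SHOW INNODB STATUS \G' | grep '^Log seq'"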
The big number is an unsigned 32-bit counter that wraps at 4 GB, incrementing the smaller number each time it rolls over (older InnoDB versions display the 64-bit LSN as these two 32-bit halves). In the case above it just rolled from 124 to 125. Your goal is the LSN hidden in the mysqld.log error that sent you Googling for this ridiculous solution in the first place. Once you cross that finish line, that's it! Blow the horns! Release the confetti!
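For the record, the two halves combine as high * 2^32 + low, so you can compute the true 64-bit LSN from any status line. A quick Perl check using the first reading above:

$ perl -le 'print 124 * 2**32 + 3871092821'
536447037525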
Sidebar: this uncovered an interesting bug in mysqld 5.0 with REPEAT: ask for 20 MB and it flips some internal counter and rolls over to roughly 96 KB, with no warning or error anywhere. I wasn't about to waste time tracking that down; 10 MB works great. If you hit some other limit, it may complain. I have various InnoDB buffers increased from their defaults -- season to taste. As always, watch mysqld.log in one window.
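For what it's worth, the first limit I'd suspect is max_allowed_packet, since MySQL's string functions quietly return NULL when a result would exceed it. A my.cnf fragment with illustrative values only -- season to taste:

[mysqld]
max_allowed_packet      = 32M    # string functions like REPEAT() return NULL past this
innodb_buffer_pool_size = 512M   # more headroom for the churn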
MyISAM

Running OPTIMIZE TABLE mydb.mytable; performs two basic operations:

ALTER TABLE mydb.mytable ENGINE=MyISAM;
ANALYZE TABLE mydb.mytable;

This definitely cleans up table fragmentation and computes fresh statistics for indexes.
InnoDB

Running OPTIMIZE TABLE mydb.mytable; performs the same two basic operations:

ALTER TABLE mydb.mytable ENGINE=InnoDB;
ANALYZE TABLE mydb.mytable;

While the ALTER eliminates fragmentation, ANALYZE TABLE is completely useless for InnoDB. I wrote about this a long time ago:

If your data growth rate is very small (in other words, your dataset stays basically the same size for months at a time), then defragmenting tables would just be overkill. You should probably focus on tuning InnoDB's other aspects (see my Aug 04, 2011 post: Optimizing InnoDB default settings).
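To gauge whether a rebuild is worth it at all, a rough starting point is the DATA_FREE column in information_schema. A sketch -- mydb.mytable is a placeholder, and DATA_FREE is only a coarse signal, especially without innodb_file_per_table:

-- Roughly how much reclaimable space each InnoDB table is carrying.
SELECT table_schema, table_name,
       ROUND(data_free / 1024 / 1024) AS free_mb
FROM information_schema.TABLES
WHERE engine = 'InnoDB'
ORDER BY data_free DESC;

-- Rebuilding one table (the InnoDB half of OPTIMIZE that actually matters).
ALTER TABLE mydb.mytable ENGINE=InnoDB;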
I also have posts on the Drupal StackExchange about optimizing MySQL.
Best Answer
The only correct solution is to reinstall and restore from a backup. You say you don't have a backup, but the point of forcing InnoDB recovery is that you can potentially get the server stable enough to make one, then use it to restore onto a fresh setup.
Obviously, make a copy of all the files before doing anything.
Unless there's a memo I didn't get, you don't want to try increasing the log sequence number to pacify InnoDB, because even if you manage to do it, you've still got a server with data corruption in unknown places -- fix the LSN and all you've really accomplished is making InnoDB unaware of the corruption. Forcing the LSN is pretty much a fool's errand.
Have you tried anything from...
http://dev.mysql.com/doc/refman/5.0/en/forcing-innodb-recovery.html
The different levels, 1 through 6, cumulatively disable parts of InnoDB, the idea being that if you disable enough functionality you might be able to get a successful mysqldump, or at least a SELECT ... INTO OUTFILE, from the tables. Use the lowest number that allows you to successfully dump your data, but if this error is occurring at startup, as I suspect it is, you might just want to go ahead and start with 6. If the server starts up, try selecting from the tables. It may crash again; if it does, write that table off for the moment and move on to the next one.
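Concretely, that means adding one line to the server config before restarting. A sketch -- the file location and section layout vary by distribution, and the line should come back out once the dump is done:

# /etc/my.cnf
[mysqld]
innodb_force_recovery = 6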
Once you've retrieved the data from the tables that aren't corrupt, you may be able to go back to the ones that don't work and try to retrieve the data in small chunks with SELECT ... LIMIT ... OFFSET. Take a look at my answer to this similar question for some additional ideas.
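A sketch of what that chunked retrieval might look like, assuming a damaged table mydb.broken (a placeholder) and a server account allowed to write to /tmp; bump the offset until the server errors out or the rows run out:

-- Pull the first 10,000 rows; on the next pass use OFFSET 10000, then 20000, ...
-- Without ORDER BY the chunk boundaries aren't guaranteed stable, but against a
-- crashing server this is best-effort salvage anyway.
SELECT * INTO OUTFILE '/tmp/broken.chunk0.tsv'
FROM mydb.broken
LIMIT 10000 OFFSET 0;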
You may also want to look at Percona's InnoDB data recovery tool as a potential resource, though I've never (fingers crossed) been caught without a backup or a running replication slave, so I've never needed it.