You could copy just the "mysql" database to another location and start another daemon on it. Get the SHA1 or DES hash stored in the user table for a user with SUPER privileges (usually root, but sometimes renamed for security through obscurity).
Then connect to MySQL using a modified version of the client library that makes mysql_real_connect() accept a pre-hashed password instead of the plaintext password. This should be trivial.
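A rough sketch of the first half of that. The datadir, socket, and port here are illustrative assumptions, and this assumes a pre-5.7 server where the hash lives in the `Password` column of `mysql.user`:

```shell
# Start a throwaway mysqld against the copied "mysql" database
# (datadir, socket, and port are assumptions -- adjust to taste).
mysqld --datadir=/tmp/mysql-grant-copy \
       --socket=/tmp/mysql-grant-copy.sock --port=3307 &

# Pull the stored hash for any user with SUPER privileges.
mysql --socket=/tmp/mysql-grant-copy.sock -e \
  "SELECT User, Host, Password FROM mysql.user WHERE Super_priv='Y';"
```

The hash you get back is what the modified client would feed into the challenge-response handshake, instead of hashing a plaintext password itself.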
You won't ever know the actual password, but with the hash and a modified client you'll be able to log in anyway.
You can then make any modifications to permissions, create the necessary schemas and tables, and flush privileges.
I'll leave the security implications of such practices up to you.
Promoting a slave would probably be my preferred route. As you pointed out, any SELECTs on MyISAM tables would require table-level locks. There is one tool that might be able to help: pt-table-sync. Its primary purpose is to find gaps and differences in existing master-slave relationships.
A nice thing about it is that it does this in "chunks". Think of it kind of like anti-lock brakes. The chunk size is configurable, so you could, for example, go through 1000 rows at a time, minimizing lock times and letting writes flow through. I haven't used it to fully repopulate a slave from scratch, but I'd give that a look. Once you have a full copy, do another run to catch new rows and updates that have come in. Then do a FLUSH TABLES WITH READ LOCK, do a final table sync, run SHOW MASTER STATUS to get the binary log position to start slaving from, and UNLOCK TABLES.
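The cutover steps above might look roughly like this; host, schema, and table names are placeholders:

```shell
# Bulk pass, then a catch-up pass, in configurable chunks.
pt-table-sync --execute --chunk-size=1000 \
  h=oldmaster,D=mydb,t=mytable h=newslave,D=mydb,t=mytable

# Final, locked pass. Caveat: FLUSH TABLES WITH READ LOCK only holds
# for the lifetime of the session that issued it, so in practice keep
# one client open interactively rather than issuing each step via -e.
mysql -h oldmaster -e "FLUSH TABLES WITH READ LOCK;"
pt-table-sync --execute h=oldmaster,D=mydb,t=mytable h=newslave,D=mydb,t=mytable
mysql -h oldmaster -e "SHOW MASTER STATUS;"   # record File / Position
mysql -h oldmaster -e "UNLOCK TABLES;"
```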
Oh, and if you don't have binary logging, you'll unfortunately need at least a my.cnf change and a master bounce.
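If so, the my.cnf addition is just a couple of lines; the log file base name and server-id value here are arbitrary:

```ini
[mysqld]
log-bin   = mysql-bin
server-id = 1
```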
Another approach might be much simpler, depending on your write-downtime tolerance. You're all MyISAM; you gave the size in rows, but what's the disk footprint in MB or GB? Figure out how long it would take to transfer that much data between your machines (hopefully they're both on the same local network). You could do FLUSH TABLES WITH READ LOCK, again with SHOW MASTER STATUS, then just rsync the .MY* files over to your new DB's datadir.
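A sketch of that flow, assuming stock datadir paths and that the table definitions (.frm files) travel along with the .MY* data and index files:

```shell
# Quiesce writes. Remember the lock only survives as long as this
# client session, so keep it open in a separate terminal.
mysql -e "FLUSH TABLES WITH READ LOCK;"
mysql -e "SHOW MASTER STATUS;"        # note the binlog coordinates

# Copy the MyISAM data/index files plus the table definitions.
rsync -av /var/lib/mysql/mydb/*.MY* /var/lib/mysql/mydb/*.frm \
      newhost:/var/lib/mysql/mydb/

mysql -e "UNLOCK TABLES;"
```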
One final alternative, depending on how you're set up: can you do an LVM or other kind of filesystem snapshot? This would be the best way to minimize downtime. You FLUSH TABLES WITH READ LOCK, SHOW MASTER STATUS, start the snapshot, then UNLOCK TABLES to allow full read/write activity to flow through. The difference here is that you copy from the snapshot you started. You'll just need to feel confident the write activity won't exceed the snapshot size you allocate before the copy finishes.
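With LVM, that might look like the following; the volume group, logical volume, snapshot size, and mount point names are all assumptions:

```shell
mysql -e "FLUSH TABLES WITH READ LOCK;"   # keep this session open
mysql -e "SHOW MASTER STATUS;"
lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql
mysql -e "UNLOCK TABLES;"                 # writes resume here

# Copy from the frozen snapshot at leisure, then drop it.
mount /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -av /mnt/mysql-snap/ newhost:/var/lib/mysql/
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap
```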
Whatever the method, before promotion I would verify the character set conversions went as desired. They can be a real pain to reverse.
I would also recommend upgrading to InnoDB if possible, so that you can use xtrabackup in a non-blocking fashion in the future.
Best Answer
Realistically, there is no online method for table repair.
There are two techniques to repair mydb.mytable.
TECHNIQUE #1 : Repair Online
This will perform the table repair with mysqld running. It takes a full table lock, so no one can access the table.
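The command here is presumably just the standard REPAIR TABLE statement, run against the live server:

```shell
mysql -e "REPAIR TABLE mydb.mytable;"
```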
TECHNIQUE #2 : Repair Offline
To repair a table offline, move the files making up the table to another folder and perform the repair there. For example, to repair mydb.mytable using the folder /var/lib/mysql:
If -r does not work, rerun these lines using REPAIR_OPTION="-o".
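Put together, the offline repair probably looks something like this; the temporary folder is an assumption, and myisamchk's -r (recover) and -o (safe recover) options do the actual work:

```shell
REPAIR_OPTION="-r"                      # swap in "-o" if -r fails

# Move the three files that make up the MyISAM table out of the
# live datadir, repair them, and move them back.
mkdir -p /tmp/table-repair
mv /var/lib/mysql/mydb/mytable.frm \
   /var/lib/mysql/mydb/mytable.MYD \
   /var/lib/mysql/mydb/mytable.MYI /tmp/table-repair/
cd /tmp/table-repair
myisamchk ${REPAIR_OPTION} mytable      # operates on mytable.MYI/.MYD
mv mytable.frm mytable.MYD mytable.MYI /var/lib/mysql/mydb/
```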
EPILOGUE
Neither of these techniques will let you run a REPAIR TABLE-style operation while the table is live and serving traffic.