I have a decently big MySQL DB (InnoDB) with size about 400 MB. Thus dumping the whole DB with mysqldump and restoring it back takes forever (more than 15 minutes, which is unacceptable).
For debugging purposes I need to run my code against a certain "version" of the DB and then revert all changes back (if any) and try again (possibly multiple times). In order to reproduce a bug, I need to always start from the same version of DB.
Is there any way to manually instruct the DB to create a point-in-time snapshot and then restore it back on demand? This should ideally take just a couple of seconds (I imagine some copy-on-write method would be used).
I'm looking for a solution that is independent of the end application. Imagine a web application that you need to run your robot tests on (which involve multiple request/response cycles). Moreover, if we treat the application as possibly buggy, we can't trust it to reliably manage transaction rollbacks either.
I just tried to back up `/var/lib/mysql` with `rsync`. I cannot even start MySQL after restoring it; probably some file is left somewhere that breaks the integrity. Anyway, I was looking for a more high-level solution, resorting to FS-level backups only as a last hope.
Best Answer
Copying the data directory, as you mentioned, is one way to accomplish what you need. Here are the steps; if you hit any error, please post it.
1. `FLUSH TABLES WITH READ LOCK;` [Important: keep the terminal open]
2. Copy the data directory with `cp` or `rsync`
3. `UNLOCK TABLES;`
4. `chown -R mysql:mysql dir_name` (note the `-R` is uppercase; it makes the change recursive)

So far, you have multiple copies of the data at the point in time that you need.
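The steps above can be sketched as a small shell script. It only *prints* the commands, because `FLUSH TABLES WITH READ LOCK` must stay held in an open mysql session (session 1) while the copy runs in a shell (session 2); the paths are assumptions, so adjust `DATADIR` and `SNAPDIR` for your system:

```shell
#!/bin/sh
# Dry-run sketch of the lock-and-copy snapshot procedure.
# Paths are assumptions; the commands are printed, not executed.
DATADIR="${DATADIR:-/var/lib/mysql}"
SNAPDIR="${SNAPDIR:-/var/backups/mysql-snap}"

echo "-- session 1 (mysql client, keep it open): FLUSH TABLES WITH READ LOCK;"
echo "rsync -a $DATADIR/ $SNAPDIR/"     # session 2: copy while the lock is held
echo "chown -R mysql:mysql $SNAPDIR"    # so mysqld can use the copy later
echo "-- session 1 (mysql client): UNLOCK TABLES;"
```

Keeping session 1 open is the crucial part: the global read lock is released as soon as that connection closes, so the copy must finish before you disconnect.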
In order to "restore" a copy, you can do one of the following:

- Change `datadir` in my.cnf (or my.ini on Windows) to the location of one of the data copies, and restart the server; OR
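For that first option, the relevant my.cnf fragment might look like this (the snapshot path is an assumption; point it at whichever copy you want to roll back to, then restart the server):

```ini
[mysqld]
# Point the server at one of the saved copies of the data directory.
datadir = /var/backups/mysql-snap
```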