By default, mysqldump will lock each table as it dumps it, and there will not be any consistency among the tables. You will also not get any triggers, stored procedures, stored functions, or events backed up, and if any of your view definitions are invalid (referencing nonexistent tables or columns that you may have dropped or renamed since the view was defined), the dump process will terminate when that error is encountered.
If all of your tables are InnoDB, you can get a consistent snapshot by adding this:
--single-transaction
This option disables the default --lock-tables behavior, which locks each table as it is dumped and unlocks it when done, and instead issues the following statements to the server at the beginning of the dump (which you can observe by enabling the general query log):
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;
Of course, this only helps with InnoDB. If your tables are MyISAM or any other non-transactional engine, you only have one option:
--lock-all-tables
...which locks all of the tables for the entire dump, since that's the only way to ensure consistency without transactions to assist.
To back up your stored procedures, events, and triggers, add:
--routines --events --triggers
To prevent mysqldump from failing when there's an invalid view definition, add:
--force
To capture your binlog position and store it in the backup file, add:
--master-data=2
This option requires binary logging to be enabled on the server you're dumping; otherwise the dump will fail with an error.
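Putting the options above together, a full invocation might look like the following sketch. The output path and the choice of --all-databases are illustrative placeholders, and the command is echoed rather than executed so the sketch is self-contained:

```shell
# Sketch combining the mysqldump options discussed above.
# The output path and --all-databases are illustrative placeholders;
# the command is echoed rather than executed.
DUMP_OPTS="--single-transaction --routines --events --triggers --force --master-data=2"
echo "mysqldump $DUMP_OPTS --all-databases > /backup/all-databases.sql"
```

Remember that --single-transaction only gives you consistency for InnoDB tables; for MyISAM you would swap it for --lock-all-tables as described above.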
Your question was about backing up the database "with rsnapshot," although to put a fine point on it, you're proposing backing up the database with mysqldump and then backing up the dump file with rsnapshot... It's hard to tell from a brief look at their site whether that's going to be a good choice, because rsnapshot appears to have file-level granularity. It doesn't appear to store diffs, but instead stores a full copy of each file that changes between backup cycles:
"The amount of space required is roughly the size of one full backup, plus a copy of each additional file that is changed."
-- http://www.rsnapshot.org/rsnapshot.html
I welcome correction if I'm wrong on this point. In this case, you only have one file, your dump file from mysqldump, which will of course change every time.
Whatever you do, don't entertain the thought of trying to back up the actual files (.frm, .ibd, .MYD, etc.) underlying your tables in MySQL. That does not work while the server is running, even though under rare conditions you might get a backup that seems to be intact. It isn't.
Update:
There's an important distinction between what mylvmbackup is doing and "backing up the actual files." To use mylvmbackup, you have to be using LVM, which can be visualized as a layer of abstraction between your filesystem and your hard drive. With LVM, you have the ability to freeze the file system and make a point-in-time snapshot of the entire filesystem.
It looks like mylvmbackup does a FLUSH TABLES followed by a FLUSH TABLES WITH READ LOCK prior to looking up the replication coordinates (if specified in the configuration), which is the same process mysqldump uses when --single-transaction and --master-data are both used. This stabilizes MySQL, but it does not fully quiesce InnoDB, so when a backup like this is restored, InnoDB will think it crashed and will do crash recovery... but it should be a clean recovery.
If your web site's data files were in the same filesystem, then an LVM snapshot would contain everything consistent to a single point in time, but whether having your other web site files in the same filesystem as the database is good practice (if, indeed running your web server on the same server is good practice) is another discussion.
Whatever the approach, it's vitally important in any backup strategy to periodically verify that your backups can actually be restored (on a different machine). The truly safest method would be to make sure innodb_fast_shutdown = 0 and actually shut down MySQL, but that's probably not practical.
Of course, if you go with something like rsnapshot, you could always snapshot the web site files, then snapshot the database, then snapshot the web site files again. If nothing changes in the site files while the database is being backed up, rsnapshot will use very little space for that second run. If things do change, it should be fairly trivial to understand the differences if you need to recover.
There is no reliable way of interrupting FLUSH TABLES WITH READ LOCK. I disagree with the previous answer: you can try KILLing the FLUSH query all you want, but typically it will just hang until the original command completes.
However, some good news for you: FLUSH TABLES WITH READ LOCK is not strictly necessary in order to take an LVM snapshot.
It is required if:
- You are interested in point-in-time recovery (applying incremental restores via binlogs)
- You have lots of non-transactional (i.e., MyISAM) tables being written to
- You are running the snapshot against the master and want to create a slave based on the snapshot.
If your database is InnoDB oriented, and nothing interesting is happening with your MyISAM system tables (i.e., no one is creating a procedure, GRANTing privileges, etc.), then it's enough to FLUSH only those tables.
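As a sketch, assuming MySQL 5.5 or later (which accepts an explicit table list with FLUSH TABLES ... WITH READ LOCK), flushing just the system tables might look like this; the table list here is illustrative, not exhaustive:

```sql
-- Flush and lock only the MyISAM system tables (illustrative list):
FLUSH TABLES mysql.user, mysql.db, mysql.proc WITH READ LOCK;
-- take the LVM snapshot here, then release the locks:
UNLOCK TABLES;
```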
If everything is InnoDB, you can just take the snapshot. There is no benefit in FLUSHing; InnoDB will make a proper recovery anyhow.
The need for FLUSH TABLES WITH READ LOCK comes from the requirement of getting the binary log file and position when you want to be able to rebuild a slave or otherwise use the binary logs.
Best Answer
Here is how I would do it (similar to your suggestion):
1. In terminal 1, run: FLUSH TABLES WITH READ LOCK
It is important to keep this session/terminal open till the end of the process. Wait till you get the message that the tables are now locked.
2. Once your backup is done, in the same terminal, run: UNLOCK TABLES
It is essential not to close terminal 1 before you are done with step #2.
Notice please that there is no need to stop the slave.
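The procedure above can be sketched as follows; the snapshot step in terminal 2 is a placeholder, since the exact backup command depends on your setup:

```sql
-- Terminal 1 (keep this session open the whole time):
FLUSH TABLES WITH READ LOCK;

-- Terminal 2: take your snapshot/backup here
-- (e.g., an LVM snapshot; the exact command depends on your setup).

-- Terminal 1, only after the snapshot completes:
UNLOCK TABLES;
```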
This link would be very helpful; it explains in more detail, using LVM.
HTH