I'm not sure if this is true, but I remember reading that if you run the following command in Linux
mysqldump -u username -p database_name > backup_db.sql
while reads and writes are being made to the database, then the dump may contain errors.
Are there particular options for the mysqldump command
to make sure this is done safely on a live system? I'm okay with reads/writes being disabled for our users for a few seconds (the database is <50MB).
Best Answer
All Data is InnoDB
This is what will give you an exact point-in-time snapshot of the data:
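For example, a single command combining these options might look like the following (a sketch only; the username and database name are the placeholders from the question):

```shell
# Sketch: point-in-time dump of an all-InnoDB database.
# --single-transaction takes a consistent snapshot without blocking writers;
# --routines and --triggers also capture stored code and triggers.
mysqldump -u username -p --single-transaction --routines --triggers \
    database_name > backup_db.sql
```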
--single-transaction
produces a checkpoint that allows the dump to capture all data prior to the checkpoint while receiving incoming changes. Those incoming changes do not become part of the dump. That ensures the same point-in-time for all tables.

--routines
dumps all stored procedures and stored functions

--triggers
dumps all triggers for each table that has them

All Data is MyISAM or Mix of InnoDB/MyISAM
You will have to impose a global read lock, perform the mysqldump, and then release the global lock.
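A rough sketch of that sequence, assuming a shell with the mysql client available (credentials and names are the question's placeholders):

```shell
# Sketch: hold a global read lock in a background session while dumping.
# SELECT SLEEP keeps the locking connection open; killing the client
# closes the connection, which releases the global read lock.
mysql -u username -p -e "FLUSH TABLES WITH READ LOCK; SELECT SLEEP(86400);" &
LOCK_PID=$!
sleep 5   # give the background session time to acquire the lock
mysqldump -u username -p --routines --triggers database_name > backup_db.sql
kill ${LOCK_PID}   # release the global read lock
```

In practice the password would have to come from a config file or the command line, since a backgrounded session cannot prompt for it.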
Give it a Try !!!
UPDATE 2012-06-22 08:12 EDT
Since you have <50MB of total data, I have another option. Instead of launching a SLEEP command into the background to hold the global read lock for 86400 seconds (that's 24 hours) just to get the process ID and kill it from the outside, let's try setting a 5-second timeout in mysql rather than in the OS:
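A sketch of what that could look like (placeholders as before; here SELECT SLEEP(5) plays the role of the 5-second timeout, so the lock releases itself with no process to kill):

```shell
# Sketch: hold the global read lock for only 5 seconds via SELECT SLEEP(5),
# and run the dump while the lock is briefly in place. When SLEEP returns,
# the session closes and the lock is released automatically.
SLEEP_TIMEOUT=5
SQLSTMT="FLUSH TABLES WITH READ LOCK; SELECT SLEEP(${SLEEP_TIMEOUT})"
mysql -u username -p -Ae "${SQLSTMT}" &
mysqldump -u username -p --routines --triggers database_name > backup_db.sql
```

Since the database is under 50MB, the dump should finish well within the 5-second window.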
This is a cleaner and simpler approach for very small databases.