Our DB is about 2GB in size.
We run a scheduled job at 3am every morning, when traffic is pretty low, that exports the whole database and SFTPs it off-site.
Is it bad practice to do this on a production db?
The slow query log always grows by 10 or so queries when we do this, and Open/Opened tables also rises a bit.
So, is this not a good backup strategy? What's a good/standard alternative to this? Is it even bad for the db somehow?
Thanks.
EDIT
We're on a Linode 4096 where the DB is the only thing run on that server.
It has SSDs, but not world-class ones. We're using 2GB of the 4GB of memory for the innodb_buffer_pool.
The command is as follows:
mysqldump --opt -u USER -pPASSWORD DATABASE | gzip > /path/to/database.sql.gz
Best Answer
Since the database is fairly large, you should add the following options to your mysqldump command:

--quick
: This option is useful for dumping large tables. It forces mysqldump to retrieve rows for a table from the server a row at a time, rather than retrieving the entire row set and buffering it in memory before writing it out.

--single-transaction
: This option sets the transaction isolation level to REPEATABLE READ and sends a START TRANSACTION SQL statement to the server before dumping data. It is useful only with transactional tables such as InnoDB, because it then dumps the consistent state of the database at the time START TRANSACTION was issued, without blocking any applications.

--lock-tables=false
: This option stops MyISAM tables (if they exist) from being locked during the backup.
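Putting these options together with the command from the question, the nightly job could look something like the sketch below (USER, PASSWORD, DATABASE, and the output path are the placeholders from the original post, not real values):

```shell
# Stream rows one at a time (--quick) instead of buffering whole tables
# in memory, take a consistent non-blocking InnoDB snapshot
# (--single-transaction), skip MyISAM table locks, and compress the
# output before it is SFTP'd off-site.
mysqldump --quick --single-transaction --lock-tables=false \
  -u USER -pPASSWORD DATABASE | gzip > /path/to/database.sql.gz
```

Note that --opt is enabled by default in modern mysqldump and already implies --quick, so the significant change here is --single-transaction, which avoids blocking your 3am traffic while still giving a consistent snapshot of InnoDB tables.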