MySQL – Quick MySQL Backup (1 file per table)

Tags: backup, MyISAM, MySQL

I asked a related question a few months back and got an answer with a wide variety of backup options.
This time I have a different scenario: a database with 7 tables, one of which (table1) has exceeded 20GB and another (table6) is 13GB, while all the other tables are well below 500MB.

In the current scenario the larger table (table1) is going to receive even more data, so I am thinking of creating another table (table1-1) with the same schema and inserting all new data into that table. Mostly INSERT calls and very few UPDATE calls are made to table1, and all inserted data is keyed on an ID from table2. (I don't think that detail matters much, though.)
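Concretely, what I have in mind is something like this (just a sketch; mydb is a placeholder, and I use table1_1 with an underscore, since a hyphen in the name would need backtick-quoting):

```shell
# Sketch only: clone table1's schema (columns, indexes, MyISAM engine) into a
# fresh, empty table. CREATE TABLE ... LIKE copies structure, not data.
SQL="CREATE TABLE table1_1 LIKE table1;"
# Uncomment to run against the live server (connection details are placeholders):
# mysql -h10.1.2.30 -uusername -ppassword mydb -A -e "${SQL}"
echo "${SQL}"
```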

All tables are MyISAM, and table6 is not going to grow at the same pace in the near future.

Now I have a few questions for the gurus, and I would like a few backup suggestions.

First of all: since table1 has grown beyond 20GB, do you think creating a new table with the same schema would help? Because the workload is mostly INSERTs, it should be lighter on the system to insert into a fresh table than into one that has already grown past 20GB.

All these tables are in a single database, and I want to back up the whole database, but not as one big file: it should be one file per table, because that reduces the overhead of keeping a local copy that currently always contains table1. In fact, once table1-1 is in use, table1 will only be copied locally once a week, while the others are copied daily.

Please suggest the best way to back this up, keeping in mind the data size, and with minimal (only necessary) downtime.

Best Answer

Last time, I answered your question by suggesting you perform a well-scripted parallel dump of all tables.

Since the tables are lopsided in size, parallel table dumps are not much help for the 20GB and 13GB tables. What can be done? You must try something out of the ordinary. The good thing here is that all the tables are MyISAM.

Have you ever considered running an rsync? How do you do that?

EXAMPLE

You have /var/lib/mysql as the datadir of a remote production database.

Perform an rsync of /var/lib/mysql on the prod server (IP address 10.1.2.30) to /backups on a backup server. Just run a script something like this on the backup server:

# Keep rsyncing until a full pass completes in under RSYNC_THRESHOLD seconds;
# each pass only transfers what changed since the previous one
INIT_TS=0
DIFF_TS=61          # start above the threshold so the loop runs at least once
RSYNC_THRESHOLD=60
DBSERVER_IP=10.1.2.30
FOLDER_TO_BACKUP=/var/lib/mysql
mkdir -p /backups/backup_temp

while [ ${DIFF_TS} -gt ${RSYNC_THRESHOLD} ]
do
    INIT_TS=$(date +%s)
    rsync -arv user01@${DBSERVER_IP}:${FOLDER_TO_BACKUP}/ /backups/backup_temp
    CURR_TS=$(date +%s)
    (( DIFF_TS = CURR_TS - INIT_TS ))   # how long did this pass take?
done

MYSQL_CONN="-h${DBSERVER_IP} -uusername -ppassword"
# Hold a global read lock from a background session; the lock lives as long as
# the SELECT SLEEP(86400) keeps that session open (up to 24 hours)
mysql ${MYSQL_CONN} -A -e"FLUSH TABLES WITH READ LOCK; SELECT SLEEP(86400)" &
# Wait for the sleeping session to appear in the processlist, then record its ID
SEARCHING_FOR_SLEEP=0
while [ ${SEARCHING_FOR_SLEEP} -eq 0 ]
do
    sleep 3
    SEARCHING_FOR_SLEEP=$(mysql ${MYSQL_CONN} -A -e"SHOW PROCESSLIST;" | grep -c "SELECT SLEEP(86400)")
done
sleep 3
SLEEP_ID=$(mysql ${MYSQL_CONN} -A -e"SHOW PROCESSLIST;" | grep "SELECT SLEEP(86400)" | awk '{print $1}')

# Final rsync while writes are blocked; only what changed since the last pass moves
rsync -arv user01@${DBSERVER_IP}:${FOLDER_TO_BACKUP}/ /backups/backup_temp

BACKUP_TIMESTAMP=$(date +"%Y%m%d%H%M%S")
mv /backups/backup_temp /backups/${BACKUP_TIMESTAMP}

# Release the global read lock by killing the sleeping session
mysql ${MYSQL_CONN} -A -e"KILL ${SLEEP_ID}"

Here is what this basic backup script does

  • Step 01) rsync production's /var/lib/mysql (on the remote server) to /backups on the local server
  • Step 02) If the rsync took longer than 60 seconds, repeat Step 01
  • Step 03) Close all tables and issue a global read lock
  • Step 04) Perform a final rsync (only SELECTs are allowed during it)
  • Step 05) Release the global read lock

Notice the following:

  • the only mysql interaction is
    • issuing the global read lock
    • checking for the process ID that is holding the global read lock
    • releasing the global read lock
  • FOLDER_TO_BACKUP is the datadir. If you want to back up a specific database (say mydb), set FOLDER_TO_BACKUP to /var/lib/mysql/mydb
  • Actual downtime runs from the acquisition of the global read lock until the SLEEP_ID process is killed

Give it a Try !!!

UPDATE 2012-02-02 11:47 EDT

The above script locks down mysqld in such a way that it takes a disk-level copy of the database while mysqld continues to accept INSERTs, UPDATEs, and DELETEs right up to the final rsync. This is necessary because the MyISAM storage engine has no built-in mechanism for producing a consistent copy of itself. Even powerful products like MySQL Enterprise Backup and Percona XtraBackup perform full table locking on MyISAM tables in just this way. My personal preference is to script it so I can see it, feel it, taste it, and touch it, and know for myself that MyISAM is fully protected.

The advantage of using my script stems from the fact that it shortens the downtime. In the context of this answer, downtime is really the period during which mysqld will only allow SELECTs against the MyISAM tables, i.e. the final rsync.

An additional benefit is that all the MyISAM tables copied will share the same point-in-time, should you wish to restore them all as of that moment. That point-in-time is based not on the start of the backup process, but on when the global read lock was acquired.

If you need to copy only specific tables and can guarantee that no DB connection will touch them, only then can you toss aside the failsafes and copy any MyISAM table to your heart's content.
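For instance, a quiescent MyISAM table is just three files in the datadir, so copying one could look like this (a sketch; mydb and table6 stand in for your real names, and the rsync line is left commented since it targets the live server):

```shell
# Each MyISAM table is exactly three files in the datadir:
#   .frm (schema), .MYD (data), .MYI (indexes)
DBSERVER_IP=10.1.2.30
TABLE_PATH=/var/lib/mysql/mydb/table6
# Safe only if no connection is writing to the table at the time:
# rsync -arv "user01@${DBSERVER_IP}:${TABLE_PATH}.*" /backups/single_tables/
echo "would copy: ${TABLE_PATH}.frm .MYD .MYI"
```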

UPDATE 2012-02-03 11:47 EDT

If the length of time the rsync takes concerns you, here is an additional suggestion:

You should also take nightly backups of the binary logs. With nightly copies of the binary logs, you essentially have incremental backups.
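A nightly binary-log pull could look something like this (a sketch, assuming log-bin is enabled on the prod server and the logs are named mysql-bin.*; the lines that touch the live server are commented out):

```shell
# Sketch: nightly binary-log copy to a dated folder on the backup server
DBSERVER_IP=10.1.2.30
MYSQL_CONN="-h${DBSERVER_IP} -uusername -ppassword"
BINLOG_DIR=/var/lib/mysql
BINLOG_TS=$(date +"%Y%m%d")
# Rotate to a fresh binlog so every copied file is complete and closed:
# mysql ${MYSQL_CONN} -A -e"FLUSH LOGS;"
# Pull only the binlogs, not the table data:
# mkdir -p /backups/binlogs/${BINLOG_TS}
# rsync -arv "user01@${DBSERVER_IP}:${BINLOG_DIR}/mysql-bin.*" /backups/binlogs/${BINLOG_TS}/
echo "binlog backup folder: /backups/binlogs/${BINLOG_TS}"
```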

That being said, you could set up a weekly or monthly run of the backup script. Recovery to a specific point-in-time would be your responsibility from there, but you would have every piece of data necessary to do so.