If all your databases use InnoDB only, I have some good news.
You should be able to dump all the databases in parallel from a slave.
In fact, you can force all the databases to the same point in time.
The first thing to remember about the Slave is that it is not required to have binary logging enabled if it is not a Master for other Slaves.
You cannot use the --master-data
option for parallel dumps because each dump would have a different position written at line 22 of its dump file. It is better to record the Master's last log file and position that the Slave executed, using SHOW SLAVE STATUS\G
. That way, all the databases share the same point-in-time position.
You can collect the list of all databases and script a parallel dump of each one.
DBLIST=/tmp/ListOfDatabasesToParallelDump.txt
SSS=/tmp/ShowSlaveStatusDisplay.txt
BACKUP_BASE=/backups
BACKUP_DATE=`date +"%Y%m%d_%H%M%S"`
BACKUP_HOME=${BACKUP_BASE}/${BACKUP_DATE}
mkdir ${BACKUP_HOME}
cd ${BACKUP_HOME}
# Freeze replication so every database is dumped at the same point-in-time
mysql -h... -u... -p... -e"STOP SLAVE;"
mysql -h... -u... -p... -e"SHOW SLAVE STATUS\G" > ${SSS}
# Record the Master log file and position the Slave last executed
LOGFIL=`cat ${SSS} | grep "Relay_Master_Log_File" | awk '{print $2}'`
LOGPOS=`cat ${SSS} | grep "Exec_Master_Log_Pos" | awk '{print $2}'`
echo "Master was at ${LOGFIL} Position ${LOGPOS} for this Backup" > Master_Log_FilePos.txt
# Collect every user database, skipping the system schemas
mysql -h... -u... -p... -AN -e"SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','mysql','performance_schema')" > ${DBLIST}
# Launch one compressed mysqldump per database in the background
for DB in `cat ${DBLIST}`
do
mysqldump -h... -u... -p... --hex-blob --routines --triggers ${DB} | gzip > ${DB}.sql.gz &
done
# Wait for all dumps to finish, then resume replication
wait
mysql -h... -u... -p... -e"START SLAVE;"
If there are simply too many databases, dump them 10 or 20 at a time as follows:
DBLIST=/tmp/ListOfDatabasesToParallelDump.txt
SSS=/tmp/ShowSlaveStatusDisplay.txt
BACKUP_BASE=/backups
BACKUP_DATE=`date +"%Y%m%d_%H%M%S"`
BACKUP_HOME=${BACKUP_BASE}/${BACKUP_DATE}
mkdir ${BACKUP_HOME}
cd ${BACKUP_HOME}
mysql -h... -u... -p... -e"STOP SLAVE;"
mysql -h... -u... -p... -e"SHOW SLAVE STATUS\G" > ${SSS}
LOGFIL=`cat ${SSS} | grep "Relay_Master_Log_File" | awk '{print $2}'`
LOGPOS=`cat ${SSS} | grep "Exec_Master_Log_Pos" | awk '{print $2}'`
echo "Master was at ${LOGFIL} Position ${LOGPOS} for this Backup" > Master_Log_FilePos.txt
mysql -h... -u... -p... -AN -e"SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','mysql','performance_schema')" > ${DBLIST}
# Throttle: launch COMMIT_LIMIT dumps, then wait for that batch before starting more
COMMIT_LIMIT=20
COMMIT_COUNT=0
for DB in `cat ${DBLIST}`
do
mysqldump -h... -u... -p... --hex-blob --routines --triggers ${DB} | gzip > ${DB}.sql.gz &
(( COMMIT_COUNT++ ))
if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
then
COMMIT_COUNT=0
wait
fi
done
wait
mysql -h... -u... -p... -e"START SLAVE;"
If you need to be able to recover a single table, you can dump tables in parallel, 20 at a time, in size order.
Try this:
TBLIST=/tmp/ListOfTablesToParallelDump.txt
SSS=/tmp/ShowSlaveStatusDisplay.txt
BACKUP_BASE=/backups
BACKUP_DATE=`date +"%Y%m%d_%H%M%S"`
BACKUP_HOME=${BACKUP_BASE}/${BACKUP_DATE}
mkdir ${BACKUP_HOME}
cd ${BACKUP_HOME}
mysql -h... -u... -p... -e"STOP SLAVE;"
mysql -h... -u... -p... -e"SHOW SLAVE STATUS\G" > ${SSS}
LOGFIL=`cat ${SSS} | grep "Relay_Master_Log_File" | awk '{print $2}'`
LOGPOS=`cat ${SSS} | grep "Exec_Master_Log_Pos" | awk '{print $2}'`
echo "Master was at ${LOGFIL} Position ${LOGPOS} for this Backup" > Master_Log_FilePos.txt
mysql -h... -u... -p... -AN -e"SELECT CONCAT(table_schema,'.',table_name) FROM information_schema.tables WHERE table_schema NOT IN ('information_schema','mysql','performance_schema') ORDER BY data_length" > ${TBLIST}
COMMIT_LIMIT=20
COMMIT_COUNT=0
for DBTB in `cat ${TBLIST}`
do
DB=`echo "${DBTB}" | sed 's/\./ /g' | awk '{print $1}'`
TB=`echo "${DBTB}" | sed 's/\./ /g' | awk '{print $2}'`
DUMPFILE=DB-${DB}-TBL-${TB}.sql.gz
mysqldump -h... -u... -p... --hex-blob --routines --triggers ${DB} ${TB} | gzip > ${DUMPFILE} &
(( COMMIT_COUNT++ ))
if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
then
COMMIT_COUNT=0
wait
fi
done
wait
mysql -h... -u... -p... -e"START SLAVE;"
Now that you have scripts to dump databases or individual tables, you can load that data at your discretion. If you need to get SQL executed from the binary logs on the Master, you can use mysqlbinlog
and give it the position or datetime and output the SQL to other text files. You just have to perform due diligence to find the amount of data you need from whatever timestamps the binary logs have. Just remember that every binary log's timestamp in the OS represents the last time it was written to.
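For example, here is a minimal sketch of pulling SQL out of the Master's binary logs; the binary log file name, position, and datetime values are placeholders you would replace with the values recorded in Master_Log_FilePos.txt:
# Replay everything from the recorded position to the end of that binary log
mysqlbinlog --start-position=106 mysql-bin.000123 > recovered_from_pos.sql
# Or carve out a time window instead
mysqlbinlog --start-datetime="2013-06-01 00:00:00" --stop-datetime="2013-06-01 06:00:00" mysql-bin.000123 > recovered_window.sql
Review the generated .sql files before replaying them with the mysql client.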
There are two things that can work in your favor in this instance:
- GOOD THING #1 : You can convert the two MyISAM tables to InnoDB
- GOOD THING #2 : The two MyISAM tables never change
If either one of these good things applies, then I have good news for you.
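If you opt for GOOD THING #1, the conversion is a single statement per table; the database and table names below are placeholders:
mysql> ALTER TABLE mydb.mytable ENGINE=InnoDB;
Once converted, those tables participate in --single-transaction dumps just like every other InnoDB table.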
STEP01) Zap All Binary Logs on the Master (OPTIONAL)
Run this command on the Master:
mysql> RESET MASTER;
If this part scares you, you could either skip it or make a copy of the binary logs before doing it.
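Here is a sketch of taking that safety copy first; the datadir and archive paths are assumptions for illustration:
# Archive the current binary logs before destroying them (paths are examples)
mkdir -p /backups/binlog_archive
cp /var/lib/mysql/mysql-bin.* /backups/binlog_archive/
mysql -u... -p... -e"RESET MASTER;"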
STEP02) Create the mysqldump as follows:
echo "STOP SLAVE;" > MySQLSlaveReload.sql
mysqldump -v --master-data=1 --single-transaction --routines .... >> MySQLSlaveReload.sql
echo "START SLAVE;" >> MySQLSlaveReload.sql
STEP03) Move MySQLSlaveReload.sql to the Slave
STEP04) Load MySQLSlaveReload.sql on the Slave
On the Slave, load the script as follows:
mysql -u... -p... -A < MySQLSlaveReload.sql
The script will stop the slave, load the data, and start the slave. What about the log file and position? After you run STEP02, look at line 22 of MySQLSlaveReload.sql:
head -22 MySQLSlaveReload.sql | tail -1
The mysqldump option --master-data=1
recorded the log file and position as of the start of the mysqldump on line 22.
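That line will look something like the following; the file name and position shown are just examples:
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=106;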
Give it a Try !!!
Best Answer
A1.
How large are the databases?
If you need a logical backup, you can use mysqldump (--single-transaction and --master-data are some options you need to review and use). For speed, you want to go with the mydumper/myloader tools, which also take individual per-table dumps.
The advantage here is that you can grab the database you need to restore without having to touch the other databases in the instance. If you choose to go with mysqldump, you can still use mysqldumpsplitter to extract the tables/databases of your choice from the full dump.
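A minimal mydumper/myloader run might look like the following; the host, credentials, thread count, and directory are placeholders:
# Dump with 4 parallel threads into a dedicated directory
mydumper --host=... --user=... --password=... --threads=4 --outputdir=/backups/mydumper
# Restore the same dump, also in parallel
myloader --host=... --user=... --password=... --threads=4 --directory=/backups/mydumper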
Alternatively, you might want to use a physical backup with Percona's Xtrabackup, which provides hot backups. Here is a post on setting up Xtrabackup using the Holland framework.
A2.
Backup from the Active Master? No!
If there is a replication failure on the read-only master, then the priority is to get it fixed and then take the backup. But try to avoid backups from the Active Master. It is also advised to make sure your backups are stored in a location other than the read-only master!
A3.
Binary log backups provide point-in-time restore. You can store binary logs on a separate partition and have it backed up/snapshotted, or script the copying of the files to a separate location.
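As a sketch of the scripted approach (the datadir and destination paths are assumptions):
# Rotate to a fresh binary log so the closed ones are safe to copy
mysql -u... -p... -e"FLUSH LOGS;"
# Copy the binary logs to the archive location
rsync -av /var/lib/mysql/mysql-bin.[0-9]* /backups/binlogs/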
Check out these two articles: 1 & 2
Hope this is helpful!!