PLEASE SEE MY WINDOWS ANSWER BELOW UNDER THE UPDATE 2012-05-22 12:42 EDT
You need to back up each database separately.
First, get the list of all your databases:
MYSQL_CONN="-uusername -ppassword"
SQLSTMT="SELECT DISTINCT table_schema FROM information_schema.tables"
SQLSTMT="${SQLSTMT} WHERE table_schema NOT IN"
SQLSTMT="${SQLSTMT} ('information_schema','mysql','performance_schema')"
mysql ${MYSQL_CONN} -ANe"${SQLSTMT}" > /tmp/ListOfDatabases.txt
BACKUP ALGORITHM #1
If you have only a few databases to back up, you can dump all the databases in parallel:
MYSQL_CONN="-uusername -ppassword"
MYSQLDUMP_OPTIONS="--hex-blob --routines --triggers"
for DB in `cat /tmp/ListOfDatabases.txt`
do
BACKUP_GZIP=${DB}.sql.gz
mysqldump ${MYSQL_CONN} ${MYSQLDUMP_OPTIONS} ${DB} | gzip > ${BACKUP_GZIP} &
done
wait
BACKUP ALGORITHM #2
If there are too many databases to launch at the same time, dump 10 at a time:
MYSQL_CONN="-uusername -ppassword"
MYSQLDUMP_OPTIONS="--hex-blob --routines --triggers"
COMMIT_COUNT=0
COMMIT_LIMIT=10
for DB in `cat /tmp/ListOfDatabases.txt`
do
mysqldump ${MYSQL_CONN} ${MYSQLDUMP_OPTIONS} ${DB} | gzip > ${DB}.sql.gz &
(( COMMIT_COUNT++ ))
if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
then
COMMIT_COUNT=0
wait
fi
done
if [ ${COMMIT_COUNT} -gt 0 ]
then
wait
fi
BACKUP ALGORITHM #3
If there are simply way too many databases, or some of the databases are too large for frequent full backups, you may want to think about making backups like this:
- On the 1st of the month, dump the separate databases using BACKUP ALGORITHM #2
- Every hour, make copies of the binary logs (forming a binary log archive)
That way, restoration of a database can be done in a point-in-time fashion.
For example, to get database mydb as of 2012-05-20 09:00:00, you would do this:
- Restore Database Backup from 2012-05-01 00:00:00
- Replay all archived binary logs using the mysqlbinlog program
- use option
--start-datetime="2012-05-01 00:00:00"
- use option
--stop-datetime="2012-05-20 09:00:00"
- use option
--database=mydb
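The hourly binlog archiving and the point-in-time restore described above can be sketched in shell roughly as follows. This is a sketch only: the credentials, the binlog directory, the archive directory, and the dump filename are assumptions you would adjust to your setup.

```shell
# Sketch of BACKUP ALGORITHM #3's archive-and-replay cycle -- paths are assumptions
MYSQL_CONN="-uusername -ppassword"
BINLOG_DIR="/var/lib/mysql"          # assumed location of the binary logs
ARCHIVE_DIR="/backups/binlogs"       # assumed binary log archive

# Hourly: rotate to a fresh binary log, then copy the closed ones to the archive.
# SHOW BINARY LOGS lists the active log last, so "head -n -1" skips it.
mysql ${MYSQL_CONN} -e "FLUSH BINARY LOGS"
for LOG in `mysql ${MYSQL_CONN} -ANe "SHOW BINARY LOGS" | awk '{print $1}' | head -n -1`
do
    [ -f ${ARCHIVE_DIR}/${LOG} ] || cp ${BINLOG_DIR}/${LOG} ${ARCHIVE_DIR}
done

# Restore: load the monthly dump, then replay the archive up to the target time
gunzip < mydb.sql.gz | mysql ${MYSQL_CONN} mydb
mysqlbinlog --database=mydb \
    --start-datetime="2012-05-01 00:00:00" \
    --stop-datetime="2012-05-20 09:00:00" \
    ${ARCHIVE_DIR}/* | mysql ${MYSQL_CONN}
```

Note that mysqlbinlog must be given the archived logs in chronological order, which the shell glob provides as long as the default numeric log suffixes are kept.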
CONCLUSION
There may be other considerations you want to factor into these scripts. You may also want to use Open Source/Commercial programs to take the backups in whatever way is most comfortable for you.
I would further recommend using MySQL Replication. That way these scripts can be executed on the Slave in this manner (as recommended in MySQL 5.0 Certification Study Guide, Page 462 Section 32.6 under the Heading "Replication as an Aid to Backup"):
- STOP SLAVE SQL_THREAD;
- Run the Backup
- START SLAVE SQL_THREAD;
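A minimal wrapper for those three steps might look like the following; the credentials and the name of the backup script (standing in for either algorithm above) are hypothetical:

```shell
# Hypothetical wrapper: pause the slave's SQL thread, back up, resume replication
MYSQL_CONN="-uusername -ppassword"

mysql ${MYSQL_CONN} -e "STOP SLAVE SQL_THREAD"
# Run BACKUP ALGORITHM #1 or #2 here while replication is quiesced
./dump_all_databases.sh
mysql ${MYSQL_CONN} -e "START SLAVE SQL_THREAD"
```

The I/O thread keeps collecting relay logs while the SQL thread is stopped, so the slave catches up once the backup finishes.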
This would also keep the backups from interfering with a production Master.
UPDATE 2012-05-22 12:42 EDT
Oh, you are running Windows 2008? That is a whole different ballgame.
I actually wrote a post in StackOverflow on dumping separate databases in Windows back in March of 2012.
I will post my answer from StackOverflow right now...
Have the information_schema database construct a DOS Batch File to perform the mysqldumps in parallel
set MYSQLUSER=root
set MYSQLPASS=1234
set BATCHFILE=S:\Backup\MySQL\Batch_mysqldump.bat
set DUMPPATH=S:\Backup\MySQL
echo @echo off > %BATCHFILE%
echo cd %DUMPPATH% >> %BATCHFILE%
mysql -u%MYSQLUSER% -p%MYSQLPASS% -AN -e"SELECT CONCAT('start mysqldump -u%MYSQLUSER% -p%MYSQLPASS% --routines --triggers ',schema_name,' > ',schema_name,'.sql') FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','performance_schema')" >> %BATCHFILE%
type %BATCHFILE%
Just run it like any DOS batch file.
Make sure you have the correct username and password to connect to MySQL.
I just tried it out to make sure:
C:\>set MYSQLUSER=lwdba
C:\>set MYSQLPASS=<hidden>
C:\>set BATCHFILE=C:\LWDBA\Batch_mysqldump.bat
C:\>set DUMPPATH=C:\LWDBA
C:\>echo @echo off > %BATCHFILE%
C:\>echo cd %DUMPPATH% >> %BATCHFILE%
C:\>mysql -u%MYSQLUSER% -p%MYSQLPASS% -AN -Bse"SELECT CONCAT('start mysqldump -u%MYSQLUSER% -p%MYSQLPASS% --routines --triggers ',schema_name,' > ',schema_name,'.sql') FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','performance_schema')" >> %BATCHFILE%
C:\>type %BATCHFILE%
@echo off
cd C:\LWDBA
start mysqldump -ulwdba -phidden --routines --triggers a1ex07 > a1ex07.sql
start mysqldump -ulwdba -phidden --routines --triggers annarbor > annarbor.sql
start mysqldump -ulwdba -phidden --routines --triggers dilyan_kn > dilyan_kn.sql
start mysqldump -ulwdba -phidden --routines --triggers dtest > dtest.sql
start mysqldump -ulwdba -phidden --routines --triggers dude > dude.sql
start mysqldump -ulwdba -phidden --routines --triggers example > example.sql
start mysqldump -ulwdba -phidden --routines --triggers fed > fed.sql
start mysqldump -ulwdba -phidden --routines --triggers friends > friends.sql
start mysqldump -ulwdba -phidden --routines --triggers giannosfor > giannosfor.sql
start mysqldump -ulwdba -phidden --routines --triggers javier > javier.sql
start mysqldump -ulwdba -phidden --routines --triggers johnlocke > johnlocke.sql
start mysqldump -ulwdba -phidden --routines --triggers junk > junk.sql
start mysqldump -ulwdba -phidden --routines --triggers lovesh > lovesh.sql
start mysqldump -ulwdba -phidden --routines --triggers mysql > mysql.sql
start mysqldump -ulwdba -phidden --routines --triggers nwwatson > nwwatson.sql
start mysqldump -ulwdba -phidden --routines --triggers part > part.sql
start mysqldump -ulwdba -phidden --routines --triggers preeti > preeti.sql
start mysqldump -ulwdba -phidden --routines --triggers prefixdb > prefixdb.sql
start mysqldump -ulwdba -phidden --routines --triggers replagdb > replagdb.sql
start mysqldump -ulwdba -phidden --routines --triggers rollup_test > rollup_test.sql
start mysqldump -ulwdba -phidden --routines --triggers sample > sample.sql
start mysqldump -ulwdba -phidden --routines --triggers stuff > stuff.sql
start mysqldump -ulwdba -phidden --routines --triggers table_test > table_test.sql
start mysqldump -ulwdba -phidden --routines --triggers tagmediatest > tagmediatest.sql
start mysqldump -ulwdba -phidden --routines --triggers targetdb > targetdb.sql
start mysqldump -ulwdba -phidden --routines --triggers test > test.sql
start mysqldump -ulwdba -phidden --routines --triggers test_mysqldb > test_mysqldb.sql
start mysqldump -ulwdba -phidden --routines --triggers tostinni > tostinni.sql
start mysqldump -ulwdba -phidden --routines --triggers user1267617 > user1267617.sql
start mysqldump -ulwdba -phidden --routines --triggers user391986 > user391986.sql
start mysqldump -ulwdba -phidden --routines --triggers utility > utility.sql
start mysqldump -ulwdba -phidden --routines --triggers veto > veto.sql
start mysqldump -ulwdba -phidden --routines --triggers vito > vito.sql
start mysqldump -ulwdba -phidden --routines --triggers zipcodes > zipcodes.sql
Give it a Try !!!
EDIT by JakeJ:
When run, the original solution just output the data to CMD and then closed without writing it to any file; the version below saves the data to the .sql files created.
set MYSQLUSER=root
set MYSQLPASS=pass
set BATCHFILE=C:\Backup\MySQL\Batch_mysqldump.bat
set DUMPPATH=C:\Backup\MySQL
echo @echo off > %BATCHFILE%
echo cd %DUMPPATH% >> %BATCHFILE%
cd C:\Program Files\MySQL\MySQL Server 5.5\bin\
mysql.exe -u%MYSQLUSER% -p%MYSQLPASS% -AN -e"SELECT CONCAT('mysqldump -u%MYSQLUSER% -p%MYSQLPASS% --routines --triggers ',schema_name,' > ',schema_name,'.sql') FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','performance_schema')" >> %BATCHFILE%
type %BATCHFILE%
cd C:\Backup\MySQL
Batch_mysqldump.bat
Best Answer
Note that mysqlbackup is not a hot backup solution. For databases the size of yours, there really is no debate; see the related Q&A "MySQL backup strategies?". As Shlomi Noach (GitHub's MySQL guy) says in his answer, Percona's XtraBackup is your go-to choice. It is also worth reading "MySQL Backup and Restore Best Practices" by Krzysztof Ksiazek.
By the way, no backup strategy on earth will protect you from devs/DBAs deleting records or dropping tables; that is why you should be using incremental backups, perhaps together with LVM snapshots or flushed binary logs. Incremental backups can also be performed by XtraBackup. See also "Example Backup and Recovery Strategy" in the MySQL reference manual.
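As a sketch of the incremental approach with XtraBackup (the directory layout is an assumption, and connection settings are taken from my.cnf or passed via --user/--password):

```shell
# Hedged sketch of a full + incremental backup cycle with Percona XtraBackup
FULL_DIR=/backups/xtrabackup/full
INC_DIR=/backups/xtrabackup/inc-`date +%F`

# Full (base) backup, e.g. once a week
xtrabackup --backup --target-dir=${FULL_DIR}

# Incremental backup against the base, e.g. daily
xtrabackup --backup --target-dir=${INC_DIR} --incremental-basedir=${FULL_DIR}

# To restore: prepare the base without rolling back, apply the incremental
xtrabackup --prepare --apply-log-only --target-dir=${FULL_DIR}
xtrabackup --prepare --target-dir=${FULL_DIR} --incremental-dir=${INC_DIR}
```

The --apply-log-only step matters: if the base backup is fully prepared before the incremental is merged, the incremental can no longer be applied.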
On a more general note, broad approaches to this issue are discussed in related questions on this site.
In any case, on a production system, devs (or DBAs for that matter) should not be wandering through the filesystem and/or database deleting stuff ad libitum. Any and all changes on production should have been performed at least twice on test/UAT or similar systems before being implemented in prod! This is best practice.