MySQL Database Migration – Moving Large Databases with mysqldump

backup, migration, MySQL, mysqldump

I need to move a MySQL database of about 17 GB from a cPanel/CentOS server where I do not have root access to a CentOS server where I do have full root access. I have limited SSH access on the cPanel server, with mysqldump available (I think).

When I did a test download of only about 250 MB through the cPanel MySQL backup utility, the server ran out of RAM and shut down.

My choices seem to be:

Master/slave replication (I don't have root, so that seems out)
Rsync of the raw database files (I don't have root, so that seems out)
mysqldump
phpMyAdmin (crashes the server when tables have too many rows)

At this point I am planning on doing a mysqldump from the cPanel server. I don't want the site locked for a long time, since it is live. I saw that I can run mysqldump with --quick so it retrieves rows one at a time instead of loading whole tables into memory. Some of the tables have 2 million rows; most are under 50 thousand.
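For reference, this is roughly the single-shot version I had been considering, streaming the dump over SSH so nothing large is written or buffered on the cPanel server. The hosts, credentials, and database names below are placeholders, and my understanding is that --single-transaction only avoids long locks if the tables are InnoDB:

# Run from the new CentOS server; all names and passwords below are placeholders
ssh cpaneluser@source.example.com \
  "mysqldump --single-transaction --quick --hex-blob -udbuser -p'dbpass' sourcedb | gzip -c" \
  | gunzip -c | mysql -uroot -p'targetpass' targetdb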

What is the best way to move the database without locking up the system or crashing the server by exhausting its RAM?

Both servers have powerful CPUs and 16 GB of RAM, with fairly current MySQL versions.

Would you recommend a table-by-table dump, or is that even necessary?

I might be able to get the current cPanel host to set up my.cnf for me for the master/slave connection (not sure about that one, though).
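If they will cooperate, my understanding is that the master side only needs something roughly like this in my.cnf (the values are illustrative), plus a replication user and a binary log position to start the slave from:

[mysqld]
server-id    = 1
log_bin      = mysql-bin
binlog_do_db = sourcedb   # placeholder database name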

Best Answer

What you need to do is mysqldump the tables separately, in batches, and load them as you go.

On your new CentOS server, write the following script:

#!/bin/bash
# Connection settings for the source (cPanel) server and the local target
MYSQL_SOURCE_HOST=10.20.30.40
MYSQL_SOURCE_USER=root
MYSQL_SOURCE_PASS=rootpassword
SRC_CONN="-h${MYSQL_SOURCE_HOST} -u${MYSQL_SOURCE_USER} -p${MYSQL_SOURCE_PASS}"

MYSQL_TARGET_HOST=localhost
MYSQL_TARGET_USER=root
MYSQL_TARGET_PASS=rootpassword
TGT_CONN="-h${MYSQL_TARGET_HOST} -u${MYSQL_TARGET_USER} -p${MYSQL_TARGET_PASS}"

SQL="SELECT table_name FROM information_schema.tables"
SQL="${SQL} WHERE table_schema = 'xcart'"
SQL="${SQL} ORDER BY data_length"
mysql ${SRC_CONN} -ANe"${SQL}" > /tmp/ListOfTables.txt
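# Optional sanity check: how many tables were found, smallest first
wc -l /tmp/ListOfTables.txt
head  /tmp/ListOfTables.txt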

# Dump each table straight into the target, up to COMMIT_LIMIT dumps in parallel
COMMIT_COUNT=0
COMMIT_LIMIT=10
MYSQLDUMP_OPTIONS="--hex-blob --routines --triggers --events"
for TBL in `cat /tmp/ListOfTables.txt`
do
    mysqldump ${SRC_CONN} ${MYSQLDUMP_OPTIONS} xcart ${TBL} | mysql ${TGT_CONN} -Dxcart &
    (( COMMIT_COUNT++ ))
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        # Wait for the current batch of dumps to finish before starting more
        wait
    fi
done
# Wait for any leftover background dumps
if [ ${COMMIT_COUNT} -gt 0 ] ; then wait ; fi
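This assumes the xcart schema (that name is just my example; substitute your own database) already exists on the target, and that the source MySQL account accepts connections from the new server. If the schema is missing, create it first:

mysql ${TGT_CONN} -e "CREATE DATABASE IF NOT EXISTS xcart"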

I have other ideas in my old post: How can I optimize a mysqldump of a large database?

Give it a Try !!!