Mysql – How do I import a 150GB database after deleting ibdata1 and the log files in MySQL

innodb, MySQL, mysql-5.1

I have a 150GB database, but my ibdata1 file is 240GB, so I want to shrink ibdata1. My plan is to export the database, drop the tables, stop MySQL, delete the ibdata1 file, start MySQL again, and then import the database.

If that approach is correct, how can I export and import such a huge database? Also, some of the tables in the database are partitioned.

I am also not sure whether I will lose any data during this process.

Best Answer

You need to mysqldump all of the data (InnoDB and MyISAM) to local text files and then follow your steps.
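
Before dumping anything, it helps to see how much data each database holds and which storage engines are involved. Here is a minimal sketch (the credentials and the schema filter are assumptions for this example) that sums table and index sizes from information_schema:

USERPASS="-uroot -ppassword"
SQL="SELECT table_schema, engine,"
SQL="${SQL} ROUND(SUM(data_length+index_length)/POWER(1024,3),2) AS size_gb"
SQL="${SQL} FROM information_schema.tables"
SQL="${SQL} WHERE table_schema NOT IN ('information_schema','mysql')"
SQL="${SQL} GROUP BY table_schema, engine"
mysql ${USERPASS} -e"${SQL}"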

I have discussed this topic before

You may be concerned about the amount of data to dump. If you have multiple databases, dumping all of that data into a single dump file may not be advisable.

You could perform a mysqldump for each database into separate files

Please read my other posts on how to do separate mysqldumps

UPDATE 2014-04-10 11:21 EDT

Given these facts

  • You have two databases (call them db1 and db2)
  • The first database (db1) is 140 GB
  • The second database (db2) is 7 GB

Here are two courses of action

COURSE OF ACTION #1 : Dump Databases Separately

Here is a straightforward mysqldump approach to dump and restore

Dump Databases into Separate Files

# MySQL login credentials used by every command below
USERPASS="-uroot -ppassword"
# dump db1, prefixing the file with statements that recreate the database on reload
DBTODUMP=db1
DUMPFILE=${DBTODUMP}_dump.sql
echo "CREATE DATABASE IF NOT EXISTS ${DBTODUMP};" > ${DUMPFILE}
echo "USE ${DBTODUMP};" >> ${DUMPFILE}
mysqldump ${USERPASS} --routines --triggers ${DBTODUMP} >> ${DUMPFILE}
# dump db2 the same way, into its own file
DBTODUMP=db2
DUMPFILE=${DBTODUMP}_dump.sql
echo "CREATE DATABASE IF NOT EXISTS ${DBTODUMP};" > ${DUMPFILE}
echo "USE ${DBTODUMP};" >> ${DUMPFILE}
mysqldump ${USERPASS} --routines --triggers ${DBTODUMP} >> ${DUMPFILE}
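
Between the dump and the restore, you have to get rid of the old ibdata1 and redo logs and let mysqld recreate them at their small initial size. A rough sketch of that step, assuming the default datadir /var/lib/mysql and a service-managed mysqld (both assumptions, adjust for your installation); enabling innodb_file_per_table is optional but keeps future table data out of ibdata1:

# drop the now-dumped databases (as planned in the question), then shut down mysqld
mysql ${USERPASS} -e"DROP DATABASE db1; DROP DATABASE db2;"
service mysql stop
# in /etc/my.cnf under [mysqld], for example:
#   innodb_data_file_path = ibdata1:10M:autoextend
#   innodb_file_per_table
rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1
service mysql start   # mysqld recreates a small ibdata1 and fresh log files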

Restore Databases (after making new ibdata1 and logfiles)

# reload both dump files in parallel; wait blocks until both mysql clients finish
USERPASS="-uroot -ppassword"
DBTODUMP=db1
DUMPFILE=${DBTODUMP}_dump.sql
mysql ${USERPASS} < ${DUMPFILE} &
DBTODUMP=db2
DUMPFILE=${DBTODUMP}_dump.sql
mysql ${USERPASS} < ${DUMPFILE} &
wait
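
Once both restores finish, it is worth a quick sanity check before deleting the dump files. A minimal sketch (same assumed credentials, default datadir assumed) that counts the reloaded tables and confirms ibdata1 is back to a small size:

USERPASS="-uroot -ppassword"
SQL="SELECT table_schema, COUNT(*) AS table_count"
SQL="${SQL} FROM information_schema.tables"
SQL="${SQL} WHERE table_schema IN ('db1','db2')"
SQL="${SQL} GROUP BY table_schema"
mysql ${USERPASS} -e"${SQL}"
ls -lh /var/lib/mysql/ibdata1   # should be far smaller than the old 240GB file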

COURSE OF ACTION #2 : Dump Tables Separately

Here is code to dump all the tables 5 at a time

USERPASS="-uroot -ppassword"
TABLESTODUMP=/tmp/TablesToDump.txt
DUMP_LOCATION=/some/path/you/choose
# build a db:table list of every table in db1 and db2
SQL="SELECT CONCAT(table_schema,':',table_name)"
SQL="${SQL} FROM information_schema.tables"
SQL="${SQL} WHERE table_schema IN ('db1','db2')"
mysql ${USERPASS} -ANe"${SQL}" > ${TABLESTODUMP}
COMMIT_COUNT=0
COMMIT_LIMIT=5
for DBTB in `cat ${TABLESTODUMP}`
do
    # split db:table into its two parts
    DB=`echo "${DBTB}" | sed 's/:/ /g' | awk '{print $1}'`
    TB=`echo "${DBTB}" | sed 's/:/ /g' | awk '{print $2}'`
    DUMPFILE=${DUMP_LOCATION}/${DB}_${TB}.sql
    echo "CREATE DATABASE IF NOT EXISTS ${DB};" > ${DUMPFILE}
    echo "USE ${DB};" >> ${DUMPFILE}
    # dump this table in the background, at most COMMIT_LIMIT dumps at a time
    mysqldump ${USERPASS} ${DB} ${TB} >> ${DUMPFILE} &
    (( COMMIT_COUNT++ ))
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait
    fi
done
# wait for any dumps still running after the loop ends
if [ ${COMMIT_COUNT} -gt 0 ] ; then wait ; fi
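
Before moving on to wiping ibdata1, it is worth confirming that every table in the list actually produced a dump file. A quick check, reusing the variables from the script above:

# the number of db:table entries should match the number of dump files written
wc -l < ${TABLESTODUMP}
ls ${DUMP_LOCATION}/*.sql | wc -l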

Restore Databases (after making new ibdata1 and logfiles)

USERPASS="-uroot -ppassword"
DUMP_LOCATION=/some/path/you/choose
cd ${DUMP_LOCATION}
# reload every per-table dump file, one at a time
for DUMPFILE in `ls *.sql` ; do mysql ${USERPASS} < ${DUMPFILE} ; done
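
If the serial reload is too slow, the same 5-at-a-time pattern used for the dump can be applied to the restore as well. A sketch under the same assumptions (credentials and dump location):

USERPASS="-uroot -ppassword"
DUMP_LOCATION=/some/path/you/choose
COMMIT_COUNT=0
COMMIT_LIMIT=5
cd ${DUMP_LOCATION}
for DUMPFILE in `ls *.sql`
do
    # reload each per-table dump in the background, at most COMMIT_LIMIT at a time
    mysql ${USERPASS} < ${DUMPFILE} &
    (( COMMIT_COUNT++ ))
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait
    fi
done
# wait for any restores still running after the loop ends
if [ ${COMMIT_COUNT} -gt 0 ] ; then wait ; fi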

CAVEAT

Make sure DUMP_LOCATION has enough space
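
A rough way to gauge that up front is to compare the logical size of the data against the free space at DUMP_LOCATION; data_length is only an approximation of the dump size, but it gives the order of magnitude. A sketch (assumed credentials and path):

USERPASS="-uroot -ppassword"
SQL="SELECT ROUND(SUM(data_length)/POWER(1024,3),2) AS approx_dump_gb"
SQL="${SQL} FROM information_schema.tables"
SQL="${SQL} WHERE table_schema IN ('db1','db2')"
mysql ${USERPASS} -e"${SQL}"
df -h /some/path/you/choose   # free space where the dump files will be written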