If you already have the schema in place on the target, use mysqldump options that dump the data only, so the restore performs just the inserts:
mysqldump -h... -u... -p... --no-create-info --databases ... > MySQLData.sql
You should also raise bulk_insert_buffer_size (default is 8M) on all DB servers to accommodate large extended inserts. This will also help LOAD DATA INFILE when loading non-empty tables.
You will also have to adjust max_allowed_packet (default is 1M).
Try these settings for starters:
[mysqld]
max_allowed_packet=256M
bulk_insert_buffer_size=256M
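Once set, you can verify the running values from any session (these are the actual server variable names; SHOW VARIABLES has supported a WHERE clause since MySQL 5.1):

```sql
-- Check the current values of both settings
SHOW VARIABLES
WHERE Variable_name IN ('max_allowed_packet', 'bulk_insert_buffer_size');
```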
UPDATE 2011-10-11 06:53 EDT
Regardless of how beefy the hardware is, how much RAM you have allocated, how well tuned the OS is, and how current the version of MySQL is, MySQL will only perform as well as it is configured.
Example: If you have a swimming pool that can hold 10,000 gallons of water, you have a truck holding 10,000 gallons of water, and the hose on the truck is only as big as a straw, you can push only so much water through the straw to fill the pool. Getting a bigger truck or getting a faster water pump simply will not improve things. You must exchange the hose for a much bigger hose to accommodate more water, thus, more throughput.
In like fashion, MySQL 5.5 out-of-the-box does not come fully tuned.
Example #1: MySQL 5.5 comes with semisynchronous replication. By default, it is disabled. You must perform a two-step process: start mysql, run INSTALL PLUGIN for the master and slave modules, shut down mysql, add timing and activation options for the semisync features to my.cnf, then start mysql a second time. Only then will semisync replication work.
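A sketch of those two steps (the plugin and option names are MySQL 5.5's semisync names; the timeout value is just an example):

```sql
-- Step 1: install the plugins
-- On the master:
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
-- On each slave:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';

-- Step 2: add the activation/timing options to my.cnf, then restart mysql
-- On the master:
--   [mysqld]
--   rpl_semi_sync_master_enabled = 1
--   rpl_semi_sync_master_timeout = 10000   -- milliseconds (example value)
-- On each slave:
--   [mysqld]
--   rpl_semi_sync_slave_enabled = 1
```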
Example #2: The bulk insert buffer is 8M by default. It will not grow because of the presence of any particular hardware or software; it stays 8M until you increase it. You can increase it either by adding it to my.cnf and restarting mysql, or by running SET bulk_insert_buffer_size = 268435456; to set it to 256M within a session and then loading the mysqldump within that same session.
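For instance, within a single mysql client session (the dump file path is just an example):

```sql
-- Raise this session's bulk insert buffer to 256M,
-- then load the dump in the same session so the setting applies
SET bulk_insert_buffer_size = 268435456;
SOURCE /root/MySQLData.sql;
```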
Example #3: MySQL 5.5 is fully capable of engaging multiple CPUs. By default, the features for engaging multiple CPUs are disabled. They require tuning because although MySQL 5.5 is multicore ready, MySQL 5.5 is only as multithreaded as you configure it.
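A minimal my.cnf sketch of the kind of settings involved; the values shown are examples, not recommendations, and must be tuned to your hardware and workload:

```ini
[mysqld]
innodb_thread_concurrency    = 0   # 0 = let InnoDB decide (the 5.5 default)
innodb_read_io_threads       = 8   # default is 4
innodb_write_io_threads      = 8   # default is 4
innodb_buffer_pool_instances = 4   # 5.5 default is 1
```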
Conclusion: You must configure MySQL to recognize that it has beefy hardware, more RAM, and a cooperative OS at its disposal.
If you are dumping a mysql database that has a mixture of InnoDB and MyISAM and you have scheduled downtime:
mysqldump -u... -p... --master-data=1 --single-transaction --flush-privileges --routines --triggers --all-databases > /root/mydata.sql
Since --opt is enabled by default, the following options are already enabled:
--add-drop-table
--add-locks
--create-options
--quick
--lock-tables
--set-charset
--disable-keys
If you are concerned about InnoDB/MyISAM together in the mysqldump and you want the database placed in a read-only state, try putting a read lock across all tables manually.
MYSQL_CONN="-hhostip -uusername -ppassword"
mysql ${MYSQL_CONN} -A -e"FLUSH TABLES WITH READ LOCK; SELECT SLEEP(86400)" &
SEARCHING_FOR_SLEEP=0
while [ ${SEARCHING_FOR_SLEEP} -eq 0 ]
do
sleep 3
SEARCHING_FOR_SLEEP=`mysql ${MYSQL_CONN} -A -e"SHOW PROCESSLIST;" | grep -c "SELECT SLEEP(86400)"`
done
sleep 1
SLEEP_ID=`mysql ${MYSQL_CONN} -A -e"SHOW PROCESSLIST;" | grep "SELECT SLEEP(86400)" | awk '{print $1}'`
mysqldump ${MYSQL_CONN} --master-data=2 --single-transaction --flush-privileges --routines --triggers --all-databases > /root/mydata.sql
mysql ${MYSQL_CONN} -A -e"KILL ${SLEEP_ID}"
This approach is better because --single-transaction protects only InnoDB; it does not prevent MyISAM tables from changing during the dump.
Before launching the mysqldump, make sure binary logging is enabled on the master. If it is not, do the following:
Step 01) Add this to /etc/my.cnf on the master
[mysqld]
log-bin=mysql-bin
Step 02) service mysql restart
This will enable binary logging on the master.
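You can confirm that binary logging is active, and see the current log file and position, with:

```sql
-- log_bin should report ON
SHOW VARIABLES LIKE 'log_bin';
-- shows the current binary log file name and position
SHOW MASTER STATUS;
```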
On the new slave, you can run the following command:
CHANGE MASTER TO
MASTER_HOST='IP of the master',
MASTER_PORT=3306,
MASTER_USER='whatever_username',
MASTER_PASSWORD='whatever_password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=107;
Make sure the replication user (whatever_username) exists on the master. If it does not, run this command on the master:
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT
ON *.* TO 'whatever_username'@'%' IDENTIFIED BY 'whatever_password';
Then do the mysqldump to /root/mydata.sql.
Move the /root/mydata.sql from the master to the slave.
Next, execute in the mysql client this command on the slave:
source /root/mydata.sql
This will load the mysqldump into the slave and set the correct log file and log position in /var/lib/mysql/master.info.
Finally, run START SLAVE;
on the slave and mysql replication should get going.
Run SHOW SLAVE STATUS\G
to check replication status.
Give it a Try !!!
Plan A: Provide a WHERE clause that exactly covers that partition.
Plan B: Let's discuss (1) why you are using partitioning, and (2) why you need that kind of dump.
Plan C: See "transportable tablespaces" as a way to break off a partition, turning it into an independent table.
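Plan A might look like the following; the table mydb.sales, its RANGE partitioning column sale_date, and the date range are hypothetical stand-ins for your own partition definition:

```
mysqldump -h... -u... -p... --no-create-info \
  --where="sale_date >= '2011-01-01' AND sale_date < '2011-02-01'" \
  mydb sales > sales_p201101.sql
```

The WHERE clause must match the partition's boundaries exactly, otherwise the dump will contain rows from neighboring partitions or miss rows from the one you want.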