MySQL – INSERT INTO ... SELECT – Replication lag – Percona Server 5.6

mysql, mysql-5.6, percona, percona-server, replication

I have two MySQL instances (Percona server 5.6.31) in Master-Slave replication setup.

I have set the below configuration:

  1. Row-based replication (binlog_format = ROW) is enabled.
  2. Transaction isolation is set to READ-COMMITTED.
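
In my.cnf terms, those two settings amount to roughly the following (only the relevant lines shown):

[mysqld]
# row-based binary logging
binlog_format = ROW
# READ-COMMITTED transaction isolation
transaction-isolation = READ-COMMITTED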

Today, an INSERT was running on my master. It was of the form:

INSERT INTO table1 SELECT * FROM table2

table2 has 200 million rows.

Although only 5,000 rows were inserted, the operation lasted 30 minutes, and I observed replication lag while it ran.

I have LOAD DATA INFILE disabled due to security concerns, so I cannot load the rows that way either.

I came across a Percona article which says this problem is resolved on versions above 5.1 when ROW format is used.

  1. How can I keep my slave in sync with the master in situations like this?
  2. Why does the slave lag here?

Best Answer

The semantics of that statement require it all to be done in a single transaction. This may well exceed innodb_lock_wait_timeout and/or cause other operations to exceed it.
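
The current limit can be checked, and raised just for the session running the big statement, along these lines (300 seconds is only an illustrative value):

-- current value, in seconds (the default is 50)
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
-- raise it only for the session running the INSERT ... SELECT
SET SESSION innodb_lock_wait_timeout = 300;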

With RBR, each of the 200M rows will be separately fed through the replication stream; this is a huge amount of data (more than the size of the table itself).

The slave cannot apply any other replication events while it is executing the 200M individual inserts, since the SQL thread applies events serially (by default).
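
While that is happening, the slave's state can be watched with the usual commands; Seconds_Behind_Master typically keeps climbing until the whole batch of row events has been applied:

SHOW SLAVE STATUS\G    -- watch Seconds_Behind_Master and Slave_SQL_Running_State
SHOW PROCESSLIST;      -- the SQL thread shows what it is currently applying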

What to do? First, explain more about the situation.

  • Does table2 change during the operation?
  • Does table2 exist on the slave?
  • Let's see SHOW CREATE TABLE for both tables. (Need to see indexes, FKs, etc.)
  • Is the schema for table1 identical to table2?
  • Can you switch to SBR (statement-based replication) for this one statement? That will solve the replication-stream bloat, but not the other issues. (See the session-level sketch after this list.)
  • Will it be OK to copy the rows over in chunks of, say, 1000 rows, with each chunk being its own transaction? (That is, if there is a crash, the table may be left partially copied.) If so, adapt the tips in http://mysql.rjweb.org/doc.php/deletebig#deleting_in_chunks (a rough sketch follows below).
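
If the session-level SBR switch is acceptable, it would look roughly like this (it needs the SUPER privilege, and the statement must be safe to log as a statement):

SET SESSION binlog_format = 'STATEMENT';
INSERT INTO table1 SELECT * FROM table2;
SET SESSION binlog_format = 'ROW';

And a rough sketch of the chunked copy, assuming table2 has an AUTO_INCREMENT primary key named id (adjust the column name and chunk size to your schema). The loop would live in a small script, with autocommit on so that each chunk commits as its own transaction:

-- before the first chunk
SET @last_id := 0;
-- repeat the next two statements until the INSERT affects 0 rows
INSERT INTO table1
    SELECT * FROM table2
    WHERE id > @last_id
    ORDER BY id
    LIMIT 1000;
SET @last_id := (SELECT MAX(id) FROM table1);  -- assumes table1 receives only these rows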