According to the MySQL Performance Blog:
Let's start with a bit of background: in InnoDB, row-level locks are implemented by having a special lock table, located in the buffer pool, where a small record is allocated for each page and, for each row locked on that page, a bit can be set. In theory this can give overhead as low as a few bits per row, so let's see how it looks in practice.
The same reference brings this out...
So we locked over 100K rows using about 44KB. This is still quite efficient, at less than 4 bits per locked row.
In practice this means memory consumption by row-level locks should not be a problem even for rather large databases: even a billion locked rows should take about half a GB of memory, which is a small fraction of the memory used on serious systems. Furthermore, you would be unlikely to need or want to lock every row in your table/database, which makes it an even smaller problem.
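To check the arithmetic behind those figures: 44 KB is 44 × 1024 × 8 = 360,448 bits, and 360,448 bits / 100,000 rows ≈ 3.6 bits per locked row. Scaled up, a billion locked rows at about 3.6 bits each is roughly 450 MB, which matches the half-GB figure above.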
Evidently, all 17 million rows need to be locked. Based on the link from mysqlperformanceblog.com:
- 100,000 locked rows take 44 KB
- 17,000,000 locked rows should take 170 × 44 KB = 7,480 KB, just over 7 MB
I can easily see an unconfigured innodb_buffer_pool_size (the default is 128M in MySQL 5.5, and 8M before MySQL 5.5) causing a problem.
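To see what your server is actually running with, check the variable (the value is reported in bytes):

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';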
OK, let's get back to your questions:

Why exactly are there so many locks, i.e. what is being locked in this query?
Every row the query touches in an InnoDB table must be locked. This is what makes the data available via MVCC, ACID compliance, and transaction isolation.
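If you want to watch the row locks accumulate while such a query runs, the standard InnoDB monitor output reports them per transaction (the counts in the sample line below are illustrative, not from the original question):

SHOW ENGINE INNODB STATUS\G
-- in the TRANSACTIONS section, look for a line like:
--   2 lock struct(s), heap size 376, 100000 row lock(s)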
Why didn't locking the table solve the problem? Do I need to do the lock in a different way or lock something else as well?
Same as Answer 1
If the only solution is to increase innodb_buffer_pool_size, how large does it need to be for this query?
There must be enough RAM to hold not only your working set of InnoDB data and index pages, but also a small amount of overhead (44 KB per 100,000 locked rows) for the row locks.
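As a rough sizing sketch (this query and its 1.6 headroom multiplier are a common rule of thumb, not an exact requirement), you can total your InnoDB data and index bytes from information_schema:

SELECT CEILING(Total_InnoDB_Bytes * 1.6 / POWER(1024,2)) AS Buffer_Pool_MB
FROM (SELECT SUM(data_length + index_length) AS Total_InnoDB_Bytes
      FROM information_schema.tables
      WHERE engine = 'InnoDB') A;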
Given the Following
+<------+
|       ^
V       |
M1      M2
|       ^
V       |
+-------+
and You Want the Following
      +<------+
      |       ^
      V       |
S1<---M1      M2
      |       ^
      V       |
      +-------+
Given that the Master's IP is 10.1.2.30, log on to the Slave and run the mysqldump like this:
MAS_IP=10.1.2.30
SLV_IP=localhost
# note the -h host flags; without them the IP would be parsed as a database name
MYSQL_MAS_CONN="-h${MAS_IP} -uroot -ppassword"
MYSQL_SLV_CONN="-h${SLV_IP} -uroot -ppassword"
# --master-data=1 embeds the CHANGE MASTER TO statement in the dump
MYSQLDUMP_OPTIONS="--single-transaction --master-data=1 --flush-privileges"
MYSQLDUMP_OPTIONS="${MYSQLDUMP_OPTIONS} --routines --triggers"
MYSQLDUMP_OPTIONS="${MYSQLDUMP_OPTIONS} --add-drop-database --all-databases"
mysql ${MYSQL_MAS_CONN} -e"FLUSH LOGS;"
echo "STOP SLAVE;" > MySQLData.sql
mysqldump ${MYSQL_MAS_CONN} ${MYSQLDUMP_OPTIONS} >> MySQLData.sql
echo "START SLAVE;" >> MySQLData.sql
# load into the Slave, not the Master
mysql ${MYSQL_SLV_CONN} < MySQLData.sql
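Once the load finishes and START SLAVE runs, you can confirm replication is flowing (a standard check, nothing specific to this setup):

SHOW SLAVE STATUS\G
-- both Slave_IO_Running and Slave_SQL_Running should say Yes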
If you have already performed this, let me address your comments.
But upon adding the third DB server and starting the slave on it, it throws a "Cannot add or update a child row: a foreign key constraint fails" error.
Why would foreign key problems happen if you copied the data from a Master to a Slave? The auto_increment ids on the Slave simply did not match those of the Master. This is true even if the Slave's Master is where you got the data from. Why?
auto_increment_increment is set to 2 in both masters. auto_increment_offset is set to 1 in Master 1 and 2 in Master 2. We have mostly InnoDB tables and some MyISAM tables.
You have to make sure the Slave has the exact same auto_increment_increment and auto_increment_offset as the Master you are replicating from.
Go to the Master and run
SHOW VARIABLES LIKE 'auto_increment%';
Put those values into the Slave's my.cnf and restart mysql.
REMEMBER: A Slave's auto_increment behavior needs to be identical to that of its master.
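Here is a minimal sketch of what these two settings do (table t is hypothetical; both variables can be set at session scope for a quick test):

SET SESSION auto_increment_increment = 2;
SET SESSION auto_increment_offset = 1;
CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, note VARCHAR(10)) ENGINE=InnoDB;
INSERT INTO t (note) VALUES ('a'),('b'),('c');
SELECT id FROM t;  -- returns 1, 3, 5; with auto_increment_offset = 2 it would be 2, 4, 6

If the Slave generates ids under different settings than its Master did, parent and child rows can end up pointing at ids that do not line up, which is one way to hit the foreign key error above.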
Give it a Try !!!
UPDATE 2013-04-06 17:26
Here is the problem. Let's say you have this setup:
      +<------+
      |       ^
      V       |
S1<---M1      M2
      |       ^
      V       |
      +-------+
If you did the following:
- took a mysqldump of the data from M1
- loaded the dump into S1
- enabled Replication from M1 to S1
- M1 and S1 have identical auto_increment_offset
- M1 and S1 have identical auto_increment_increment
there should never be foreign key violations.
What you may need to do is clean up every server's view of the auto increment values.
Here is something to try
- Set these values in M1's /etc/my.cnf
- auto_increment_increment = 10
- auto_increment_offset = 1
- Set these values in M2's /etc/my.cnf
- auto_increment_increment = 10
- auto_increment_offset = 1
- Set these values in S1's /etc/my.cnf
- auto_increment_increment = 10
- auto_increment_offset = 1
Restart mysql on all three servers
Load Data like this:
- mysqldump the data from M1 to a dump file
- Run SET GLOBAL sql_log_bin = 0; on M1 (see the scope note after this list)
- Load the mysqldump into M1
- Run SET GLOBAL sql_log_bin = 1; on M1
- Run SET GLOBAL sql_log_bin = 0; on M2
- Load the mysqldump into M2
- Run SET GLOBAL sql_log_bin = 1; on M2
- Load the mysqldump into S1
- Reset Replication amongst the three servers
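A note on the SET GLOBAL sql_log_bin steps: the GLOBAL value only takes effect for sessions that connect after it is set, so it works when the dump is loaded from a fresh mysql client invocation. If you load the dump from an already-open session, use SESSION scope instead; a minimal sketch (the dump file path is hypothetical):

SET SESSION sql_log_bin = 0;  -- suppress binary logging for this session only
SOURCE /root/MySQLData.sql;
SET SESSION sql_log_bin = 1;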
If nothing breaks, you can then do this:
Set these values in M2's /etc/my.cnf
- auto_increment_increment = 10
- auto_increment_offset = 2
Restart mysql on M2
Everything should be clean now. From here, INSERTs should be properly handled.
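A quick way to confirm each server picked up its settings after the restart:

SELECT @@global.auto_increment_increment, @@global.auto_increment_offset;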
Give it a Try !!!
Best Answer
A guess:
Even though the table is effectively locked, SHOW OPEN TABLES is oblivious to it.

LOCK TABLES is a very old DDL statement in MySQL. It was probably a kludge to let users manually simulate transactions. At about the same time (version 3?), SHOW OPEN TABLES was added.

Once InnoDB was added, it became clear that LOCK TABLES was no longer needed; InnoDB does row-level locking for DML and transactions.

There are a few DDL statements that effectively "lock" a table even in InnoDB, but they probably come through a different part of the code. That is (remember, I am guessing), LOCK TABLES and SHOW OPEN TABLES know about each other, but ALTER runs in a different circle.

Feel free to complain at bugs.mysql.com.
I checked 122 systems; according to Com_show_open_tables, only 11 had used SHOW OPEN TABLES since startup.
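For reference, the counter in question is a standard status variable; you can check it on your own server with:

SHOW GLOBAL STATUS LIKE 'Com_show_open_tables';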