The basic problem is that everything is recursive and you are attempting to lock pages of data in InnoDB, creating lots of MVCC info. In some cases, you may have attempted to lock the same InnoDB page repeatedly, which explains the deadlocks. I have discussed this situation in depth in three separate posts on this same issue.
You need a workaround. You may want to try the following algorithm:
For this example:
Step 01) Make a table to collect all father_ids to be updated
DROP TABLE IF EXISTS fathers_to_update;
CREATE TABLE fathers_to_update
(
father_id INT NOT NULL,
PRIMARY KEY (father_id)
);
Step 02) Make a table that will serve as a FIFO queue, and put the starting value of 12 into it:
DROP TABLE IF EXISTS fathers_to_queue;
CREATE TABLE fathers_to_queue
(
father_id INT,
id INT NOT NULL AUTO_INCREMENT,
PRIMARY KEY (id)
);
INSERT INTO fathers_to_queue (father_id) VALUES (12);
Step 03) Get the Count of fathers_to_queue
SELECT COUNT(1) FROM fathers_to_queue;
If the count is 0, go to Step 07.
Step 04) Remove the front entry from fathers_to_queue
SELECT id, father_id INTO @frontndx,@dad FROM fathers_to_queue ORDER BY id LIMIT 1;
DELETE FROM fathers_to_queue WHERE id = @frontndx;
Step 05) Collect the father_id of record @dad from 161_ft_individual, and push it onto the queue as well. (INSERT IGNORE prevents collecting the same ancestor twice, and the IS NOT NULL filter keeps NULLs out of the queue so the loop in Step 03 can terminate.)
INSERT IGNORE INTO fathers_to_update SELECT father_id FROM 161_ft_individual WHERE id = @dad AND father_id IS NOT NULL;
INSERT INTO fathers_to_queue (father_id) SELECT father_id FROM 161_ft_individual WHERE id = @dad AND father_id IS NOT NULL;
Step 06) Go back to Step 03
Step 07) Perform a single UPDATE of all records in 161_ft_individual you collected in fathers_to_update
UPDATE fathers_to_update A
INNER JOIN 161_ft_individual B
USING (father_id)
SET B.total_leg = B.total_leg + @amount;
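The steps above can be wrapped in a single MySQL stored procedure. This is a minimal sketch: the table and column names (161_ft_individual, father_id, total_leg) come from the question, while the procedure name and its parameters (p_start, p_amount) are hypothetical placeholders. Note that it assumes the hierarchy contains no cycles; a cycle would keep the queue from ever draining.

```sql
DELIMITER $$
DROP PROCEDURE IF EXISTS UpdateAncestorLegs $$
CREATE PROCEDURE UpdateAncestorLegs(IN p_start INT, IN p_amount INT)
BEGIN
    DECLARE v_frontndx INT;
    DECLARE v_dad INT;

    -- Start with clean work tables
    DELETE FROM fathers_to_update;
    DELETE FROM fathers_to_queue;

    -- Step 02: seed the FIFO queue with the starting id
    INSERT INTO fathers_to_queue (father_id) VALUES (p_start);

    -- Steps 03-06: drain the queue, collecting each ancestor once
    WHILE (SELECT COUNT(1) FROM fathers_to_queue) > 0 DO
        -- Step 04: pop the front entry
        SELECT id, father_id INTO v_frontndx, v_dad
        FROM fathers_to_queue ORDER BY id LIMIT 1;
        DELETE FROM fathers_to_queue WHERE id = v_frontndx;

        -- Step 05: record the parent and enqueue it for the next hop
        INSERT IGNORE INTO fathers_to_update
        SELECT father_id FROM 161_ft_individual
        WHERE id = v_dad AND father_id IS NOT NULL;

        INSERT INTO fathers_to_queue (father_id)
        SELECT father_id FROM 161_ft_individual
        WHERE id = v_dad AND father_id IS NOT NULL;
    END WHILE;

    -- Step 07: apply the increment in one UPDATE
    UPDATE fathers_to_update A
    INNER JOIN 161_ft_individual B USING (father_id)
    SET B.total_leg = B.total_leg + p_amount;
END $$
DELIMITER ;

CALL UpdateAncestorLegs(12, @amount);
```

Because the traversal touches the queue tables rather than holding locks on 161_ft_individual row by row, all the locking on the real table is confined to the single UPDATE at the end.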
That's it. I basically performed a preorder tree traversal back up the hierarchy, collecting ancestors until father_id was NULL, which empties the queue and makes Step 03 end the loop.
Give it a Try !!!
UPDATE 2011-12-16 12:18 EDT
Back on October 24, 2011, I actually wrote an algorithm in the MySQL Stored Procedure language to perform preorder tree traversal: Find highest level of a hierarchical field: with vs without CTEs . Hope it helps !!!
...even surpassing its theoretical maximum possible allocation.
[OK] Maximum possible memory usage: 7.3G (46% of installed RAM)
There is not actually a way to calculate maximum possible memory usage for MySQL, because there is no cap on the memory it can request from the system.
The calculation done by mysqltuner.pl is only an estimate, based on a formula that doesn't take into account all possible variables, because if all possible variables were taken into account, the answer would always be "infinite." It's unfortunate that it's labeled this way.
Here is my theory on what's contributing to your excessive memory usage:
thread_cache_size = 128
Given that your max_connections is set to 200, the value of 128 for thread_cache_size seems far too high. Here's what makes me think this might be contributing to your problem:
When a thread is no longer needed, the memory allocated to it is released and returned to the system unless the thread goes back into the thread cache. In that case, the memory remains allocated.
http://dev.mysql.com/doc/refman/5.6/en/memory-use.html
If your workload causes even an occasional client thread to require a large amount of memory, those threads may be holding onto that memory, then going back to the pool and sitting around, continuing to hold on to memory they don't technically "need" any more, on the premise that holding on to the memory is less costly than releasing it if you're likely to need it again.
I think it's worth a try to do the following, after first making a note of how much memory MySQL is using at the moment.
Note how many threads are currently cached:
mysql> show status like 'Threads_cached';
+----------------+-------+
| Variable_name | Value |
+----------------+-------+
| Threads_cached | 9     |
+----------------+-------+
1 row in set (0.00 sec)
Next, disable the thread cache.
mysql> SET GLOBAL thread_cache_size = 0;
This disables the thread cache, but the cached threads will stay in the pool until they're used one more time. Disconnect from the server, then reconnect and repeat.
mysql> show status like 'Threads_cached';
Continue disconnecting, reconnecting, and checking until the counter reaches 0.
Then, see how much memory MySQL is holding.
You may see a decrease, possibly significant, and then again you may not. I tested this on one of my systems, which had 9 threads in the cache. Once those threads had all been cleared out of the cache, the total memory held by MySQL did decrease... not by much, but it does illustrate that threads in the cache do release at least some memory when they are destroyed.
If you see a significant decrease, you may have found your problem. If you don't, then there's one more thing that needs to happen, and how quickly it can happen depends on your environment.
If the theory holds that the other threads -- the ones currently servicing active client connections -- have significant memory allocated to them, either because of recent work in their current client session or because of memory-intensive work done by an earlier connection before they sat idle in the pool, then you won't see the full reduction in memory consumption until those threads are allowed to die and be destroyed. Presumably your application doesn't hold them forever, but how long it takes to know for sure will depend on whether you can cycle your application (dropping and reconnecting the client threads) or whether you'll have to wait for the connections to be dropped and reconnected over time on their own.
But... it seems like a worthwhile test. You should not see a substantial performance penalty by setting thread_cache_size to 0. Fortunately, thread_cache_size is a dynamic variable, so you can freely change it with the server running.
Since you say that most of the tables are MyISAM, I would check I/O and locking.
You can verify your disk throughput with a tool like iotop.
http://www.manpagez.com/man/1/iotop/
MyISAM relies on the file system cache to store data blocks, so MySQL has to compete with all the other processes running on your OS for access to that cache. The key_buffer only applies to index blocks.
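One quick way to see whether the key buffer is large enough for your MyISAM index working set is to compare key cache misses against read requests. This is a hedged sketch using standard status counters; the ~1% threshold mentioned in the comment is a common rule of thumb, not an absolute rule:

```sql
-- Key_reads = index block reads that had to go to disk;
-- Key_read_requests = all index block read requests.
SHOW GLOBAL STATUS LIKE 'Key_read%';

-- A miss ratio (Key_reads / Key_read_requests) well above ~0.01
-- suggests key_buffer_size is too small for the index working set.
```

Remember that even a perfect key cache hit rate only covers index blocks; the data blocks still depend on the file system cache, which is exactly where RAM starvation would show up.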
http://dev.mysql.com/doc/refman/5.0/en/myisam-key-cache.html
If you notice that MySQL is generating a lot of IO activity, that may mean that your OS is starved for RAM in spite of the 16GB you say is available.
I recently ran into a similar problem with a WordPress install that relied heavily on two 2GB MyISAM tables.
If your SELECT statements are trying to read from those same tables while (DML) INSERT, UPDATE, or DELETE queries are modifying them, that would explain why table_locks_waited is so high on your system. Those DML queries could be coming from a website, a cron job, or some other process. If locking contention is the problem, you may want to consider converting the tables that cause the most locking to the InnoDB engine. Tools like Jet Profiler and Innotop can help you pinpoint which tables those are.
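To put that into practice, here is a hedged sketch: first confirm the table-level lock contention from the status counters, then convert the offending tables. The table names below (wp_posts, wp_comments) are placeholders; substitute whichever tables your profiling shows are the hotspots.

```sql
-- A high Table_locks_waited relative to Table_locks_immediate
-- indicates SELECTs are queueing behind MyISAM table locks.
SHOW GLOBAL STATUS LIKE 'Table_locks%';

-- Convert the hottest tables to InnoDB (row-level locking).
-- NOTE: this rebuilds the table and locks it for the duration,
-- so run it during a maintenance window.
ALTER TABLE wp_posts ENGINE=InnoDB;
ALTER TABLE wp_comments ENGINE=InnoDB;
```

After converting, remember that those tables no longer use the key_buffer; budget innodb_buffer_pool_size for them instead.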