The basic problem is that everything is recursive and you are attempting to lock pages of data in InnoDB, generating lots of MVCC information. In some cases, you may have attempted to lock the same InnoDB page repeatedly, which explains the deadlocks. I have discussed this situation in depth in three separate posts on this same issue:
You need a workaround. You may want to try the following algorithm:
For this example, assume the traversal starts at the row whose id is 12.
Step 01) Create a table to collect all the father_ids to be updated
DROP TABLE IF EXISTS fathers_to_update;
CREATE TABLE fathers_to_update
(
father_id INT NOT NULL,
PRIMARY KEY (father_id)
);
Step 02) Create a table to serve as a FIFO queue and seed it with the value 12:
DROP TABLE IF EXISTS fathers_to_queue;
CREATE TABLE fathers_to_queue
(
father_id INT,
id INT NOT NULL AUTO_INCREMENT,
PRIMARY KEY (id)
);
INSERT INTO fathers_to_queue (father_id) VALUES (12);
Step 03) Get the Count of fathers_to_queue
SELECT COUNT(1) FROM fathers_to_queue;
If the count is 0, go to Step 07)
Step 04) Remove the front number in fathers_to_queue
SELECT id, father_id INTO @frontndx, @dad FROM fathers_to_queue ORDER BY id LIMIT 1;
DELETE FROM fathers_to_queue WHERE id = @frontndx;
Step 05) Collect the father_id of the 161_ft_individual row whose id is @dad into both tables (skipping NULLs so the traversal terminates at the root)
INSERT IGNORE INTO fathers_to_update SELECT father_id FROM 161_ft_individual WHERE id = @dad AND father_id IS NOT NULL;
INSERT INTO fathers_to_queue (father_id) SELECT father_id FROM 161_ft_individual WHERE id = @dad AND father_id IS NOT NULL;
Step 06) Go back to Step 03
Step 07) Perform a single UPDATE of all records in 161_ft_individual collected in fathers_to_update (assuming @amount holds the value to add)
UPDATE fathers_to_update A
INNER JOIN 161_ft_individual B
ON B.id = A.father_id
SET B.total_leg = B.total_leg + @amount;
That's it. This essentially performs a preorder tree traversal back up the hierarchy, collecting records until father_id is NULL, at which point Step 03 finds the queue empty and stops.
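The queue-driven loop in Steps 02-07 can be sketched in Python using sqlite3; the table and column names (ft_individual, father_id, total_leg) mirror the SQL above, and the three-row hierarchy and the amount are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE ft_individual "
            "(id INTEGER PRIMARY KEY, father_id INTEGER, total_leg INTEGER)")
# Tiny hierarchy: 12's father is 11, 11's father is 10, 10 is the root
cur.executemany("INSERT INTO ft_individual VALUES (?,?,?)",
                [(10, None, 0), (11, 10, 0), (12, 11, 0)])

amount = 5                      # plays the role of @amount
queue = [12]                    # Step 02: seed the FIFO queue
to_update = set()               # Step 01: the collection table

while queue:                    # Step 03: stop once the queue is empty
    dad = queue.pop(0)          # Step 04: remove the front entry
    # Step 05: fetch this row's father_id, collect and enqueue it
    row = cur.execute("SELECT father_id FROM ft_individual WHERE id = ?",
                      (dad,)).fetchone()
    if row and row[0] is not None:
        to_update.add(row[0])
        queue.append(row[0])    # Step 06: loop back to Step 03

# Step 07: one pass over the collected ancestors
cur.executemany("UPDATE ft_individual SET total_leg = total_leg + ? WHERE id = ?",
                [(amount, fid) for fid in sorted(to_update)])

print(sorted(to_update))        # ancestors of 12: [10, 11]
```

Because the queue is drained one row at a time and the update happens only once at the end, no InnoDB page is locked repeatedly inside the traversal.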
Give it a Try !!!
UPDATE 2011-12-16 12:18 EDT
Back on October 24, 2011, I wrote an algorithm in the MySQL Stored Procedure language to perform preorder tree traversal: Find highest level of a hierarchical field: with vs without CTEs. Hope it helps !!!
When it comes to databases, less is more.
You are doing the right thing by archiving off old data as this will speed up the performance and reduce the maintenance time of the active database.
Where the archived data lives needs consideration. Keeping it in a separate database instance is beneficial because backups, restores, and other maintenance activities complete separately, reducing the effect on the live data.
You should consider keeping the archived data on a physically different disk from the live data; this avoids contention if both are being used at the same time. Also consider the type of disk the archive is on, as this has cost implications: does it really need to be on a RAID array, or is a single disk plus tape/DVD backup all that is needed? Storage may be relatively cheap, but it is still a cost in dollars, IO, time, and network.
Next, take a look at the archived data. Do you really need to keep all the records? Do you really need to keep all the columns? Would changing the structure of the data result in a smaller disk footprint or faster read queries? Can you summarize the data? Archived data kept for data mining does not always need the same structure as live data.
Keeping each month's data in a separate database may be a good way to go, but there are other options that may make the analysis easier and quicker. These include keeping the data in one database instance but with one table per month, or having one set of archive tables and using table partitioning (read the manuals).
There are two things you could do to eliminate table fragmentation and allocate proper caching resources:
Option 1 : Run OPTIMIZE TABLE on every table. This will shrink each table individually and compute index statistics for all tables.
Option 2 : Run mysqldump on everything and reload the backup. Naturally, each table gets dropped and recreated without table fragmentation, and indexes are rebuilt as well.
You may also want to compute the proper sizes for the InnoDB Buffer Pool and the MyISAM Key Cache. Whatever numbers come out as the recommended sizes, make sure the combined size of innodb_buffer_pool_size and key_buffer_size does not exceed 75% of the RAM installed on the machine.
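The 75%-of-RAM sanity check is simple arithmetic; the sizes below are invented examples, not recommendations:

```python
# Sketch of the 75%-of-RAM check for the two cache settings
GiB = 1024 ** 3
ram = 16 * GiB                       # example machine with 16 GiB RAM
innodb_buffer_pool_size = 10 * GiB   # example setting
key_buffer_size = 1 * GiB            # example setting

combined = innodb_buffer_pool_size + key_buffer_size
limit = 0.75 * ram                   # 12 GiB on this example machine
print(combined <= limit)             # True: 11 GiB fits within 12 GiB
```

If the check fails, shrink whichever cache serves the storage engine you use less, leaving the remaining RAM for connections and the OS.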