So the three questions I'm hoping someone can answer are:
1. Should I be using a CURSOR here, or is there an alternative that will get the same recursive result?
No, you should not. The CURSOR keeps being opened over and over; thinking of the overhead makes me cringe. Personally, I stay away from CURSORs.
2. How can I get the results back in a single result set so that it can be used in the same fashion as a subselect?
3. What is the proper way of using the results from the CALL, given that (at least as far as I've tried) I can't get the sample SELECT statement above to work? I believe this is because I can't use CALL inline, but I'm not sure.
I would like to bail you out of this by suggesting something I learned back in my college days when studying data structures: you can perform tree traversal using a queue. This lets you start at a root and enumerate all the descendants in the tree.
The algorithm goes like this:
- STEP 01 : Start with an empty queue and enqueue the root node
- STEP 02 : Dequeue Node From the Front of the Queue
- STEP 03 : Enqueue All Children of the Latest Node
- STEP 04 : Process Info From the Latest Node
- STEP 05 : If the Queue is Not Empty, Go Back to STEP 02
- STEP 06 : All Done
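The steps above can be sketched in a few lines of Python (an illustrative stand-in, not the code from the linked post), using an adjacency map keyed by parent id in place of the MySQL table:

```python
from collections import deque

def traverse(children, root):
    """Breadth-first traversal of a tree using a queue, following
    STEP 01-06 above -- no programmatic recursion anywhere."""
    order = []
    queue = deque([root])                     # STEP 01: seed the empty queue with the root
    while queue:                              # STEP 05: keep going while the queue is not empty
        node = queue.popleft()                # STEP 02: dequeue from the front
        queue.extend(children.get(node, []))  # STEP 03: enqueue all children of that node
        order.append(node)                    # STEP 04: process the node
    return order                              # STEP 06: all done

# children keyed by parent id, mirroring an id/parent_id table
tree = {1: [2, 3], 2: [4, 5], 3: [6]}
print(traverse(tree, 1))  # [1, 2, 3, 4, 5, 6]
```

Notice that the queue plays the role the call stack would play in a recursive solution, which is exactly why no recursion is needed.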
This allows you to traverse a recursive structure without using Programmatic Recursion; the queue does the bookkeeping that the call stack would otherwise do. At this point, you are probably asking: how do I implement this in SQL?
I wrote a post showing how to script three Stored Procedures that use a loop, so that a single CALL will traverse a table of nodes and their parents:
- GetParentIDByID
- GetAncestry
- GetFamilyTree
The post is Find highest level of a hierarchical field: with vs without CTEs (Oct 24, 2011). It contains the Stored Procedures, already written, which will traverse the following table structure:
+------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| parent_id | int(11) | YES | | NULL | |
| name | varchar(255) | YES | | NULL | |
| notes | text | YES | | NULL | |
+------------+--------------+------+-----+---------+----------------+
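To make the loop-based traversal concrete against that exact table shape, here is a hedged Python/sqlite3 sketch (SQLite standing in for MySQL, and `pctable` is a made-up table name; the Stored Procedures in the linked post perform the same level-by-level expansion in SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pctable (
    id        INTEGER PRIMARY KEY AUTOINCREMENT,
    parent_id INTEGER,
    name      VARCHAR(255),
    notes     TEXT
);
INSERT INTO pctable (id, parent_id, name) VALUES
    (1, NULL, 'root'), (2, 1, 'child A'), (3, 1, 'child B'), (4, 2, 'grandchild');
""")

def get_family_tree(conn, root_id):
    """Collect root_id and all of its descendants, one level per query,
    using a plain Python list as the queue -- no recursion, no CTE."""
    family, frontier = [root_id], [root_id]
    while frontier:
        placeholders = ",".join("?" * len(frontier))
        rows = conn.execute(
            f"SELECT id FROM pctable WHERE parent_id IN ({placeholders})",
            frontier).fetchall()
        frontier = [r[0] for r in rows]   # the children become the next frontier
        family.extend(frontier)
    return family

print(get_family_tree(conn, 1))  # [1, 2, 3, 4]
```

One query per tree level (rather than one per node) is what keeps this cheap compared to the CURSOR approach.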
Please read the code carefully and apply the principles.
Give it a Try !!!
There are multiple design changes to make.
Use InnoDB, not MyISAM. You need row-level locking, not table-level locking. (And set innodb_flush_log_at_trx_commit = 2.) A blog with conversion tips: http://mysql.rjweb.org/doc.php/myisam2innodb . A MyISAM INSERT always blocks a SELECT; an InnoDB INSERT rarely blocks a SELECT.
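The conversion itself is one statement per table (an illustrative sketch; `mytable` is a placeholder, and the blog above covers the caveats to check first):

```sql
-- Convert one table; repeat for each table in the schema
ALTER TABLE mytable ENGINE=InnoDB;

-- The durability trade-off mentioned above, settable at runtime
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
```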
Rethink your counters. You have reached the "limit" of bumping counters on the fly. Instead of bumping them one at a time, batch them. One approach is to write the actions to a table, then do batch operations on it to update the various counters.
At 200/sec or more, I recommend ping-ponging two staging tables, as described here: http://mysql.rjweb.org/doc.php/staging_table Note that you can also batch the INSERTs with this technique, thereby speeding them up considerably.
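A minimal sketch of the staging idea (Python/sqlite3 standing in for MySQL; the `counters`/`staging` table and column names are made up for illustration): each action is a cheap append to a staging table, and the counters are then updated in one batched statement instead of one UPDATE per action.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE counters (item_id INTEGER PRIMARY KEY, hits INTEGER NOT NULL);
CREATE TABLE staging  (item_id INTEGER NOT NULL);
INSERT INTO counters VALUES (1, 0), (2, 0);
""")

# Fast path: each action is a cheap append, never an UPDATE on a hot row.
for item in (1, 1, 2, 1):
    conn.execute("INSERT INTO staging (item_id) VALUES (?)", (item,))

# Periodic batch: fold all staged actions into the counters, then clear.
conn.executescript("""
UPDATE counters SET hits = hits +
    (SELECT COUNT(*) FROM staging WHERE staging.item_id = counters.item_id);
DELETE FROM staging;
""")

print(conn.execute("SELECT item_id, hits FROM counters ORDER BY item_id").fetchall())
# [(1, 3), (2, 1)]
```

The ping-pong variant described at the link uses two staging tables, swapping which one receives inserts while the other is being folded in, so writers are never blocked by the batch step.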
Get rid of all the triggers, unless you can explain something that won't be efficiently covered by the above suggestions.
180s for an insert -- that is what can happen if connections and queries pile up. Your Max_used_connections and Threads_running were probably very high. More than a few dozen for either of those can imply things like "180s".
It would help to know the "flavor" of the application -- Data Warehousing? An MMOG? Search?
EDIT:
Yes, convert all of your tables to InnoDB. Then lower key_buffer_size to 40M and raise innodb_buffer_pool_size to 70% of RAM. (Reference: http://mysql.rjweb.org/doc.php/memory )
max_heap_table_size and tmp_table_size, currently at 512M, are dangerously high, especially when hundreds of queries are running at the same time. Perhaps no more than 1% of RAM is wise. This change may have no impact, or it may slow down large, complex SELECTs. But it should help prevent swapping, which could be the real problem you are experiencing. (Swapping is terrible for MySQL.)
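Put together, the suggested settings might look like this in my.cnf (illustrative values only; "70% of RAM" and "1% of RAM" assume a server dedicated to MySQL, here sized for an 8GB box -- adjust to your machine):

```ini
[mysqld]
# MyISAM index cache: can be small once everything is InnoDB
key_buffer_size                = 40M
# InnoDB cache for data + indexes: ~70% of RAM
innodb_buffer_pool_size        = 5600M
# Durability trade-off from the InnoDB advice above
innodb_flush_log_at_trx_commit = 2
# In-memory temp table caps: ~1% of RAM to avoid swapping
max_heap_table_size            = 80M
tmp_table_size                 = 80M
```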
Best Answer
There are two types of "caching" in MySQL:
- the MyISAM key cache (sized by key_buffer_size), which holds index blocks, and
- the InnoDB buffer pool (sized by innodb_buffer_pool_size), which holds both data and index pages.
Both have configurable limits.
"Cache" implies that a finite amount of space is reserved. When you "run out" of the space, something is bumped out of cache to make room for the new item. The one bumped out is usually the "least recently used", which, without other info, is the least likely to be needed again soon.
So, I don't see what the problem is.
On the other hand, maybe you are asking a different question... With the Query Cache, the results of SELECTs are saved in it: this cache keeps the queries and their result sets. (Again, when it fills up, old entries are bumped out to make room for new ones.) A major drawback of the QC is that when any write occurs to a table, all entries in the QC for that table are purged. So, for tables that are frequently written to, the QC is virtually useless. For the few systems that can effectively use the QC, here are my recommendations:
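As background for those recommendations, the purge-on-write drawback is easy to model with a toy Python sketch (illustrative only, not MySQL internals): entries remember which table they read from, and any write to that table throws all of them out.

```python
class ToyQueryCache:
    """Toy model of the MySQL Query Cache: result sets keyed by query text,
    with every entry for a table purged whenever that table is written to."""
    def __init__(self):
        self.entries = {}   # query text -> (table name, result set)

    def store(self, query, table, result):
        self.entries[query] = (table, result)

    def lookup(self, query):
        hit = self.entries.get(query)
        return hit[1] if hit else None

    def on_write(self, table):
        # Any INSERT/UPDATE/DELETE on `table` purges all of its cached SELECTs.
        self.entries = {q: v for q, v in self.entries.items() if v[0] != table}

qc = ToyQueryCache()
qc.store("SELECT COUNT(*) FROM hits", "hits",  [(42,)])
qc.store("SELECT name FROM users",    "users", [("alice",)])
qc.on_write("hits")                            # one write to a hot table...
print(qc.lookup("SELECT COUNT(*) FROM hits"))  # None -- purged
print(qc.lookup("SELECT name FROM users"))     # [('alice',)] -- unaffected
```

A table written to 200 times per second would purge its cached SELECTs 200 times per second, which is why the QC is useless for frequently written tables.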