Basically, I think you should just get the max date, if that is all you are looking for, using your existing filters. You already have STATUS (= 'COMP') and WONUM (the join column). If you needed the whole record from this table, or the logic were more complicated than this, I would recommend Oracle's analytic functions with OVER (PARTITION BY ...) logic to filter by the max date.
SELECT *
FROM WORKORDER
LEFT OUTER JOIN (SELECT WONUM AS STATUSWONUM
                      , STATUS
                      , MAX(CHANGEDATE) AS STATUSCHANGEDATE
                   FROM WOSTATUSHISTORY
                  WHERE STATUS = 'COMP'
                  GROUP BY WONUM, STATUS) LASTCOMPLETE
ON ( WORKORDER.WONUM = LASTCOMPLETE.STATUSWONUM )
;
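If you did need the full latest row per work order, the analytic-function approach mentioned above can be sketched like this. This is illustrative only: SQLite stands in for Oracle, and the sample table contents are made up, but ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...) behaves the same way in Oracle:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE WOSTATUSHISTORY (WONUM TEXT, STATUS TEXT, CHANGEDATE TEXT);
INSERT INTO WOSTATUSHISTORY VALUES
  ('W1','COMP','2014-01-01'), ('W1','COMP','2014-06-01'),
  ('W1','APPR','2014-07-01'), ('W2','COMP','2014-03-15');
""")

# Rank each COMP row within its work order by CHANGEDATE descending,
# then keep only the top-ranked (most recent) row per WONUM.
rows = con.execute("""
SELECT WONUM, STATUS, CHANGEDATE
FROM (SELECT WONUM, STATUS, CHANGEDATE,
             ROW_NUMBER() OVER (PARTITION BY WONUM
                                ORDER BY CHANGEDATE DESC) AS RN
      FROM WOSTATUSHISTORY
      WHERE STATUS = 'COMP') x
WHERE RN = 1
ORDER BY WONUM
""").fetchall()
```

Unlike the GROUP BY version, this returns every column of the winning row, not just the grouped ones.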
Why generate a temporary table that drops on disconnect? You would have to reload the table of ids each time, which requires extra code to manage the emptying and reloading. If you create a subquery that limits to a batch of ids at a time, you can join it to cache_table.
Just create an inline query for the first 1000 rows, then join:
SELECT ct.column_names FROM
(SELECT id FROM comments WHERE user_id = 123 LIMIT 1000) c
LEFT JOIN cache_table ct ON c.id = ct.id;
If you have to paginate, adjust the inline query to paginate on the ids, then do the join last:
SELECT ct.column_names FROM
(SELECT id FROM comments WHERE user_id = 123 LIMIT 0,100) c
LEFT JOIN cache_table ct ON c.id = ct.id;
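The paginate-the-ids-first, join-last pattern can be exercised end to end like this (a sketch using Python's sqlite3 with made-up table contents; the table and column names mirror the queries above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE comments (id INTEGER PRIMARY KEY, user_id INTEGER);
CREATE TABLE cache_table (id INTEGER PRIMARY KEY, payload TEXT);
""")
# 300 comments for user 123, each with a matching cache_table row.
con.executemany("INSERT INTO comments VALUES (?, ?)",
                [(i, 123) for i in range(1, 301)])
con.executemany("INSERT INTO cache_table VALUES (?, ?)",
                [(i, f"row-{i}") for i in range(1, 301)])

def page(offset, size=100):
    # Derive the page of ids in the inline query first,
    # then join to cache_table last.
    return con.execute("""
        SELECT ct.payload
        FROM (SELECT id FROM comments
              WHERE user_id = ? ORDER BY id
              LIMIT ? OFFSET ?) c
        JOIN cache_table ct ON c.id = ct.id
        ORDER BY ct.id""", (123, size, offset)).fetchall()

first = page(0)     # rows 1..100
second = page(100)  # rows 101..200
```

The join only ever touches one page of ids, which is the point of keeping the LIMIT inside the inline query.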
SIDE NOTE: I highly advise against creating temporary InnoDB tables. They are much harder to clean up after a crash because of leftover entries in the ibdata1 file (home of the data dictionary). MyISAM temporary tables are easy to clean up: simply delete the .MYD and .MYI files in the /tmp folder.
UPDATE 2014-12-22 18:19 EST
It sounds like you want Dynamic SQL based on the row count of tmp:
CREATE TEMPORARY TABLE IF NOT EXISTS tmp ENGINE=MyISAM
SELECT id FROM comments WHERE user_id = 123;
SELECT COUNT(1) INTO @reccount FROM tmp;
SET @pg_origin = ???;
SET @pg_offset = 100;
SET @q1 = 'SELECT column_names FROM comments c JOIN tmp t ON c.id = t.id';
SET @q2 = CONCAT('SELECT column_names FROM comments c JOIN cache_table ct ON c.id = ct.id LIMIT ',@pg_origin,',',@pg_offset,';');
SET @sql = IF(@reccount < 1000,@q1,@q2);
PREPARE s FROM @sql; EXECUTE s; DEALLOCATE PREPARE s;
I still strongly recommend using MyISAM for the temp table, since the row count is written in the header of the .MYD file (O(1) execution time to get the count), whereas an InnoDB temp table requires physically counting the rows (O(n) execution time to get the count).
As for the pagination, all you would need is to set @pg_origin to where retrieval should begin.
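The same count-once-then-choose logic can live in application code instead of MySQL Dynamic SQL. A minimal sketch (Python with sqlite3 and hypothetical data; the 1000-row threshold mirrors the IF(@reccount < 1000, ...) above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE comments (id INTEGER PRIMARY KEY, user_id INTEGER);
CREATE TABLE cache_table (id INTEGER PRIMARY KEY, payload TEXT);
""")
con.executemany("INSERT INTO comments VALUES (?, ?)",
                [(i, 123) for i in range(1, 51)])
con.executemany("INSERT INTO cache_table VALUES (?, ?)",
                [(i, f"row-{i}") for i in range(1, 51)])

# Equivalent of SELECT COUNT(1) INTO @reccount: fetch the count once
# and keep it around (e.g. in the web session) between pages.
reccount = con.execute(
    "SELECT COUNT(*) FROM comments WHERE user_id = 123").fetchone()[0]

Q_SMALL = """SELECT ct.payload FROM comments c
             JOIN cache_table ct ON c.id = ct.id
             WHERE c.user_id = 123 ORDER BY ct.id"""
Q_PAGED = Q_SMALL + " LIMIT 100 OFFSET ?"

# Equivalent of SET @sql = IF(@reccount < 1000, @q1, @q2).
if reccount < 1000:
    rows = con.execute(Q_SMALL).fetchall()
else:
    rows = con.execute(Q_PAGED, (0,)).fetchall()
```

With only 50 matching rows, this takes the small-result branch and returns everything in one shot.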
As for InnoDB temp tables, sometimes they do not properly drop on disconnect and leave pigeon holes in ibdata1. I wrote about this problem about 2.5 years ago.
UPDATE 2014-12-23 13:08 EST
Your last comment:

Can you think of some other suggestions besides UPDATE 2014-12-22 18:19 EST? Is it possible to somehow eliminate the use of a temporary table?

If you look at UPDATE 2014-12-22 18:19 EST, you'll notice that it only creates and loads the table once, because I use CREATE TEMPORARY TABLE IF NOT EXISTS tmp instead of CREATE TEMPORARY TABLE tmp.
If you know user_id 123 has new comments, you would drop, recreate, and reload the temp table.
Even if you don't want the temp table at all, you still need the count. So get the count once, carry it in a session variable from page to page, and then apply the Dynamic SQL against that session count rather than maintaining the table.
You need to try the following yourself multiple times to see whether you get any improvement in performance along with correct results. I tried different approaches, and the following gave good performance in my case.
A ROWID is made up of

OOOOOOFFFBBBBBBRRR

where O is the data object id, F is the relative file id, B is the block id, and R is the row number. Starting with 11g, a new mechanism was introduced: if the next rowid is in the same block as the current row, then the current block can be reused, and hence "consistent gets" increase. I hope this gives you direction in solving your issue.
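To make the layout concrete, here is a small decoder for the 18-character extended ROWID format described above. This is a sketch: the base-64 alphabet (A-Z, a-z, 0-9, +, /) and the 6/3/6/3 field widths are the standard Oracle extended ROWID layout, but the sample value is made up:

```python
import string

# Oracle's base-64 alphabet for ROWID characters: A-Z, a-z, 0-9, +, /
ALPHABET = (string.ascii_uppercase + string.ascii_lowercase
            + string.digits + "+/")

def decode_group(chars):
    """Decode one base-64 field of a ROWID into an integer."""
    value = 0
    for ch in chars:
        value = value * 64 + ALPHABET.index(ch)
    return value

def decode_rowid(rowid):
    """Split OOOOOOFFFBBBBBBRRR into its four numeric components."""
    return {
        "object": decode_group(rowid[0:6]),    # data object id
        "file":   decode_group(rowid[6:9]),    # relative file id
        "block":  decode_group(rowid[9:15]),   # block id within the file
        "row":    decode_group(rowid[15:18]),  # row number within the block
    }

# Hypothetical sample value, chosen so each field is easy to verify.
parts = decode_rowid("AAAAABAABAAAAACAAD")
```

Seeing the block field explicitly makes it clear when two consecutive rowids land in the same block, which is what the block-reuse behavior above depends on.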
Cheers!!