Can you add indexes? I would first try adding an index on (col, userid). Ten seconds is far too long for a 30M-row table.
Here is your original query from the question:
SELECT e.*, MAX(m.datetime) AS unread_last, COUNT(m.id) AS unread
FROM TAB_EVENT e
LEFT JOIN TAB_MESSAGE m ON e.id=m.event_id
WHERE ( m.`read` IS NULL OR m.`read` = 0)
GROUP BY e.id
ORDER BY m.datetime DESC, e.id ASC
LIMIT 10;
Maybe try refactoring the query so that it executes in this sequence:
- only collect necessary columns from TAB_MESSAGE
- apply LIMIT 10 against the collected rows from TAB_MESSAGE
- run the JOIN
- apply the MAX() and COUNT() last
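Here is a minimal sketch of that "limit first, join after" sequence, using SQLite through Python's sqlite3 module as a stand-in for MySQL. The table and column names mirror the question, but the sample data and the extra title column are invented for illustration:

```python
import sqlite3

# Tiny in-memory stand-in for TAB_EVENT / TAB_MESSAGE (schema is assumed).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TAB_EVENT (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE TAB_MESSAGE (
    id INTEGER PRIMARY KEY,
    event_id INTEGER,
    datetime TEXT,
    read INTEGER
);
INSERT INTO TAB_EVENT VALUES (1,'a'),(2,'b'),(3,'c');
INSERT INTO TAB_MESSAGE VALUES
 (10,1,'2012-02-20 10:00',0),
 (11,1,'2012-02-20 11:00',NULL),
 (12,2,'2012-02-20 12:00',0),
 (13,3,'2012-02-20 09:00',1);   -- already read: filtered out
""")

# Steps 1+2: collect only (id, event_id, datetime) for unread rows,
# newest first, and apply the LIMIT before touching TAB_EVENT at all.
# Steps 3+4: join the small derived row set, then aggregate per event.
rows = con.execute("""
SELECT e.*, MAX(m.datetime) AS unread_last, COUNT(m.id) AS unread
FROM TAB_EVENT e
LEFT JOIN (
    SELECT id, event_id, datetime FROM TAB_MESSAGE
    WHERE read IS NULL OR read = 0
    ORDER BY datetime DESC
    LIMIT 10
) m ON e.id = m.event_id
GROUP BY e.id
ORDER BY unread_last DESC, e.id ASC
""").fetchall()
for r in rows:
    print(r)
```

Event 3 still appears (with a NULL unread_last and a count of 0) because of the LEFT JOIN; the already-read message never enters the join at all. Note that in MySQL, unlike SQLite, `read` is a reserved word and must be backtick-quoted.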
Here is what I am proposing:
SELECT e.*, MAX(m.datetime) AS unread_last, COUNT(m.id) AS unread
FROM
(
    SELECT id, event_id, datetime FROM TAB_MESSAGE
    WHERE `read` IS NULL OR `read` = 0
    ORDER BY datetime DESC
    LIMIT 10
) m
LEFT JOIN TAB_EVENT e
ON e.id = m.event_id
GROUP BY e.id
ORDER BY unread_last DESC, e.id ASC;
Give it a Try !!!
UPDATE 2012-02-21 17:06 EDT
SELECT e.*, MAX(m.datetime) AS unread_last, COUNT(m.id) AS unread
FROM TAB_EVENT e
LEFT JOIN
(
    SELECT id, event_id, datetime FROM TAB_MESSAGE
    WHERE `read` IS NULL OR `read` = 0
    ORDER BY datetime DESC
    LIMIT 10
) m
ON e.id = m.event_id
GROUP BY e.id
ORDER BY unread_last DESC, e.id ASC;
@Sebastian, I put the query back in the original join order. Please try this as well !!!
UPDATE 2012-02-21 17:11 EDT
Make sure the read and datetime columns are covered together by a compound index:
ALTER TABLE TAB_MESSAGE ADD INDEX read_datetime_ndx (`read`, datetime);
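You can check that the compound index actually gets picked up by looking at the query plan. MySQL's EXPLAIN output looks different, but the idea can be sketched with SQLite's EXPLAIN QUERY PLAN: a (read, datetime) index satisfies both the equality filter and the sort. (Be aware that the OR `read` IS NULL branch may still prevent index use in either engine.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TAB_MESSAGE (
    id INTEGER PRIMARY KEY,
    event_id INTEGER,
    datetime TEXT,
    read INTEGER
);
CREATE INDEX read_datetime_ndx ON TAB_MESSAGE (read, datetime);
""")

# The plan should show a SEARCH using read_datetime_ndx rather than a
# full table scan, with no separate sort step for ORDER BY datetime DESC.
plan = con.execute("""
EXPLAIN QUERY PLAN
SELECT id, event_id, datetime FROM TAB_MESSAGE
WHERE read = 0
ORDER BY datetime DESC
""").fetchall()
for row in plan:
    print(row)
```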