While there is no such command as UPDATE DELAYED, you can create this mechanism in a rather strange way. Please follow my rather crazy train of thought...
SCENARIO : Let's say you have just a single DB Server called M1
- You want to fire UPDATEs at a single table called mydb.timing
- You don't want UPDATEs to log-jam the table
SOLUTION : Serialize the UPDATEs
MECHANISM : Would you believe MySQL Replication? That's right, I said MySQL Replication.
You are probably saying right now, "How in the world would having a Slave create UPDATE DELAYED?" Here is how you do it:
STEP 01 : Using another DB Server (call it M2), create a MySQL Instance with binary logging enabled.
STEP 02 : mysqldump all user databases with no data, just structures (call it MySQLSchema.sql)
STEP 03 : Edit MySQLSchema.sql, change all ENGINE=MyISAM into ENGINE=BLACKHOLE
STEP 04 : Edit MySQLSchema.sql, change all ENGINE=InnoDB into ENGINE=BLACKHOLE
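STEPS 02-04 can be done in a single pipeline. This is just a sketch: the user, password prompt, and output filename are placeholders, and the sed patterns assume the dump writes the engine clause as ENGINE=MyISAM / ENGINE=InnoDB:

```shell
# Dump structures only (no data) and rewrite every MyISAM/InnoDB
# engine clause to BLACKHOLE in one pass, ready to load on M2.
mysqldump --no-data --all-databases -u root -p \
  | sed -e 's/ENGINE=MyISAM/ENGINE=BLACKHOLE/g' \
        -e 's/ENGINE=InnoDB/ENGINE=BLACKHOLE/g' \
  > MySQLSchema.sql
```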
STEP 05 : Run source MySQLSchema.sql on M2.
STEP 06 : Run RESET MASTER; on M2
STEP 07 : Add replicate-do-table=mydb.timing to /etc/my.cnf on M1
STEP 08 : service mysql restart on M1
STEP 09 : Set up replication from M2 to M1 (M2 is the Master, M1 is the Slave)
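STEP 09 boils down to pointing M1 at M2's binary log. A minimal sketch; the host, replication user, password, and log coordinates are placeholders you would take from running SHOW MASTER STATUS on M2:

```sql
-- Run on M1 (the Slave). All values below are placeholders.
CHANGE MASTER TO
    MASTER_HOST = 'M2',
    MASTER_USER = 'repluser',
    MASTER_PASSWORD = 'replpass',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 107;
START SLAVE;
```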
STEP 10 : In your application, fire your code only at M2:

set @value = 4;
set @name = 'myAverageValue';
UPDATE `mydb`.`timing` SET
    `max` = GREATEST(`max`, @value),
    `average` = `average` + ((@value - `average`) / (`count` + 1)),
    `count` = `count` + 1
WHERE `name` = @name;
That's it. No locking can ever happen on M2 since mydb.timing on M2 uses the BLACKHOLE Storage Engine (a.k.a. the /dev/null Storage Engine). Having MySQL Replication from M2 to M1 forms a queue that serializes the UPDATEs. The delay appears in the form of MySQL Replication's asynchronous operation. Replication lag (in this instance only) is totally acceptable since you are looking for UPDATE DELAYED.
Give it a Try !!!
CAVEAT
This turns MySQL Replication upside down because the production server becomes a Slave to a BlackHole Server. With this strange topology, you can also implement DELETE DELAYED.
Make sure you include this option
[mysqld]
expire-logs-days=1
on M2. Otherwise, lots of UPDATEs will make for lots of binlogs.
This is because you're asking it to skip the first 150000 records. It has to read through all of them first before it gets to the 5 that you actually want:
EXPLAIN SELECT id,category_id,sha FROM incidents LIMIT 5 OFFSET 150000;
If you put an ORDER BY clause on it, then it's possible that it could jump to the end of the index (if it knows how many entries there are in it), or at least do an index skip scan to jump through it faster (vs. a sequential scan of the table rows). Either way, though, there would be more work involved than just getting the first couple of records.
To do something like this efficiently, you should specify an exact range that you want to query rather than just saying "#150000 - #150005":

SELECT id, category_id, sha
FROM incidents
WHERE id BETWEEN 150000 AND 150005;

That way the query plan can use an index to jump to that specific subsection of the table immediately.
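When the ids are not contiguous (e.g. rows have been deleted), a common alternative is keyset pagination: remember the last id you served and seek past it, which still lets the index do the jump. A sketch using the table from the question; @last_seen_id is a hypothetical variable holding the highest id from the previous page:

```sql
-- The PRIMARY KEY index seeks directly past @last_seen_id
-- instead of reading and discarding 150000 rows.
SELECT id, category_id, sha
FROM incidents
WHERE id > @last_seen_id
ORDER BY id
LIMIT 5;
```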
Best Answer
You have it almost right: grouping by start would only group rows with the exact same date and time, but when you group by time_label, it takes the individual days without the time.