INSERT INTO stats.cps
(time_stamp,entries,cust_id)
SELECT start_time, COUNT(*), c_id FROM gb.str
WHERE start_time BETWEEN '$DATE10' AND '$DATE'
GROUP BY start_time, c_id
ORDER BY start_time ASC
I had this query running on a machine remote to the DB server, and start_time wasn't indexed, so it was taking a while. On the performance graphs I noticed spikes of increased load at the times it was executing, as expected, and the spikes stopped appearing when I stopped the script that was executing the query.

Another application was misbehaving at those exact times, but that application uses the gb.str table, not the stats database where the data was inserted. The developers of that application think this query was the reason for their application's poor performance, but my assumption is that gb.str shouldn't have been locked. I ran long SELECT queries against gb.str to see whether that application takes a performance hit, but it doesn't.

Would the query also lock gb.str to some extent? All the tables are TokuDB, which came with MariaDB (10.0.22-MariaDB).
Best Answer
The way I interpret Transactions and Concurrency on the TokuDB wiki, inserts by the third-party application would need to update the index keys, and those keys may happen to be locked for the very rows that the query is selecting at the same time.
The way I solved it in my case was to make the query run over time slices slightly in the past, so it cannot coincide with that application (which is actually adding the data).
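As a rough sketch of that approach, the wrapper script could compute the window boundaries shifted back by a fixed lag before substituting them into the query. The variable names `DATE`/`DATE10` come from the question; the 15-minute lag, the 10-minute window length, and the use of GNU `date -d` are my assumptions, not the original setup.

```shell
#!/bin/bash
# Sketch: shift the query window into the past so the SELECT on gb.str
# never overlaps rows the inserting application is still writing.
LAG_MIN=15      # assumed safety lag behind "now"
WINDOW_MIN=10   # assumed window length (matches the DATE10..DATE span)

# Upper bound: LAG_MIN minutes ago; lower bound: WINDOW_MIN minutes before that.
DATE=$(date -u -d "-${LAG_MIN} minutes" '+%Y-%m-%d %H:%M:%S')
DATE10=$(date -u -d "-$((LAG_MIN + WINDOW_MIN)) minutes" '+%Y-%m-%d %H:%M:%S')

echo "querying rows with start_time between '$DATE10' and '$DATE'"
```

Since both boundaries lie safely behind the newest inserts, the aggregate SELECT no longer touches rows whose index keys the other application may still hold locked.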