Well, it might not be a complete in-depth analysis, but from a developer's perspective I can say it depends on how your server handles requests and executes the application: synchronously or asynchronously.
If your server handles requests synchronously, then there's nothing to worry about. In this case, given two hypothetical users (User A and User B) making requests at the same time, your server will attend to only one request first. All the processing for those hypothetical requests happens at different moments, so there will be no information-crossover madness.
But if your application is running on many server instances, or on a server that handles requests asynchronously, I wouldn't suggest relying on the presented method to count rows.
Even in cases where my assumption is not 100% accurate, for the sake of sanity in your application I would suggest using the good old bulletproof double-query solution:
SELECT [fields] FROM [tables] WHERE [filter]
and then our count query...
SELECT count(*) as total FROM [tables] WHERE [filter]
There's no way to go wrong, and, as @DTest added, this two-query approach can be faster in some environments (see the link in his comment).
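As a concrete sketch of the two queries (the articles table and its columns are hypothetical, made up just for illustration):

-- Page 3 of the results, 10 rows per page (hypothetical table/columns)
SELECT id, title
FROM articles
WHERE published = 1
ORDER BY created_at DESC
LIMIT 10 OFFSET 20;

-- Separate total for the same filter
SELECT COUNT(*) AS total
FROM articles
WHERE published = 1;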
Oh, and I took a quick read through the MySQL documentation on Information Functions, and there it was, in a tiny little notice:
Important:
FOUND_ROWS() is not replicated reliably, and should not be used with
databases that are to be replicated.
from: http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_row-count
First, from looking at your existing indexes and comparing them to your example query, you are missing an index on just accountid. MySQL matches composite indexes left-most first, meaning you can have a composite index like this:
KEY `contactaccounts` (`contactid`,`accountid`,`journalcode`) USING BTREE,
and run a query that looks for contactid, and the contactaccounts index would be a potential index*. However, a query on accountid will not utilize the index, because accountid is not the left-most column.
If you never search for contactid without an accountid, I would create the index like this:
DROP INDEX `contactaccounts` ON `tbl_sessions_2012`;
CREATE INDEX `accountscontact` ON `tbl_sessions_2012` (`accountid`,`contactid`,`journalcode`);
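To check whether the new index would actually be picked up, you could run EXPLAIN against the query shape in question (a sketch; the literal value is made up):

-- Hypothetical check: confirm the optimizer considers `accountscontact`
EXPLAIN
SELECT *
FROM tbl_sessions_2012
WHERE accountid = 123;

The possible_keys and key columns of the output show which indexes the optimizer considered and which one it chose.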
Now, analyzing your other indexes: the first of each pair below is redundant under the left-most rule and can be dropped in favor of the second:
KEY `contactaccounts` (`contactid`,`accountid`,`journalcode`) USING BTREE
KEY `contactaccountloginmonths` (`contactid`,`accountid`,`journalcode`,`logintypeid`,`actionTime`) USING BTREE
KEY `contactaccountcollections` (`contactid`,`accountcollectionid`,`journalcode`) USING BTREE
KEY `contactaccountcollectionloginmonths` (`contactid`,`accountcollectionid`,`journalcode`,`logintypeid`,`actionTime`) USING BTREE
KEY `organisationaccounts` (`organisationid`,`accountid`) USING BTREE
KEY `organisationaccountloginmonths` (`organisationid`,`accountid`,`journalcode`,`logintypeid`,`actionTime`) USING BTREE
KEY `organisationaccountcollection` (`organisationid`,`accountcollectionid`,`journalcode`) USING BTREE
KEY `organisationaccountcollectionsloginmonths` (`organisationid`,`accountcollectionid`,`journalcode`,`logintypeid`,`actionTime`) USING BTREE
By keeping the duplicates you are using up a lot of index space.
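If you decide to remove the redundant prefixes, the cleanup could look like this (a sketch only; contactaccounts was already replaced above, and you should verify each index is genuinely unused by your workload before dropping it):

-- Drop the left-most-prefix duplicates (verify against your workload first)
DROP INDEX `contactaccountcollections` ON `tbl_sessions_2012`;
DROP INDEX `organisationaccounts` ON `tbl_sessions_2012`;
DROP INDEX `organisationaccountcollection` ON `tbl_sessions_2012`;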
* I say potential because there is still the possibility that your index will not be used, due to the number of rows MySQL has to scan through. As your accountid IN () list grows, MySQL will determine that it's faster just to do a full table scan regardless of the index.
There are several options for seeing the queries your MySQL server is executing.
1# Probably the easiest is to turn on general query logging:
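A minimal way to do that at runtime, assuming you have the privileges to set global variables (the file path here is hypothetical):

-- Enable the general query log without restarting the server
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';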
Then you can see all the queries being executed in the file.
2# Another option is to use the slow query log with long_query_time=0, which you can manage at the session level, as sketched below. The slow log also records a bit more information than the general log.
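A minimal sketch (note that slow_query_log itself is a global setting, while long_query_time can be set per session):

-- Log every statement in this session, not just slow ones
SET GLOBAL slow_query_log = 1;
SET SESSION long_query_time = 0;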
3# You can use tcpdump and percona-toolkit to retrieve the queries:
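A typical pipeline (the interface name and file names are assumptions) captures the MySQL traffic and then summarizes it with pt-query-digest:

# Capture traffic on the MySQL port, then digest it (hypothetical interface/files)
tcpdump -i eth0 port 3306 -s 65535 -x -nn -q -tttt > mysql.tcp
pt-query-digest --type tcpdump mysql.tcp > queries_report.txt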
4# Profiling also works:
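A per-session sketch (SHOW PROFILE is deprecated in recent MySQL versions but still functional; the SELECT is just a placeholder statement):

SET profiling = 1;
SELECT COUNT(*) FROM some_table;  -- hypothetical statement to profile
SHOW PROFILES;                    -- lists profiled statements with query IDs
SHOW PROFILE FOR QUERY 1;         -- detailed breakdown for the first one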
This has the biggest overhead and is also, in my opinion, the most cumbersome.