I don't see a lot of opportunity for improvement.
The index you added was probably a big help: it's being used for the range match on the WHERE clause (type => range, key => tran_date), and it's acting as a covering index (Extra => Using index), which avoids the need to seek into the table to fetch the row data.
But since you're using functions to construct the financial_year value for the GROUP BY, neither "Using filesort" nor "Using temporary" can be avoided. Those aren't really the bottleneck, though. The main remaining cost is that you're evaluating MONTH(tran_date) 346,485 times and YEAR(tran_date) at least that many times... and even so, ~700,000 function calls in one second doesn't seem too bad.
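For reference, a query of roughly this shape would produce that plan. This is only a sketch: everything except the `transactions1` table and `tran_date` column is an assumption, including the `amount` column and the April 1 financial-year boundary.

```sql
-- Hypothetical reconstruction of the dashboard query.
-- `amount` and the April 1 fiscal-year start are assumptions.
SELECT
    CONCAT(YEAR(tran_date) - (MONTH(tran_date) < 4), '-',
           YEAR(tran_date) - (MONTH(tran_date) < 4) + 1) AS financial_year,
    SUM(amount) AS total
FROM transactions1
WHERE tran_date >= '2010-04-01'
GROUP BY financial_year;
```

It's exactly those YEAR()/MONTH() expressions in the GROUP BY that force the temporary table and filesort, since the optimizer can't group on an index over a derived value.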
Plan B: I am definitely not a fan of storing redundant data, and I'm dead-set against making the application responsible for maintaining it... but one option I might be tempted to try would be to create a dashboard_stats_by_financial_year table, and use after-insert/update/delete triggers on the transactions1 table to manage keeping those stats current.
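A minimal sketch of that approach, for the insert case only. The stats table layout, the `amount` column, and the April financial-year boundary are all assumptions; the update and delete triggers would adjust the counters in the same way.

```sql
-- Hypothetical stats table; column names and types are assumptions.
CREATE TABLE dashboard_stats_by_financial_year (
  financial_year CHAR(9)       NOT NULL PRIMARY KEY,
  tran_count     BIGINT        NOT NULL DEFAULT 0,
  tran_total     DECIMAL(14,2) NOT NULL DEFAULT 0
);

DELIMITER $$
CREATE TRIGGER transactions1_ai
AFTER INSERT ON transactions1
FOR EACH ROW
BEGIN
  -- Upsert the row's contribution into its financial year's bucket.
  INSERT INTO dashboard_stats_by_financial_year
         (financial_year, tran_count, tran_total)
  VALUES (CONCAT(YEAR(NEW.tran_date) - (MONTH(NEW.tran_date) < 4), '-',
                 YEAR(NEW.tran_date) - (MONTH(NEW.tran_date) < 4) + 1),
          1, NEW.amount)
  ON DUPLICATE KEY UPDATE
      tran_count = tran_count + 1,
      tran_total = tran_total + NEW.amount;
END$$
DELIMITER ;
```

With that in place, the dashboard query becomes a trivial read of a handful of pre-aggregated rows instead of a scan over hundreds of thousands.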
That option has a cost, of course -- adding to the amount of time it takes to update/insert/delete a transaction... but, waiting > 1200 milliseconds for stats for your dashboard is a cost, too. So it may come down to whether you want to pay for it now or pay for it later.
My problem has been solved. Thanks for all your replies and comments.
The success seems to come from a few things, including:

- Indexing `visits (excluded, website_id, time)`
- Upping `innodb_buffer_pool_size` to 8G
- Upping `sort_buffer_size`, `read_buffer_size`, and `join_buffer_size` to 8M
- Possibly adding `thread_cache_size = 4`, which was suggested by mysqltuner.pl
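For reference, the corresponding my.cnf fragment would look roughly like this (values taken from the list above; the section header is the standard one, everything else is just the settings restated):

```ini
[mysqld]
innodb_buffer_pool_size = 8G
# Note: the three buffers below are allocated per session/operation,
# not globally, so large values multiply across concurrent connections.
sort_buffer_size        = 8M
read_buffer_size        = 8M
join_buffer_size        = 8M
thread_cache_size       = 4
```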
The overall server memory usage has gone up by about 3G (which probably relates to the 3G increase in innodb_buffer_pool_size), but I will just have to compensate for that.
Best Answer
The 16.04 seconds comes from a client-side timer that measures the time that passes between issuing the query and its execution finishing (to be more specific, the wall-clock time between the client calling start_timer() and mysql_end_timer(), something that can lead to hilarious results like the one I got).
The reason you think it took more than 30 minutes to execute is probably that this timer doesn't take into account the time spent writing the output to stdout. You can check that the query itself really takes 16 seconds by doing:
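One way to check, using only the client itself, is the `pager` command to discard the result set so that only the reported query time remains (the SELECT shown is a placeholder for your original query):

```
mysql> pager cat > /dev/null
mysql> SELECT ... ;   -- rerun your original query here
```

Typing `nopager` afterwards restores normal output.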
If you want to measure how much time it takes to write to the standard output, you can do:
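A rough sketch using the shell's `time` on the whole client invocation; the difference between the two runs approximates the time spent writing the rows to the terminal (connection options and the query text are placeholders):

```
# Total time, including printing every row to the terminal:
$ time mysql -u user -p dbname -e "SELECT ... FROM your_table"

# Same query with the output discarded:
$ time mysql -u user -p dbname -e "SELECT ... FROM your_table" > /dev/null
```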
Note: are you sure you want to print a 0 or a 1 for ALL rows of that table? Maybe the `--safe-updates` option and/or using a GUI can help you a bit with your queries?