What is interesting about this situation is that it can be solved using the MyISAM storage engine.
I answered a question like this back in April 2012: How can you have two auto-incremental columns in one table? You need to create one table whose sole purpose is to create work order sequences for each site:
CREATE TABLE site_workorder_seq
(
SiteID int not null,
SiteWorkorderNum int not null auto_increment,
PRIMARY KEY (SiteID,SiteWorkorderNum)
) ENGINE=MyISAM;
Here is a sample loading into this table:
mysql> DROP DATABASE david;
Query OK, 1 row affected (0.01 sec)
mysql> CREATE DATABASE david;
Query OK, 1 row affected (0.00 sec)
mysql> USE david
Database changed
mysql> CREATE TABLE site_workorder_seq
-> (
-> SiteID int not null,
-> SiteWorkorderNum int not null auto_increment,
-> PRIMARY KEY (SiteID,SiteWorkorderNum)
-> ) ENGINE=MyISAM;
Query OK, 0 rows affected (0.01 sec)
mysql> INSERT INTO site_workorder_seq (SiteID) VALUES
-> (1),(1),(2),(3),(3),(3),(3),(4),(4),(4),
-> (5),(5),(4),(2),(2),(2);
Query OK, 16 rows affected (0.00 sec)
Records: 16 Duplicates: 0 Warnings: 0
mysql> SELECT * FROM site_workorder_seq;
+--------+------------------+
| SiteID | SiteWorkorderNum |
+--------+------------------+
| 1 | 1 |
| 1 | 2 |
| 2 | 1 |
| 2 | 2 |
| 2 | 3 |
| 2 | 4 |
| 3 | 1 |
| 3 | 2 |
| 3 | 3 |
| 3 | 4 |
| 4 | 1 |
| 4 | 2 |
| 4 | 3 |
| 4 | 4 |
| 5 | 1 |
| 5 | 2 |
+--------+------------------+
16 rows in set (0.00 sec)
mysql>
Let's look at the last SiteWorkorderNum from each site:
mysql> SELECT SiteID,MAX(SiteWorkorderNum) SiteWorkorderNum
-> FROM site_workorder_seq GROUP BY SiteID;
+--------+------------------+
| SiteID | SiteWorkorderNum |
+--------+------------------+
| 1 | 2 |
| 2 | 4 |
| 3 | 4 |
| 4 | 4 |
| 5 | 2 |
+--------+------------------+
5 rows in set (0.05 sec)
mysql>
Now, suppose you want to get the next SiteWorkorderNum for SiteID 3. You could do this:
INSERT INTO site_workorder_seq (SiteID) VALUES (3);
SELECT MAX(SiteWorkorderNum) INTO @nextworkordernum
FROM site_workorder_seq WHERE SiteID=3;
SELECT @nextworkordernum;
Let's run this and see what happens
mysql> INSERT INTO site_workorder_seq (SiteID) VALUES (3);
Query OK, 1 row affected (0.00 sec)
mysql> SELECT MAX(SiteWorkorderNum) INTO @nextworkordernum
-> FROM site_workorder_seq WHERE SiteID=3;
Query OK, 1 row affected (0.00 sec)
mysql> SELECT @nextworkordernum;
+-------------------+
| @nextworkordernum |
+-------------------+
| 5 |
+-------------------+
1 row in set (0.03 sec)
mysql> SELECT * FROM site_workorder_seq;
+--------+------------------+
| SiteID | SiteWorkorderNum |
+--------+------------------+
| 1 | 1 |
| 1 | 2 |
| 2 | 1 |
| 2 | 2 |
| 2 | 3 |
| 2 | 4 |
| 3 | 1 |
| 3 | 2 |
| 3 | 3 |
| 3 | 4 |
| 3 | 5 |
| 4 | 1 |
| 4 | 2 |
| 4 | 3 |
| 4 | 4 |
| 5 | 1 |
| 5 | 2 |
+--------+------------------+
17 rows in set (0.00 sec)
mysql> SELECT SiteID,MAX(SiteWorkorderNum) SiteWorkorderNum
-> FROM site_workorder_seq GROUP BY SiteID;
+--------+------------------+
| SiteID | SiteWorkorderNum |
+--------+------------------+
| 1 | 2 |
| 2 | 4 |
| 3 | 5 |
| 4 | 4 |
| 5 | 2 |
+--------+------------------+
5 rows in set (0.00 sec)
mysql>
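As a side note, the SELECT MAX(...) step can be skipped entirely. A sketch, assuming standard MySQL behavior: LAST_INSERT_ID() returns the auto_increment value generated by this connection's most recent INSERT, even when the auto_increment column is the second column of a MyISAM composite key, and it is per-connection, so it also avoids the race where another session inserts between your INSERT and your MAX():

```sql
-- Per-connection, so no race with other sessions' inserts
INSERT INTO site_workorder_seq (SiteID) VALUES (3);
SELECT LAST_INSERT_ID() INTO @nextworkordernum;
```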
As long as you keep this one MyISAM table apart from all your InnoDB tables, you can generate work order numbers per site to your heart's content.
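For completeness, here is a sketch of how an InnoDB table holding the actual work orders could consume the generated number. The workorders table, its columns, and the description value are assumptions for illustration, not from the question:

```sql
-- Hypothetical work order table (InnoDB); only the sequence table is MyISAM
CREATE TABLE workorders
(
    SiteID int not null,
    SiteWorkorderNum int not null,
    Description varchar(255),
    PRIMARY KEY (SiteID, SiteWorkorderNum)
) ENGINE=InnoDB;

-- Generate the next number for SiteID 3, then use it
INSERT INTO site_workorder_seq (SiteID) VALUES (3);
SELECT MAX(SiteWorkorderNum) INTO @nextworkordernum
FROM site_workorder_seq WHERE SiteID=3;

INSERT INTO workorders (SiteID, SiteWorkorderNum, Description)
VALUES (3, @nextworkordernum, 'Replace pump at site 3');
```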
I don't see a lot of opportunity for improvement.
The index you added was probably a big help, because it's being used for the range matching on the WHERE clause (type => range, key => tran_date), and it's being used as a covering index (extra => using index), avoiding the need to seek into the table to fetch the row data.
Since you're using functions to construct the financial_year value for the GROUP BY, neither "using filesort" nor "using temporary" can be avoided. Those aren't the real problem, though. The real problem is that you're evaluating MONTH(tran_date) 346,485 times and YEAR(tran_date) at least that many times; still, ~700,000 function calls in one second doesn't seem too bad.
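One way to take those per-row function calls out of the query is to materialize financial_year on the table itself. A sketch, assuming MySQL 5.7+ (for generated columns) and a fiscal year starting in April; both the MySQL version and the boundary month are assumptions, so adjust to your setup:

```sql
-- Stored generated column: computed once per write, not per read,
-- and indexable so the GROUP BY can use the index
ALTER TABLE transactions1
  ADD COLUMN financial_year smallint
    AS (YEAR(tran_date) - (MONTH(tran_date) < 4)) STORED,
  ADD INDEX idx_financial_year (financial_year);
```

With that in place, the dashboard query can group by the plain financial_year column instead of re-deriving it from tran_date on every row.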
Plan B: I am definitely not a fan of storing redundant data, and I'm dead-set against making the application responsible for maintaining it... but one option I might be tempted to try would be to create a dashboard_stats_by_financial_year table, and use after-insert/update/delete triggers on the transactions1 table to manage keeping those stats current.
That option has a cost, of course -- adding to the amount of time it takes to update/insert/delete a transaction... but, waiting > 1200 milliseconds for stats for your dashboard is a cost, too. So it may come down to whether you want to pay for it now or pay for it later.
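A minimal sketch of that trigger-maintained stats table, showing only the insert path; the amount column and the April fiscal-year boundary are assumptions, and update/delete triggers would need matching decrement logic:

```sql
CREATE TABLE dashboard_stats_by_financial_year
(
    financial_year smallint not null primary key,
    tran_count bigint not null default 0,
    tran_total decimal(15,2) not null default 0
);

DELIMITER //
CREATE TRIGGER transactions1_ai AFTER INSERT ON transactions1
FOR EACH ROW
BEGIN
    -- Upsert: create the year's row on first sight, otherwise accumulate
    INSERT INTO dashboard_stats_by_financial_year
        (financial_year, tran_count, tran_total)
    VALUES (YEAR(NEW.tran_date) - (MONTH(NEW.tran_date) < 4), 1, NEW.amount)
    ON DUPLICATE KEY UPDATE
        tran_count = tran_count + 1,
        tran_total = tran_total + NEW.amount;
END//
DELIMITER ;
```

The dashboard then reads the tiny stats table instead of scanning 346,485 transactions.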
Best Answer
There are multiple ways to achieve that:
Repeat the expressions in the new column definition:
Wrap it in another query:
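The two options above might look like this; since the original query isn't shown here, the table, column names, and expressions are assumed for illustration:

```sql
-- Option 1: repeat the expression wherever the derived column is needed
SELECT YEAR(tran_date) - (MONTH(tran_date) < 4) AS financial_year,
       SUM(amount) AS total
FROM transactions1
GROUP BY YEAR(tran_date) - (MONTH(tran_date) < 4);

-- Option 2: wrap it in another query, so the alias can be reused
SELECT financial_year, SUM(amount) AS total
FROM (SELECT YEAR(tran_date) - (MONTH(tran_date) < 4) AS financial_year,
             amount
      FROM transactions1) AS t
GROUP BY financial_year;
```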