I don't see a lot of opportunity for improvement.
The index you added was probably a big help: it's being used for the range match on the WHERE clause (type: range, key: tran_date), and it's being used as a covering index (Extra: Using index), avoiding the need to seek into the table to fetch the row data.
But since you're using functions to construct the financial_year value for the GROUP BY, neither the "using filesort" nor the "using temporary" can be avoided. Those aren't the real problem, though. The real cost is that you're evaluating MONTH(tran_date) 346,485 times and YEAR(tran_date) at least that many times... although ~700,000 function calls in one second doesn't actually seem too bad, so there isn't much left to squeeze out of this query as written.
Plan B: I am definitely not a fan of storing redundant data, and I'm dead-set against making the application responsible for maintaining it... but one option I might be tempted to try would be to create a dashboard_stats_by_financial_year table, and use after-insert/update/delete triggers on the transactions1 table to manage keeping those stats current.
That option has a cost, of course -- adding to the amount of time it takes to update/insert/delete a transaction... but, waiting > 1200 milliseconds for stats for your dashboard is a cost, too. So it may come down to whether you want to pay for it now or pay for it later.
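A minimal sketch of that trigger-maintained stats table, assuming transactions1 has tran_date and amount columns, and using YEAR() as a stand-in for however financial_year is actually derived (all of those names and the year boundary are assumptions; adjust for your schema):

```sql
-- Pre-aggregated stats, kept current by triggers on transactions1.
CREATE TABLE dashboard_stats_by_financial_year (
  financial_year INT            NOT NULL PRIMARY KEY,
  tran_count     BIGINT         NOT NULL DEFAULT 0,
  tran_total     DECIMAL(16,2)  NOT NULL DEFAULT 0
);

DELIMITER //
CREATE TRIGGER trg_transactions1_ai
AFTER INSERT ON transactions1
FOR EACH ROW
BEGIN
  -- Upsert the row's contribution into the matching financial-year bucket.
  INSERT INTO dashboard_stats_by_financial_year
    (financial_year, tran_count, tran_total)
  VALUES (YEAR(NEW.tran_date), 1, NEW.amount)
  ON DUPLICATE KEY UPDATE
    tran_count = tran_count + 1,
    tran_total = tran_total + NEW.amount;
END//
DELIMITER ;
```

The dashboard then reads dashboard_stats_by_financial_year directly, a handful-of-rows lookup instead of a 346,485-row scan. Matching AFTER UPDATE and AFTER DELETE triggers would subtract the old row's contribution (and, for updates, add the new row's) to keep the buckets consistent.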
The fact that pg_typeof doesn't show the typmod is frustrating.
To get the fully qualified type you can query the system catalogs. Let's look at how psql does it, using psql -E:
$ psql -E regress
psql (9.2.1)
Type "help" for help.
regress=> CREATE TABLE typmodtest ( a numeric(16,2), b varchar(32) );
CREATE TABLE
regress=> \d typmodtest
... tons of information printed, including this query which produces the information we want:
SELECT a.attname,
pg_catalog.format_type(a.atttypid, a.atttypmod),
(SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid) for 128)
FROM pg_catalog.pg_attrdef d
WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef),
a.attnotnull, a.attnum,
(SELECT c.collname FROM pg_catalog.pg_collation c, pg_catalog.pg_type t
WHERE c.oid = a.attcollation AND t.oid = a.atttypid AND a.attcollation <> t.typcollation) AS attcollation,
NULL AS indexdef,
NULL AS attfdwoptions
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = '24641' AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum;
This query can clearly be trimmed if you're not interested in the field collation, NOT NULL status, etc. We can also change the attrelid filter from an oid to a regclass filter:
SELECT a.attname,
pg_catalog.format_type(a.atttypid, a.atttypmod)
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = 'typmodtest'::regclass
AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum;
producing:
attname | format_type
---------+-----------------------
a | numeric(16,2)
b | character varying(32)
This will only work on relations. I'm not aware of a reliable, generic way to get the typmod-qualified type for any result set, and it's a serious irritation. Among other things it makes it hard for the PostgreSQL JDBC driver to be compliant.
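For relations, the trimmed catalog query can be wrapped in a small helper so you don't have to retype it (the function name is made up; the body is just the query shown above):

```sql
-- Hypothetical helper: return the typmod-qualified type of one column.
CREATE FUNCTION get_column_type(rel regclass, col name) RETURNS text AS $$
  SELECT pg_catalog.format_type(a.atttypid, a.atttypmod)
  FROM pg_catalog.pg_attribute a
  WHERE a.attrelid = rel
    AND a.attname  = col
    AND a.attnum > 0
    AND NOT a.attisdropped
$$ LANGUAGE sql STABLE;

-- SELECT get_column_type('typmodtest', 'a');  -- numeric(16,2)
```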
Best Answer
Within MySQL, you can get this information by creating a temporary table, then using DESCRIBE on that temporary table. We cannot just use DESCRIBE on the original table, because the column we want to know the type of is a calculated column, not something being directly pulled from a table. Likewise, we cannot use DESCRIBE directly on the query, because DESCRIBE can only be used on tables. Creating a temporary table solves both of those problems.
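A sketch of the technique, using a hypothetical query with a calculated column (the table and column names here are made up for illustration):

```sql
-- Materialize only the query's column definitions, not its rows.
CREATE TEMPORARY TABLE tmp_result AS
SELECT price * quantity AS line_total   -- the calculated column we want the type of
FROM orders
LIMIT 0;                                -- LIMIT 0: metadata only, no data copied

DESCRIBE tmp_result;                    -- shows the type MySQL inferred for line_total

DROP TEMPORARY TABLE tmp_result;
```

LIMIT 0 keeps the CREATE ... SELECT cheap even on large tables, since MySQL still derives the full column definitions without copying any rows.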