The answer lies in fully understanding the TIMEDIFF function and what it returns as an integer.
First, let's consider TIMEDIFF in its native element and what it returns:
mysql> select timediff('0:0:0','1:0:0') as timediff;
+-----------+
| timediff  |
+-----------+
| -01:00:00 |
+-----------+
1 row in set (0.00 sec)
Second, what does TIMEDIFF return in an integer context - done by adding zero:
mysql> select timediff('0:0:0','1:0:0')+0 as timediff2;
+---------------+
| timediff2     |
+---------------+
| -10000.000000 |
+---------------+
1 row in set (0.01 sec)
Now, we use the TIME_TO_SEC function to return what we actually want:
mysql> select time_to_sec(timediff('0:0:0','1:0:0'))/60 as timediff3;
+-----------+
| timediff3 |
+-----------+
|  -60.0000 |
+-----------+
1 row in set (0.00 sec)
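Note that TIME_TO_SEC wants the TIME value itself; handing it the +0 number only works because MySQL re-parses the digits as a time, which is easy to misread. A small sketch of the pitfall (behavior assumed from MySQL's numeric-to-TIME parsing):

```sql
-- MySQL parses the number 130 as the TIME 00:01:30, not as 130 seconds:
select time_to_sec(130) as secs;  -- 90, not 130
```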
The integer value returned by TIMEDIFF is not minutes or seconds, but rather the time's digits packed together as a base-10 number (HHMMSS) - so 01:00:00 becomes 10000 (and -01:00:00 becomes -10000), while 05:05:01 becomes 50501.
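As a quick sketch of that packing (my own query, following the behavior described above):

```sql
-- A five-hour, five-minute, one-second difference comes back with its
-- digits run together, not as a count of any single unit:
select timediff('6:5:1','1:0:0')+0 as packed;  -- 50501, i.e. 05:05:01
```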
Instead of using TIME_TO_SEC, one can also use TIMESTAMPDIFF in place of TIMEDIFF and set the unit to whatever is desired:
mysql> select timestampdiff(MINUTE,'2012-04-13 0:0:0','2012-04-13 1:0:0') as timediff4;
+-----------+
| timediff4 |
+-----------+
|        60 |
+-----------+
1 row in set (0.00 sec)
Instead of MINUTE, you can also use FRAC_SECOND (microseconds), SECOND, HOUR, DAY, WEEK, MONTH, QUARTER, or YEAR.
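The same one-hour interval can be expressed in any of these units; a small sketch:

```sql
-- One hour expressed in different TIMESTAMPDIFF units:
select timestampdiff(SECOND,'2012-04-13 0:0:0','2012-04-13 1:0:0') as secs,  -- 3600
       timestampdiff(HOUR,  '2012-04-13 0:0:0','2012-04-13 1:0:0') as hrs;   -- 1
```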
The time functions in MySQL (in this case, version 5.1) are all described in the manual, which also links to other versions (such as MySQL 5.5).
Yes, hacking into the catalog is bad. Reason #1 is that if you upgrade to a new version and forget to move the hack, things start breaking. Just running pg_dump and loading into the same version on another instance will also lose the hack. There is also always the chance that a new version of Postgres will change so much that your hack is no longer possible, forcing you to go back and re-engineer it.
Overriding with your own function is the correct way to go.
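A minimal sketch of that approach (the schema name `overrides` and the wrapped function are illustrative assumptions, not from the original answer): define your replacement in its own schema and place that schema before pg_catalog on the search_path, since pg_catalog is otherwise searched first implicitly.

```sql
-- Hypothetical example: shadow a built-in with your own function
-- instead of editing the catalog directly.
CREATE SCHEMA overrides;

CREATE FUNCTION overrides.upper(text) RETURNS text AS $$
  -- delegate to the real built-in, then layer any custom behavior on top
  SELECT pg_catalog.upper($1)
$$ LANGUAGE sql IMMUTABLE;

-- pg_catalog must be listed explicitly *after* the override schema,
-- otherwise it is implicitly searched first:
SET search_path = overrides, pg_catalog, "$user", public;
```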
Best Answer
Because the return values can be large or a decimal number, and double precision will accept a wide range of values.
Looking at the other numeric types, you only have decimal/numeric, which carries overhead: you don't know the return scale or precision needed beforehand, so the declaration would have to be very wide.
See the PG docs.
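As a rough illustration (queries are my own, not from the answer): a single 8-byte double precision covers both huge magnitudes and fractional results, while a numeric return type would need its precision and scale fixed in advance.

```sql
-- double precision handles very large and fractional results alike:
SELECT 1e300::double precision AS big,
       2::double precision / 3  AS frac;
```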