Testing this on my own machine, I get 1351749699.69961 as the result on both PostgreSQL 8.4.10 and 9.2.1.
With PostgreSQL 8.4.10, the result changes according to the session timezone. With 9.2, it does not.
This is likely the effect of this change: "Measure epoch of timestamp without time zone from local, not UTC, midnight", which is noted as a 9.2 compatibility change.
The workaround suggested in the change notes is to cast the timestamp to timestamptz first; there is no configuration setting that restores the old behaviour.
Personally, I find the 9.2 behaviour more comfortable. The Unix epoch is defined as UTC, so this effectively means that plain timestamp values are interpreted as UTC: which is how I use them anyway.
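To illustrate the 9.2 interpretation (a Python sketch, not PostgreSQL itself; the original question's timestamp literal isn't shown here, so the naive value below is hypothetical, chosen only because it maps to the epoch value quoted above):

```python
from datetime import datetime, timezone

# Hypothetical timestamp-without-time-zone value. Under the 9.2 rule,
# extract(epoch from ...) treats it as UTC.
naive = datetime(2012, 11, 1, 6, 1, 39, 699610)

# Interpreting the naive value as UTC, as 9.2 does:
as_utc = naive.replace(tzinfo=timezone.utc)
print(as_utc.timestamp())  # 1351749699.69961

# Under 8.4, the same value was measured from *local* midnight instead,
# so the result shifted with the session's timezone setting.
```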
I know that I should migrate my database to UTF-8 to solve this problem, but for some reasons I cannot do that at the moment.
In my case, I'd rather have PostgreSQL save my string, dropping the characters it cannot convert or replacing them with some symbol like "?", than throw an error...
PostgreSQL does not support this. It's requested periodically, but nobody who requests it ever does the work to implement it and to convince the dev team that it's an appropriate option to offer.
You will need to do your text-mangling client-side. In PHP, before you send the text to PostgreSQL, you will need to filter out characters that don't match the database encoding. How to do that is entirely PHP-specific (start with PHP's iconv support, probably). You have already described one way to do this, using utf8_decode.
Using utf8_decode is actually incorrect here, because that function (per the docs) converts to ISO-8859-1, i.e. latin-1, while you're using latin-9, i.e. ISO-8859-15. So it will mangle some of your input characters, in particular the Euro sign; see the changes from ISO-8859-1. Use the iconv function instead, and see the surprisingly useful comments on the utf8_decode function documentation.
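The difference is easy to see. Sketched in Python here rather than PHP, since the byte-level behaviour is the same: the Euro sign exists in latin-9 but not latin-1, so a conversion targeting latin-1 loses it while one targeting latin-9 keeps it.

```python
text = "price: 100 €"

# A utf8_decode-style conversion targets latin-1 (ISO-8859-1), which has
# no Euro sign, so € is lost (replaced with '?', as PHP also does):
latin1 = text.encode("iso8859-1", errors="replace")
print(latin1)  # b'price: 100 ?'

# Converting to latin-9 (ISO-8859-15), as iconv can, keeps the Euro sign
# as byte 0xA4:
latin9 = text.encode("iso8859-15", errors="replace")
print(latin9)  # b'price: 100 \xa4'
```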
If, in the process of filtering the text, you convert it to latin-9 inside PHP, remember that you must set your client_encoding to latin9, since that's the encoding of the text you'll be sending to PostgreSQL. That also means the results will come back in latin-9, so you must convert everything you read from PostgreSQL back from latin-9 to PHP's native utf-8.
If you use utf8_encode to convert your latin-9 output from PostgreSQL for consumption in PHP, you'll have the same latin-1 vs latin-9 problem as with utf8_decode.
For that reason, if possible, try to use a filter that replaces characters not supported in latin-9 without actually converting the string to latin-9. It'll save you a bunch of hassle if you can keep client_encoding set to utf-8 and just mangle your strings instead of converting them.
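One way to build such a filter (again sketched in Python; in PHP you'd do the equivalent with iconv plus a conversion back) is to round-trip through latin-9 with replacement, so unsupported characters become "?" but the string you actually send stays utf-8:

```python
def mangle_to_latin9_repertoire(text: str) -> str:
    """Replace characters that have no latin-9 equivalent with '?', but
    return an ordinary string, so client_encoding can stay utf-8."""
    return text.encode("iso8859-15", errors="replace").decode("iso8859-15")

s = "dash – and euro €"               # en-dash U+2013 is not in latin-9
print(mangle_to_latin9_repertoire(s))  # dash ? and euro €
```

The Euro sign survives because it is part of latin-9's repertoire; only characters the database encoding genuinely cannot store are replaced.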
All this said, I strongly recommend upgrading the database to utf-8 instead. The only reason to keep it in latin-9 would be if you have other client applications that can't cope with characters outside the latin-9 range (i.e. they rely on a latin-9 client_encoding).
Best Answer
It's not documented yet, but it is certainly supported and should remain so going forward. It's actually in the SQL:2011 spec as a type predicate.