I cleaned up and simplified a few minor details. That shouldn't change performance much by itself. But the added SET join_collapse_limit = 1 might:
CREATE OR REPLACE FUNCTION public.ap_keepalive_geteqpid_veiid(tcbserie bigint, protocolo int)
RETURNS TABLE(eqpid int, veiid int, tcbid int
, veiplaca varchar, veiproprietariocliid int, tcbtppid int, tcbversao character, veirpmparametro double precision
, tcbconfiguracao bigint, tcbevtconfig int, veibitsalertas int, sluid int, harid int) AS
$func$
BEGIN
RETURN QUERY
SELECT eqp.eqpID
, eqp.eqpveiID AS veiID
, cb.tcbID
, vei.veiPlaca
, vei.veiProprietariocliID
, cb.tcbtppID
, cb.tcbVersao
, vei.veiRPMParametro
, CASE WHEN cb.tcbConfiguracao = 0 THEN COALESCE(cc.clcConfiguracaoBitsVeic, 0)
ELSE cb.tcbConfiguracao END -- AS tcbConfiguracao
, COALESCE(cb.tcbevtConfig, 0) -- AS tcbevtConfig
, COALESCE(vei.veiBitsAlertas, 0) -- AS veiBitsAlertas
, COALESCE(vei.veisluID, 0) -- AS sluID
, COALESCE(cb.tcbharID, 0) -- AS harID
FROM TabEquipamento eqp
JOIN TabVeiculos vei ON vei.veiID = eqp.eqpveiID
JOIN TabComputadorBordo cb ON cb.tcbID = eqp.eqptcbID
JOIN TabPacoteProduto pp ON pp.tppID = eqp.eqptppID
LEFT JOIN TabCliente cli ON cli.cliid = vei.veiProprietariocliID
LEFT JOIN TabClienteConfig cc ON cc.clcCliID = cli.cliID
WHERE eqp.eqpAtivo = 1
AND vei.veiBloqueioSinal = 0
AND cb.tcbserie = $1
AND pp.tppIDProtocolo = $2;
END
$func$ LANGUAGE plpgsql VOLATILE STRICT COST 10000 ROWS 1
SET join_collapse_limit = 1; -- see below!
Notes
COALESCE can take multiple parameters, no need to nest:
COALESCE(NULLIF(cb.tcbConfiguracao, 0), cc.clcConfiguracaoBitsVeic, 0) AS tcbConfiguracao
The CASE expression I ended up using above should be a bit faster still.
character is suspicious as a data type. It's the same as char(1); I assume you are aware of that.
Inside PL/pgSQL, column aliases that are not referenced in the same query are just for documentation. Only the names in the RETURNS TABLE clause are visible outside the function.
COST 100 is the default for a user-defined function and probably way off for your case. 10000 is probably a better estimate, but unless you nest this function in an outer query, this has hardly any effect.
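If the function already exists, the cost estimate can also be adjusted in place without re-creating it. A minimal sketch (the signature matches the definition above):

```sql
-- Adjust the planner's cost estimate for an existing function.
-- Only matters when the function is nested in a larger query,
-- where the planner weighs its cost against alternatives.
ALTER FUNCTION public.ap_keepalive_geteqpid_veiid(bigint, int) COST 10000;
```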
I removed the parameter defaults (as discussed) and made the function STRICT, since the query would return nothing for any NULL input anyway.
I simplified your table aliases and formatted some more to make it easier to read and play with. That last bit is largely a matter of taste and style.
Answer to question
As to your actual question: There are no (direct) hints for the query planner (optimizer) in PostgreSQL like there are in other RDBMS. Details in the Postgres Wiki on "OptimizerHintsDiscussion".
PL/pgSQL works with prepared statements internally. It re-plans queries inside the function body with the given input parameters for the first couple of calls per session. Only if it finds that specific plans don't perform better than a generic plan will it switch to the generic plan and keep it, which saves some planning overhead.
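Since Postgres 12 you can also override that heuristic per session with plan_cache_mode. A sketch:

```sql
-- Force custom plans: re-plan with actual parameter values on every call,
-- e.g. when the generic plan chosen after the trial calls turns out poor.
SET plan_cache_mode = force_custom_plan;   -- default is 'auto'

-- Or force the generic plan immediately, skipping the trial calls:
SET plan_cache_mode = force_generic_plan;
```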
However, there are a couple of settings you can adjust. In particular, if you know the best query plan, you can force Postgres to take the order of joins in the FROM clause as given and not try to reorder (which can get expensive for many tables - and you have 6 of them) by setting join_collapse_limit. This reduces the cost of planning the query. If done right, it will make the first couple of calls faster. If you mess it up, performance will suffer, of course.
You could put a SET LOCAL as the first command:
...
BEGIN
SET LOCAL join_collapse_limit = 1;
RETURN QUERY ...
Better yet, declare it as an attribute of the function itself like I did above. The effect of SET LOCAL in the function body would last till the end of the transaction, but, per the documentation:
The SET clause causes the specified configuration parameter to be set to the specified value when the function is entered, and then restored to its prior value when the function exits.
Obviously, you need to get the sequence of joins in your FROM clause right yourself. And it has to be good for all possible combinations of parameters. Postgres will not try to optimize. (The STRICT modifier simplifies things a bit, because NULL values are excluded now.)
Put tables with the most selective predicates first.
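Applied to the function above: assuming tcbserie identifies at most one row (making it the most selective predicate), the FROM clause could start from TabComputadorBordo and fan out from there. A sketch, not tested against your data:

```sql
-- With join_collapse_limit = 1, Postgres joins in exactly this order:
FROM   TabComputadorBordo cb                 -- cb.tcbserie = $1: most selective
JOIN   TabEquipamento     eqp ON eqp.eqptcbID = cb.tcbID
JOIN   TabVeiculos        vei ON vei.veiID    = eqp.eqpveiID
JOIN   TabPacoteProduto   pp  ON pp.tppID     = eqp.eqptppID
LEFT   JOIN TabCliente       cli ON cli.cliID   = vei.veiProprietariocliID
LEFT   JOIN TabClienteConfig cc  ON cc.clcCliID = cli.cliID
```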
WARNING: Be aware that such optimizations may turn from helpful to obstructive after upgrades to your Postgres version or any major changes in your DB.
Aside
I assume you are aware that there are a couple of other possible effects that can make the first call slower than subsequent calls, like populating cache memory and other things.
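To see whether a slow first call was dominated by disk reads rather than by planning, you can compare buffer hits versus reads across repeated executions. A sketch with made-up parameter values:

```sql
-- 'shared read' = fetched from disk (cold cache),
-- 'shared hit'  = served from shared buffers (warm cache).
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM ap_keepalive_geteqpid_veiid(12345, 1);  -- example parameters
```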
The problem is that when an area is merged and its code is reused, you have that information; but when an area is merged and its code is deprecated, you don't. In other words, there is no record that an area has been deprecated.
Now, I see two solutions. First, keep the schema as is and, whenever an area is deprecated, add a new row as if it were a new area, stating that it is deprecated, e.g.:
INSERT INTO myTable(code, area, change_date)
VALUES(86001, 'deprecated', '2015-01-01')
So in your query it will show as 'deprecated', or you filter it out.
A second, and possibly better, option is to have a new column that states when (and if) an area was deprecated:
ALTER TABLE myTable ADD deprecated DATE;
UPDATE myTable SET deprecated = '2015-01-01' WHERE gid = 1;
So, in your query you can simply add the filter:
(deprecated IS NULL OR deprecated >= '2016-01-01')
The full code:
SELECT DISTINCT ON (code) code, area, change_date
FROM myTable
WHERE change_date < '2016-01-01'
AND (deprecated IS NULL OR deprecated >= '2016-01-01')
ORDER BY code, change_date DESC;
As another option, for non-deprecated areas you can set deprecated to 'infinity' instead of NULL, so the query does not need the deprecated IS NULL condition, as 'infinity' always compares greater than or equal to any other non-null value.
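That variant might look like this (a sketch, reusing the table and dates from above):

```sql
-- Mark rows that are still active with 'infinity' instead of NULL:
ALTER TABLE myTable ALTER COLUMN deprecated SET DEFAULT 'infinity';
UPDATE myTable SET deprecated = 'infinity' WHERE deprecated IS NULL;

-- The filter then collapses to a single comparison:
SELECT DISTINCT ON (code) code, area, change_date
FROM   myTable
WHERE  change_date < '2016-01-01'
AND    deprecated >= '2016-01-01'
ORDER  BY code, change_date DESC;
```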
Best Answer
If you had permission to kill them with pg_cancel_backend, but doing so did not work, then they must be in a state where they are not responding to interrupts. That is probably a bug, either in PostgreSQL itself, in your code, or in pldbgapi. What do wait_event_type and wait_event say? Can you attach to them with gdb and get backtraces?
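To inspect the stuck backends, something like this shows their wait state (the wait_event columns exist since Postgres 9.6); the pid values below are hypothetical, use whatever you tried to cancel:

```sql
-- What are the unresponsive backends waiting on?
SELECT pid, state, wait_event_type, wait_event, query
FROM   pg_stat_activity
WHERE  pid IN (12345, 12346);  -- hypothetical PIDs of the stuck sessions
```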