Basically, we would like to create a TRIGGER on each table for which we want to be notified of UPDATE/INSERT/DELETE operations. Once this trigger fires, it will execute a function that simply appends a new row (encoding the event) to a log table that we will then poll from an external service.
That's a pretty standard use for a trigger.
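For illustration, this is roughly the kind of log table we have in mind (all names here are placeholders, not a final design):

CREATE TABLE change_log (
    id         bigserial   PRIMARY KEY,
    table_name text        NOT NULL,
    operation  text        NOT NULL,   -- 'INSERT', 'UPDATE' or 'DELETE'
    row_data   json,                   -- snapshot of the affected row
    created_at timestamptz NOT NULL DEFAULT now()
);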
Before going all in with Postgres TRIGGER(s) we would like to know how they scale: how many triggers can we create on a single Postgres installation?
If you keep creating them, eventually you'll run out of disk space.
There's no specific limit for triggers.
PostgreSQL limits are documented on the about page.
Do they affect query performance?
It depends on the trigger type, trigger language, and what the trigger does.
A simple PL/PgSQL BEFORE ... FOR EACH STATEMENT trigger that doesn't do anything has near-zero overhead. FOR EACH ROW triggers have higher overhead than FOR EACH STATEMENT triggers, scaling, obviously, with the affected row counts.
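For concreteness, a do-nothing statement-level trigger looks like the sketch below (tasks is a placeholder table name); it fires once per statement no matter how many rows the statement touches:

CREATE OR REPLACE FUNCTION noop_trigger()
RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
    RETURN NULL;   -- the return value is ignored for statement-level triggers
END;
$$;

CREATE TRIGGER tasks_noop_stmt
    BEFORE INSERT OR UPDATE OR DELETE ON tasks
    FOR EACH STATEMENT
    EXECUTE PROCEDURE noop_trigger();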
AFTER triggers are more expensive than BEFORE triggers because they must be queued up until the statement finishes doing its work, then executed. They aren't spilled to disk if the queue gets big (at least in 9.4 and below, may change in future), so huge AFTER trigger queues can cause available memory to overrun, resulting in the statement aborting.
A trigger that modifies the NEW row before insert/update is cheaper than a trigger that does DML.
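For instance, a BEFORE trigger that only tweaks the incoming row stays cheap because it issues no extra statements (updated_at is a hypothetical column):

CREATE OR REPLACE FUNCTION set_updated_at()
RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
    NEW.updated_at := now();   -- modify the row in flight, no additional DML
    RETURN NEW;
END;
$$;

CREATE TRIGGER tasks_set_updated_at
    BEFORE INSERT OR UPDATE ON tasks
    FOR EACH ROW
    EXECUTE PROCEDURE set_updated_at();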
The specific use case you want would perform better with an in-progress enhancement that might make it into PostgreSQL 9.5 (if we're lucky), where FOR EACH STATEMENT triggers can see virtual OLD and NEW tables. This isn't possible in current PostgreSQL versions, so you must use FOR EACH ROW triggers instead.
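A row-level logging trigger along those lines is sketched below; it reuses the hypothetical change_log table from the question, so treat the names as assumptions rather than a fixed recipe:

CREATE OR REPLACE FUNCTION log_change()
RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO change_log (table_name, operation, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, to_json(OLD));
    ELSE
        INSERT INTO change_log (table_name, operation, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, to_json(NEW));
    END IF;
    RETURN NULL;   -- the result is ignored for AFTER triggers
END;
$$;

CREATE TRIGGER tasks_log_change
    AFTER INSERT OR UPDATE OR DELETE ON tasks
    FOR EACH ROW
    EXECUTE PROCEDURE log_change();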
Has anyone tried this before?
Of course. It's a pretty standard use for triggers, along with auditing, sanity checking, etc.
You'll want to look into LISTEN and NOTIFY for a good way to wake up your worker when changes to the task table happen.
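For example, the logging trigger function could raise a notification right after inserting the log row, and the worker could LISTEN on that channel instead of polling on a timer (the channel name change_log is arbitrary):

-- In the trigger function (PL/pgSQL), right after the INSERT into the log table:
PERFORM pg_notify('change_log', TG_TABLE_NAME);

-- In the worker's database session:
LISTEN change_log;
-- The worker is then woken up asynchronously whenever a notification arrives
-- and only then needs to query the log table for new rows.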
You're already doing the most important thing by avoiding talking to external systems directly from triggers. That tends to be problematic for performance and reliability. People often try to do things like send mail directly from a trigger, and that's bad news.
You can use this simpler / cheaper query for your view, for a CTE, or simply as a subquery:
SELECT ser_nr, meas_ser, meas_value
, row_number() OVER (PARTITION BY ser_nr ORDER BY meas_ser DESC) = 1 AS last_meas
FROM measurements;
SQL Fiddle.
There are no corner-case problems with duplicate or NULL values here, as long as you have the PK constraint you added in the fiddle:
ALTER TABLE measurements ADD CONSTRAINT measurements_pk PRIMARY KEY (ser_nr, meas_ser);
The associated index will also help to make it fast if you only need a small selection of ser_nr values from a big table.
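If you only need the rows flagged as the last measurement, wrap the query in a subquery (the window expression can't be referenced directly in WHERE); for example:

SELECT ser_nr, meas_ser, meas_value
FROM  (
   SELECT ser_nr, meas_ser, meas_value
        , row_number() OVER (PARTITION BY ser_nr ORDER BY meas_ser DESC) = 1 AS last_meas
   FROM   measurements
   ) sub
WHERE  last_meas;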
And yes, this dynamic approach has so much less potential for headache (consistency in the face of concurrent write access!) than trying to keep rows in your base table current.
Best Answer
I didn't look at the code, but obviously the row is deleted from grouplayer before the BEFORE trigger on layer is executed. Perhaps you can perform the task in a BEFORE trigger on layer.
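A minimal sketch of that idea, assuming grouplayer has a layer_id column referencing layer.id (both column names are guesses, and the body only stands in for whatever your current trigger does): the related grouplayer rows are still present when a BEFORE DELETE trigger on layer runs, so they can still be read there.

CREATE OR REPLACE FUNCTION layer_before_delete()
RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
    -- At this point the matching grouplayer rows still exist,
    -- so they can be read, archived, re-pointed, etc.
    PERFORM 1 FROM grouplayer WHERE layer_id = OLD.id;
    RETURN OLD;
END;
$$;

CREATE TRIGGER layer_before_delete
    BEFORE DELETE ON layer
    FOR EACH ROW
    EXECUTE PROCEDURE layer_before_delete();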