Basically, we would like to create a TRIGGER on each table for which we want to be notified of UPDATE/INSERT/DELETE operations. When the trigger fires, it will execute a function that simply appends a new row (encoding the event) to a log table, which we then poll from an external service.
That's a pretty standard use for a trigger.
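To make it concrete, here is a rough sketch of what we have in mind (change_log, my_table and log_change are placeholder names we made up):

-- Log table that the external service will poll.
CREATE TABLE change_log (
    id         bigserial PRIMARY KEY,
    table_name text        NOT NULL,
    operation  text        NOT NULL,   -- 'INSERT' / 'UPDATE' / 'DELETE'
    changed_at timestamptz NOT NULL DEFAULT now()
);

-- Trigger function: append one event row per modified row.
CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO change_log (table_name, operation)
    VALUES (TG_TABLE_NAME, TG_OP);
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_log
AFTER INSERT OR UPDATE OR DELETE ON my_table
FOR EACH ROW EXECUTE PROCEDURE log_change();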
Before going all in with Postgres TRIGGER(s) we would like to know how they scale: how many triggers can we create on a single Postgres installation?
If you keep creating them, eventually you'll run out of disk space.
There's no specific limit for triggers.
PostgreSQL limits are documented on the about page.
Do they affect query performance?
It depends on the trigger type, trigger language, and what the trigger does.
A simple PL/PgSQL BEFORE ... FOR EACH STATEMENT trigger that doesn't do anything has near-zero overhead. FOR EACH ROW triggers have higher overhead than FOR EACH STATEMENT triggers, scaling, obviously, with the affected row counts.
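For illustration, a do-nothing statement-level trigger looks roughly like this (table and function names are made up); it fires once per statement no matter how many rows are touched, which is why its cost stays near zero:

CREATE OR REPLACE FUNCTION noop_trg() RETURNS trigger AS $$
BEGIN
    RETURN NULL;  -- return value is ignored for statement-level triggers
END;
$$ LANGUAGE plpgsql;

-- Fires once per statement, however many rows that statement affects.
CREATE TRIGGER t_noop
BEFORE INSERT OR UPDATE OR DELETE ON some_table
FOR EACH STATEMENT EXECUTE PROCEDURE noop_trg();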
AFTER triggers are more expensive than BEFORE triggers because they must be queued up until the statement finishes doing its work, then executed. The queue isn't spilled to disk if it gets big (at least in 9.4 and below; that may change in future), so huge AFTER trigger queues can overrun available memory, causing the statement to abort.
A trigger that modifies the NEW row before insert/update is cheaper than a trigger that does DML.
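Roughly, the cheap kind looks like the first function below and the more expensive kind like the second (column and table names here are invented):

-- Cheap: only touches the NEW row in memory, no extra statement is run.
CREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- More expensive: every firing executes an extra DML statement.
CREATE OR REPLACE FUNCTION log_to_audit() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_log (table_name, operation)
    VALUES (TG_TABLE_NAME, TG_OP);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;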
The specific use case you want would perform better with an in-progress enhancement that might make it into PostgreSQL 9.5 (if we're lucky), where FOR EACH STATEMENT triggers can see virtual OLD and NEW tables. This isn't possible in current PostgreSQL versions, so you must use FOR EACH ROW triggers instead.
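(For reference, that enhancement eventually shipped as "transition tables", in PostgreSQL 10 rather than 9.5. On a new enough version a statement-level trigger can batch all the changed rows roughly like this; table and trigger names are made up:)

CREATE OR REPLACE FUNCTION log_batch() RETURNS trigger AS $$
BEGIN
    -- newrows is the transition table holding every inserted row.
    INSERT INTO change_log (table_name, operation)
    SELECT TG_TABLE_NAME, TG_OP FROM newrows;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_batch_log
AFTER INSERT ON my_table
REFERENCING NEW TABLE AS newrows
FOR EACH STATEMENT EXECUTE PROCEDURE log_batch();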
Has anyone tried this before?
Of course. It's a pretty standard use for triggers, along with auditing, sanity checking, etc.
You'll want to look into LISTEN and NOTIFY for a good way to wake up your worker when changes to the task table happen.
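A rough sketch of that pattern, reusing the placeholder names from the question's log-table idea (the channel name task_changes is invented): the trigger sends a notification, and the worker sits in LISTEN instead of polling on a timer.

CREATE OR REPLACE FUNCTION log_and_notify() RETURNS trigger AS $$
BEGIN
    INSERT INTO change_log (table_name, operation)
    VALUES (TG_TABLE_NAME, TG_OP);
    PERFORM pg_notify('task_changes', TG_OP);  -- wake up any listener
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- The worker connection simply runs:
--   LISTEN task_changes;
-- and blocks until a notification arrives, then drains the log table.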
You're already doing the most important thing by avoiding talking to external systems directly from triggers. That tends to be problematic for performance and reliability. People often try to do things like send mail directly from a trigger, and that's bad news.
Why do you expect B to be updated, when you inserted a row with a different id? Actually, your INSERT ALL does not insert anything into B.
The above works fine after fixing your trigger and sample data:
create or replace trigger updB
    after update on B
    for each row
declare
    v varchar2(5);
begin
    -- reference the new column value directly via :new, no SELECT needed
    v := :new.val;
end;
/
This is how you reference the new val in the trigger, not with a select. Your original trigger would run into the infamous ORA-04091 "table is mutating" error.
SQL> insert all into A values ('a', 'false') into b values ('a', 'false') select * from dual;
2 rows created.
SQL> commit;
Commit complete.
SQL> update A set val = 'true';
Cek cek: false
1 row updated.
SQL> commit;
Commit complete.
SQL> select * from a;
ID VAL
---------- ----------
a true
SQL> select * from b;
ID VAL
---------- ----------
a true
For the output: your updA trigger runs in an autonomous transaction. When you select val from A there, your original update is not yet committed, so you see the old value. Fixing your trigger for A:
create or replace trigger updA
    after update on A
    for each row
declare
    pragma autonomous_transaction;
    cek varchar2(5);
begin
    commit;
    -- :new.val already holds the updated value; selecting from A in this
    -- autonomous transaction would still show the old, uncommitted value
    cek := :new.val;
    dbms_output.put_line('Cek cek: ' || cek);
    update B set val = :new.val where id = :new.id;
    commit;   -- end the autonomous transaction
end;
/
Best Answer
If you want to pass data to a trigger that is not provided by one of the predefined variables you could put that data in a temp table before you perform the INSERT/UPDATE/DELETE that will fire the trigger. Then you can retrieve the data from the table inside the trigger function. Temp tables are local to sessions so this is safe to do when multiple clients connect to the same database.
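A minimal sketch of that pattern in PL/pgSQL (trigger_context, audit_log, app_user and my_table are invented names for illustration):

-- Session-local scratch table carrying extra context for the trigger.
CREATE TEMP TABLE IF NOT EXISTS trigger_context (app_user text);

CREATE OR REPLACE FUNCTION audit_with_context() RETURNS trigger AS $$
DECLARE
    ctx_user text;
BEGIN
    -- read whatever the client stashed in the temp table for this session
    SELECT app_user INTO ctx_user FROM trigger_context LIMIT 1;
    INSERT INTO audit_log (table_name, operation, app_user)
    VALUES (TG_TABLE_NAME, TG_OP, ctx_user);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- In the client session, before the DML that fires the trigger:
--   INSERT INTO trigger_context (app_user) VALUES ('alice');
--   UPDATE my_table SET ...;  -- the trigger now sees 'alice'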