PostgreSQL – Is pg_trigger_depth() bad to use for preventing trigger cascading (recursion)?

postgresql, trigger

Why is pg_trigger_depth() = 0 bad to use (for anything other than debugging) when preventing trigger cascading (recursion)?

Can someone provide code to demonstrate why it is bad?

I am guessing it is because, if multiple triggers are working on the same data at the same time, a condition that stops a trigger using pg_trigger_depth() = 0 will also stop any trigger that is second in line to do legitimate work.
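For reference, this is the kind of guard I mean (a minimal sketch; the table and trigger names are just placeholders). The WHEN clause makes the trigger fire only for top-level DML, i.e. only when no other trigger is already on the call stack:

```sql
-- Hypothetical example of the guard in question: the trigger body runs only
-- when the DML was issued directly by a client, not from inside another trigger.
CREATE TABLE tbl_a (id int PRIMARY KEY, val text);

CREATE OR REPLACE FUNCTION tbl_a_log() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- ... write to some other table here ...
    RETURN NEW;
END $$;

CREATE TRIGGER trg_a_log
    AFTER INSERT OR UPDATE ON tbl_a
    FOR EACH ROW
    WHEN (pg_trigger_depth() = 0)      -- skip when fired from another trigger
    EXECUTE PROCEDURE tbl_a_log();
```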

I thought it would be a good solution to this (my) question here, but I am told otherwise:

I thought it would make a good question.

It is offered here as a solution:

Postgres 9.3 documentation:

https://www.postgresql.org/docs/9.3/static/functions-info.html

Best Answer

Yes, it's always bad to make behavior dependent on pg_trigger_depth().

Maybe I'm a little less averse to blanket statements, but what good could it do? There is no argument I can see as to why you would want such a feature. The primary purpose of a database is to ensure data integrity, and as far as I can see (I may be wrong), such a use is always a potential violation of data integrity. In @Erwin's answer he says "if you forget". The point of ensuring integrity is to eliminate that possibility. If an agent could remember everything, understand the consequences, and code around them, you could get data integrity out of anything.

Let's introduce a few terms. In programming, we have:

  • "state" which includes anything a programmer has access to.
  • "execution context" which includes the environment for execution.

We further have a term for a function that has no state: we call that a pure function.

The function always evaluates the same result value given the same argument value(s). The function result value cannot depend on any hidden information or state that may change while program execution proceeds or between different executions of the program, nor can it depend on any external input from I/O devices.

The distinction of purity is useful because it eliminates any burden on the computer, or the programmer, to remember anything: f(x) = y is always true. Here you're violating purity in the worst place -- the database. And you're doing it in a fashion that is complex and error-prone -- making internal execution context a palpable part of your DB application's state.
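To put that in Postgres terms, here is a small sketch (the function name add_one is just an illustration): a result that depends only on its arguments can even be declared IMMUTABLE, while pg_trigger_depth() returns a different value for the exact same call depending on where in a trigger chain it happens to run.

```sql
-- Pure: the result depends only on the argument, so it can be declared IMMUTABLE.
CREATE FUNCTION add_one(i int) RETURNS int
    LANGUAGE sql IMMUTABLE
    AS $$ SELECT i + 1 $$;

-- Not pure: the result depends on hidden execution context, not on any argument.
-- The same call returns 0 at the top level, 1 inside a trigger,
-- 2 inside a trigger fired by that trigger, and so on.
SELECT pg_trigger_depth();
```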

I wouldn't want that. Just consider the complexities you have to buffer mentally.

  • Table A's Trigger A fires
    • Trigger A always issues DML to Table B, firing Trigger B.
      • Trigger B conditionally issues DML to Table A, firing Trigger A.
      • Trigger B always issues DML to Table A, firing Trigger A.
    • Trigger A conditionally issues DML to Table B, firing Trigger B.
      • Trigger B conditionally issues DML to Table A, firing Trigger A.
      • Trigger B always issues DML to Table A, firing Trigger A.

If that looks complex, keep in mind that "conditionally" there can be further expanded to "it happened, but it may not always happen", and "it didn't happen, but it may happen elsewhere."
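A rough sketch of how that bites (all names here are invented, and the "unrelated third trigger" at the end is hypothetical): two tables mirror each other's inserts, and the guard in question is used to break the loop.

```sql
-- Hypothetical mirror setup: every insert into a is copied to b, and vice versa.
CREATE TABLE a (id int PRIMARY KEY, val text);
CREATE TABLE b (id int PRIMARY KEY, val text);

CREATE OR REPLACE FUNCTION copy_to_b() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO b (id, val) VALUES (NEW.id, NEW.val);
    RETURN NEW;
END $$;

CREATE OR REPLACE FUNCTION copy_to_a() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO a (id, val) VALUES (NEW.id, NEW.val);
    RETURN NEW;
END $$;

-- The guard breaks the a -> b -> a -> ... loop by firing only for top-level DML.
CREATE TRIGGER a_copy AFTER INSERT ON a
    FOR EACH ROW WHEN (pg_trigger_depth() = 0) EXECUTE PROCEDURE copy_to_b();
CREATE TRIGGER b_copy AFTER INSERT ON b
    FOR EACH ROW WHEN (pg_trigger_depth() = 0) EXECUTE PROCEDURE copy_to_a();

INSERT INTO a VALUES (1, 'x');   -- fine: mirrored into b as intended

-- Now suppose some unrelated trigger on a third table also inserts into a.
-- That insert runs at pg_trigger_depth() >= 1, so a_copy never fires and
-- b silently falls out of sync: the guard cannot tell "my own recursion"
-- apart from "legitimate work that merely happens to run inside a trigger".
```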

For pg_trigger_depth() to even be useful, you have to have a series of events that is similarly complex. And now you want execution to be dependent on execution context in that chain?

I would avoid this. If your trigger system is this complex, you're playing hot potato with a hand grenade in a closet all by yourself and it doesn't seem likely to end well.