After a pg_dump from DB1 and a restore to DB2 (with psql -f, most likely), ALTER SCHEMA will take care of all internal references. Obviously, any code you store outside the database must be updated - you mentioned that already.
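A minimal sketch of that sequence (assuming the schema is named old_schema in DB1 and should be new_schema in DB2; adapt names and connection options):

    pg_dump db1 > db1.sql        # plain SQL dump of DB1
    psql -d db2 -f db1.sql       # restore into DB2
    psql -d db2 -c 'ALTER SCHEMA old_schema RENAME TO new_schema;'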
What might be less obvious: function bodies have to be updated manually if they contain any references to the schema name. This can only concern functions you wrote yourself.
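For example, a function like this (hypothetical names) stores its body as a plain string, so the embedded schema name is not rewritten by ALTER SCHEMA:

    CREATE OR REPLACE FUNCTION f_sum_amount()
      RETURNS numeric
      LANGUAGE sql AS
    $$SELECT sum(amount) FROM old_schema.invoice$$;

After the rename you would have to re-create it referencing new_schema.invoice.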
There is also less common stuff like external references via dblink, including self-references, that would need to be adapted. You'll know about it if you are using it.
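A self-reference of that kind might look like this (hypothetical table; requires the dblink extension):

    SELECT *
    FROM   dblink('dbname=db1', 'SELECT id FROM old_schema.tbl')
           AS t(id int);

The schema name is hidden inside a query string, so no rename will catch it.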
Other than that, the search_path must match your respective setups in the two databases. This concerns general settings in postgresql.conf, but also settings per DATABASE or per ROLE, and possibly options supplied with the connection from your apps.
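To inspect and align these (db2, myrole and new_schema are placeholders):

    SHOW search_path;  -- effective setting in the current session

    ALTER DATABASE db2 SET search_path = new_schema, public;
    ALTER ROLE myrole IN DATABASE db2 SET search_path = new_schema, public;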
Then you should be good to go.
OR, and that's what I would do, you hack the dump. After you dump from DB1, replace all occurrences of the old schema name with an editor of your choice. Then run the edited SQL script via psql -f to restore to DB2. All done.
Of course, if the name is not unique across the whole dump, you will have to be careful to replace only relevant strings.
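If the name is distinctive enough, a scripted replacement works, too; a sketch with GNU sed (inspect the result before restoring!):

    pg_dump db1 > db1.sql
    sed 's/\bold_schema\b/new_schema/g' db1.sql > db1_edited.sql
    psql -d db2 -f db1_edited.sql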
If your setup is huge and the schema name is not very selective, you might want to restore to DB2 as is, run ALTER SCHEMA there, pg_dump from DB2 and delete the schema from DB2 again, and do the rest in your editor before you restore for good.
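That roundabout route, as a sketch (assumes the schema arrives as old_schema and DB2 holds nothing else you care about):

    psql -d db2 -f db1.sql                               # restore as is
    psql -d db2 -c 'ALTER SCHEMA old_schema RENAME TO new_schema;'
    pg_dump db2 > db2.sql                                # dump with the new name
    psql -d db2 -c 'DROP SCHEMA new_schema CASCADE;'     # delete again
    # fix the few remaining references in db2.sql in your editor, then:
    psql -d db2 -f db2.sql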
Your trigger runs after the updates have occurred on the table. If you update the table again in the trigger, a new update is performed. See also the discussion about RECURSIVE_TRIGGERS.
If you want to run code before the update you will have to create an INSTEAD OF trigger.
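A minimal sketch of the INSTEAD OF variant (hypothetical table and column names):

    CREATE TRIGGER trg_job_status
    ON dbo.job
    INSTEAD OF UPDATE
    AS
    BEGIN
        -- run your pre-update logic here, then perform the update yourself
        UPDATE j
        SET    j.status = i.status
        FROM   dbo.job AS j
        JOIN   inserted AS i ON i.job_id = j.job_id;
    END;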
Whether having to do two updates instead of one will become the bottleneck is entirely subject to the efficiency of the update in your trigger, and of course whether this UPDATE is already on the critical path or not. You probably have a primary key on the job id, the pages are hot in the buffer pool, and you cannot have lock conflicts. So basically it is only subject to a) how fast your log can accept writes and b) whether the version store (which serves the deleted and inserted pseudo-tables) can keep up.
If you ask me: such business logic status housekeeping belongs in the application, not in a trigger...
Best Answer
There are a couple of ways to deal with this using triggers:
FDW: depending on the version of PG, this can be fast or extremely slow.
PL/Python: requires the Python language with the psycopg library to be installed.
I like to use PL/Python as it's easier to add logic for when the connection between the two servers becomes broken. When the connection is broken, use another table to hold onto the pending changes, then use a PgAgent job to retry updating the data.
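A sketch of the PL/Python variant (all names are hypothetical; assumes plpython3u with psycopg2 on PostgreSQL 11+, and a local pending_changes table as the fallback):

    CREATE OR REPLACE FUNCTION push_change_to_remote()
      RETURNS trigger
      LANGUAGE plpython3u AS
    $$
    import json
    row = json.dumps(dict(TD["new"]), default=str)  # the NEW row as JSON
    try:
        import psycopg2
        conn = psycopg2.connect("host=remote-server dbname=db2 user=repl")
        cur = conn.cursor()
        cur.execute("INSERT INTO mirror.changes (payload) VALUES (%s)", (row,))
        conn.commit()
        conn.close()
    except Exception:
        # connection broken: park the change locally;
        # a PgAgent job retries the pending rows later
        plan = plpy.prepare(
            "INSERT INTO pending_changes (payload) VALUES ($1)", ["text"])
        plpy.execute(plan, [row])
    $$;

    CREATE TRIGGER trg_push_change
    AFTER INSERT OR UPDATE ON tbl
    FOR EACH ROW EXECUTE FUNCTION push_change_to_remote();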