After a pg_dump from DB1 and a restore to DB2 (with psql -f, most likely), ALTER SCHEMA ... RENAME will take care of all internal references. Obviously, any code you store outside the database must be updated - you mentioned that already.
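For reference, the rename itself is a single statement; "old_schema" and "new_schema" below are placeholder names, not from the original question:

```sql
-- Run in DB2 after the restore; tables, indexes, sequences and other
-- objects inside the schema follow the rename automatically.
ALTER SCHEMA old_schema RENAME TO new_schema;
```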
What might be less obvious: function bodies have to be updated manually if they contain references to the schema name. This can only concern functions you wrote yourself.
There is also less common stuff like external references via dblink, including self-references, that would need to be adapted. You'll know about it if you are using it.
Other than that, the search_path must match your respective setups in the two databases. This concerns general settings in postgresql.conf, but also settings per DATABASE or per ROLE, and possibly options supplied with the connection from your apps.
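The per-DATABASE and per-ROLE settings can be adjusted like this; "new_schema", "db2" and "app_role" are placeholder names:

```sql
-- Default search_path for everyone connecting to this database:
ALTER DATABASE db2 SET search_path = new_schema, public;

-- Or narrower, for one role in that database only:
ALTER ROLE app_role IN DATABASE db2 SET search_path = new_schema, public;
```

These take effect for new sessions; existing sessions keep their old setting.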
Then you should be good to go.
OR, and that's what I would do, you hack the dump. After you dump from DB1, replace all occurrences of the old schema name with an editor of your choice. Then run the edited SQL script via psql -f to restore to DB2. All done.
Of course, if the name is not unique across the whole dump, you will have to be careful to replace only relevant strings.
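A minimal sketch of the edit-the-dump approach with sed instead of an interactive editor; "old_schema" and "new_schema" are placeholder names, and a tiny fake dump fragment stands in for your real dump file:

```shell
# Stand-in for the real dump produced by pg_dump:
cat > /tmp/db1_dump.sql <<'EOF'
CREATE SCHEMA old_schema;
CREATE TABLE old_schema.t (id integer);
EOF

# \b word boundaries (a GNU sed extension) reduce the risk of touching
# strings that merely contain the schema name as a substring:
sed -E 's/\bold_schema\b/new_schema/g' /tmp/db1_dump.sql > /tmp/db2_dump.sql

# Then restore the edited script into DB2 (not run here):
# psql -d db2 -f /tmp/db2_dump.sql
```

If the name can appear inside string literals or column data, review the diff before restoring rather than trusting a blind replace.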
If your setup is huge and the schema name is not very selective, you might want to restore to DB2 as is, run ALTER SCHEMA ... RENAME there, pg_dump again and delete from DB2, and do the rest in your editor before you restore for good.
You should be able to use triggers and dblink to sync your data if you're prepared to install PL/Perl on the PostgreSQL end. If you are running version 9.1+, you could instead try the foreign data wrapper functionality. There appear to be TDS and FDW wrappers, though I've not used either myself.
Unless I'm missing something, an external queue manager sounds like overkill.
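For the trigger-plus-dblink route, a bare sketch follows. Note it uses a plain PL/pgSQL trigger calling dblink_exec rather than PL/Perl; the connection string, table name and columns are all placeholders, not from the original setup:

```sql
CREATE EXTENSION IF NOT EXISTS dblink;

-- Push every local insert to the remote database as it happens.
CREATE OR REPLACE FUNCTION sync_insert() RETURNS trigger AS $$
BEGIN
   PERFORM dblink_exec(
      'host=remotehost dbname=db2 user=sync_user password=secret',
      format('INSERT INTO t (id, payload) VALUES (%s, %L)',
             NEW.id, NEW.payload));
   RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_sync AFTER INSERT ON t
   FOR EACH ROW EXECUTE PROCEDURE sync_insert();
```

Be aware that each insert then carries the latency of a remote round trip, and a remote failure aborts the local transaction unless you add error handling.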
You could use postgres_fdw to link the two databases together and execute queries that compare the different tables as if they were in the same database. See the documentation on postgres_fdw for examples.
Note that this requires being able to connect from one database to the other remotely, and queries might be resource-intensive (you are basically pulling the data of the remote table over the wire).
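A sketch of such a setup, assuming PostgreSQL 9.5+ for IMPORT FOREIGN SCHEMA; server, user and table names are placeholders:

```sql
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER db2_server
   FOREIGN DATA WRAPPER postgres_fdw
   OPTIONS (host 'remotehost', dbname 'db2', port '5432');

CREATE USER MAPPING FOR current_user
   SERVER db2_server
   OPTIONS (user 'compare_user', password 'secret');

-- Map the remote tables into a local schema, then compare:
CREATE SCHEMA remote;
IMPORT FOREIGN SCHEMA public FROM SERVER db2_server INTO remote;

-- Rows present locally but missing remotely
-- (this pulls the remote table's data over the wire):
SELECT * FROM public.t
EXCEPT
SELECT * FROM remote.t;
```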