In the Slony docs they give the following sequence as an example of a structured switchover [http://slony.info/documentation/failover.html#AEN839]:
lock set (id = 1, origin = 1);
move set (id = 1, old origin = 1, new origin = 2);
wait for event (origin = 1, confirmed = 2, wait on=1);
Anyway, I've always used the alt-perl tools with the following, probably well-known, syntax. This moves the origin of set2 from node 1 to node 2, i.e. node 2 becomes the provider.
/usr/local/slony/bin/slonik_move_set set2 1 2 | /usr/local/pgsql/bin/slonik
As you state, the script uses this sequence of commands: lock set, sync, wait for event, move set. This has worked flawlessly for me.
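For reference, a minimal sketch of what that boils down to in plain slonik, assuming set2 corresponds to set id 2 and using placeholder cluster name and conninfo strings (the exact script the tool emits depends on your slon_tools.conf):

cluster name = replication;
node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=node2 user=slony';

lock set (id = 2, origin = 1);
sync (id = 1);
wait for event (origin = 1, confirmed = 2, wait on = 1);
move set (id = 2, old origin = 1, new origin = 2);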
What might take some time is the locking and the sync. The lock might have to wait for any long-running transactions to finish, and if the sync is lagging, all nodes have to catch up before a slonik_move_set can complete.
It seems reasonable to do a sync after the lock as a way to confirm that the nodes are in sync. You could add the sync to your slonik script, i.e.:
lock set (id = 1, origin = 1);
sync (id = 1);
wait for event (origin = 1, confirmed = 2, wait on=1);
move set (id = 1, old origin = 1, new origin = 2);
wait for event (origin = 1, confirmed = 2, wait on=1);
Also note the id of the confirming node, and that you can specify a timeout for the wait for event command; the default is 600 seconds [http://slony.info/documentation/stmtwaitevent.html].
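For example, to have slonik give up after five minutes instead of the default ten (the timeout value here is just illustrative):

wait for event (origin = 1, confirmed = 2, wait on = 1, timeout = 300);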
In my situation, this error was caused by Slony not being installed on every machine in the cluster. I had Slony and all the requisite libs installed on the main host/master, but the clients needed Slony installed as well (or perhaps just the library functions file placed in $libdir).
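A quick sanity check on each node is to verify that the Slony functions library actually sits under $libdir of that machine's PostgreSQL installation; the exact file name varies by Slony version, slony1_funcs*.so is just the usual pattern:

pg_config --pkglibdir
ls $(pg_config --pkglibdir)/slony1_funcs*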
Best Answer
Slony is trigger-based. Since you cannot have triggers on views, you cannot use Slony to replicate views.
However, you can achieve the desired effect by using Slony to replicate the tables used by your views and duplicating the view definitions on your slave servers.
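As an illustration with made-up names (public.orders, public.open_orders, set id 1, table id 10), the slonik side replicates only the underlying table:

set add table (set id = 1, origin = 1, id = 10,
    fully qualified name = 'public.orders', comment = 'orders table');

while the view is created by hand, with the same definition, on the master and on every slave:

CREATE VIEW public.open_orders AS
    SELECT * FROM public.orders WHERE status = 'open';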