Keep in mind that most of the work I do is with PostgreSQL, so this may not be 100% on the money, but it should be close enough to be helpful.
The basic issue is that in an environment like this you are going to have a great deal of trouble managing the locks. At the lock level you can do either conflict resolution or conflict prevention. Despite the performance cost, conflict prevention seems like it would reduce user frustration significantly.
My approach would indeed be to do the locking on the central server in a PL/SQL stored procedure: it inserts into the locking table if possible and returns a value indicating success; if the insert is not possible, it either returns a value indicating failure or raises an exception (what I have done in the past, for example, is return something identifying who holds the lock when it is already taken).
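As a rough sketch (in PL/pgSQL, since PostgreSQL is what I know best; the document_locks table, columns, and function name are all made up for illustration), the procedure could look something like this:

    -- Hypothetical locking table; all names are illustrative.
    CREATE TABLE document_locks (
        document_id bigint PRIMARY KEY,
        locked_by   text        NOT NULL,
        locked_at   timestamptz NOT NULL DEFAULT now()
    );

    -- Try to take the lock: returns NULL on success, or the name of
    -- the current holder if someone else already has it.
    CREATE OR REPLACE FUNCTION acquire_lock(p_document_id bigint, p_user text)
    RETURNS text AS $$
    DECLARE
        v_holder text;
    BEGIN
        INSERT INTO document_locks (document_id, locked_by)
        VALUES (p_document_id, p_user)
        ON CONFLICT (document_id) DO NOTHING;

        IF FOUND THEN
            RETURN NULL;          -- lock acquired
        END IF;

        SELECT locked_by INTO v_holder
          FROM document_locks
         WHERE document_id = p_document_id;
        RETURN v_holder;          -- somebody else holds the lock
    END;
    $$ LANGUAGE plpgsql;

Whether you return the holder or raise an exception is mostly a matter of taste; returning it lets the client tell the user who currently has the document.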
What I would omit is checking with the local server at all. If you have a high read-to-write ratio and the chance of collisions is otherwise relatively small, you are going to have to check the central server most of the time anyway, so there isn't much to be gained by also checking the local server. You certainly wouldn't want to write lock records to both the local and remote servers. Keep locking simple; it is likely to be a source of pain no matter what you do.
The second thing I would suggest is expiring locks, perhaps after two hours or so. There are two major reasons for this: bugs in the application layer can cause locks not to be released, and if this is over a web interface, HTTP is stateless, so you have no reliable way of knowing that the client has gone away. A lock would then be valid for a certain period, could be renewed pre-emptively (in the background if needed), and would time out if the individual closes the browser window and goes home for the day. Some sort of administrative utility to free locks is also something I would recommend.
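Continuing the sketch above (same made-up table, with an arbitrary two-hour expiry), expiry can be handled by clearing stale rows before the insert, renewal is just a timestamp bump, and the administrative release is a plain delete:

    -- Treat locks older than 2 hours as free: clear stale rows,
    -- then attempt the insert from acquire_lock() exactly as before.
    DELETE FROM document_locks
     WHERE document_id = 42
       AND locked_at < now() - interval '2 hours';

    -- Renew a lock you already hold (run periodically in the background).
    UPDATE document_locks
       SET locked_at = now()
     WHERE document_id = 42
       AND locked_by = 'alice';

    -- Administrative release of a stuck lock.
    DELETE FROM document_locks WHERE document_id = 42;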
I share your sense that 7 seconds to acquire a lock is a significant performance cost, but in the end I don't see any better way to do this. Your options are limited significantly by the CAP theorem, and a single central locking system is probably what's needed.
I suppose another option is to have the central server lock at the branch-location level, and have the branch release the lock once no valid local locks have been held for a certain period. This might allow faster collaboration within a team, since only the first editor at a branch would have to incur that cost.
It's the other way around. Active Data Guard is included if you license GoldenGate on Oracle Database Enterprise Edition.
The GoldenGate product page on Oracle Technology Network states:
The license for Oracle GoldenGate includes a full use license for Oracle Active Data Guard, and a full use license for XStream in the Oracle Database.
The short answer is 'no', you can't reliably prove this without lower-level details. Auditing and triggers will work from the database layer. If you have access to the GoldenGate trail files (the files containing all the extracted changes), you can see exactly what GoldenGate will replicate by using logdump (another binary that comes with OGG). It sounds like you have bi-directional replication (active-active). If so, your Extract process should be configured to ignore your OGG user, since you don't want to replicate changes back to the source.
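For reference, a logdump session for inspecting a trail file looks roughly like the following (the trail file name here is made up), and on an Oracle source the usual way to make Extract skip the replication user's own transactions is TRANLOGOPTIONS EXCLUDEUSER; the group, alias, and schema names below are placeholders:

    $ ./logdump
    Logdump 1> open ./dirdat/aa000000000
    Logdump 2> ghdr on
    Logdump 3> detail data
    Logdump 4> next

    -- Extract parameter file (Oracle source assumed; names are placeholders)
    EXTRACT ext1
    USERIDALIAS ogg_alias
    TRANLOGOPTIONS EXCLUDEUSER ggadmin
    EXTTRAIL ./dirdat/aa
    TABLE app.*;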
Another thing to consider is whether triggers are enabled on your unused/secondary site. I have seen weird things happen when triggers are not disabled on the receiving side (duplicated history records, unique key violations, etc.).
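If the target is Oracle on a reasonably recent release, the simplest way I know of to stop target-side triggers from firing during apply is the SUPPRESSTRIGGERS option in the Replicat parameter file; otherwise you can disable them by hand (the trigger name below is made up):

    -- Replicat parameter file (Oracle target assumed)
    DBOPTIONS SUPPRESSTRIGGERS

    -- Or disable a specific trigger manually on the target:
    ALTER TRIGGER app.history_trg DISABLE;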
Also, if your secondary site is unused, maybe Data Guard is a better solution. It all depends on your software and infrastructure architecture. It won't be a bi-directional solution, but it is arguably simpler than OGG.