Keep in mind that most of the work I do is with PostgreSQL, so this may or may not be 100% on the money, but I think it should be close enough to be helpful.
The basic issue is that in an environment like this you are going to have a great deal of trouble managing the locks. At the lock level you can do either conflict resolution or conflict prevention. Conflict prevention, despite the performance difficulties, seems like it would reduce user frustration significantly.
My approach here would indeed be to do the locking on the central server in a PL/SQL stored procedure, which inserts into the locking table if possible and returns a value indicating success; if that is not possible, it either returns a value indicating failure or raises an exception. (What I have done in the past, for example, is to return something identifying who holds the lock when it is already locked.)
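A minimal sketch of what such a procedure could look like in PL/pgSQL (the doc_locks table and the function name are illustrative assumptions, not anything from your system):

```sql
-- Assumed locking table:
-- CREATE TABLE doc_locks (doc_id    int PRIMARY KEY,
--                         locked_by text NOT NULL,
--                         locked_at timestamptz NOT NULL);

CREATE OR REPLACE FUNCTION try_lock(p_doc_id int, p_user text)
RETURNS text AS $$
DECLARE
    v_holder text;
BEGIN
    INSERT INTO doc_locks (doc_id, locked_by, locked_at)
    VALUES (p_doc_id, p_user, now());
    RETURN NULL;  -- NULL signals success
EXCEPTION WHEN unique_violation THEN
    -- Someone already holds the lock; report who.
    SELECT locked_by INTO v_holder
      FROM doc_locks
     WHERE doc_id = p_doc_id;
    RETURN v_holder;
END;
$$ LANGUAGE plpgsql;
```

The primary key does the conflict prevention here: two concurrent inserts for the same doc_id cannot both succeed.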
What I would omit is checking with the local server at all. If you have a high read-to-write ratio and the chance of collisions is otherwise relatively small, you are going to have to check the central server most of the time anyway, so there isn't much to be gained by also checking the local server. Certainly you wouldn't want to write to both the local and remote servers for locking. Keep locking simple; it is likely to be a source of pain no matter what you do.
The second thing I would suggest is expiring locks like this, perhaps after two hours or so. There are two major reasons for doing so. First, bugs in the application layer can cause locks not to be released; second, if this is over a web interface, HTTP is stateless, so you have no real way of knowing when a session has been abandoned. With expiry you can grant a lock that is valid for a certain period of time, can be renewed pre-emptively (in the background if needed), and times out if the individual closes the browser window and goes home for the day. Some sort of administrative utility to free locks is also something I would recommend.
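Assuming a locking table along the lines of doc_locks(doc_id, locked_by, locked_at), expiry can be as simple as treating stale rows as free:

```sql
-- Reclaim any lock older than two hours before trying to acquire one.
DELETE FROM doc_locks
 WHERE doc_id = 42
   AND locked_at < now() - interval '2 hours';

-- Renewing a lock you already hold is just refreshing its timestamp.
UPDATE doc_locks
   SET locked_at = now()
 WHERE doc_id = 42
   AND locked_by = 'alice';
```

The same DELETE, run without the doc_id predicate, doubles as the administrative "free stale locks" utility.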
I share your sense that 7 sec to acquire a lock is a significant performance cost, but in the end I don't see any better way to do this. Your options are limited significantly by the CAP theorem, and a single central locking system is probably what's needed.
I suppose another option is for the central server to lock only at the branch-location level, and have the branch release its lock once no valid locks have been held there for a certain period of time. This might have the advantage of allowing faster collaboration within a team, since only the first editor at a location would have to incur that cost.
@Chris Aldrich has given a good explanation. I will just add a few things here.
1) There is no concept of a "database user" in DB2. All authentication happens outside the database or instance, in the operating system. Also, unlike in Oracle, there is no direct relationship between a user ID and a schema name. In DB2 a schema is just a logical grouping of objects; it has no special security features, and any user can create any schema. For example, while logged in as mustaccio I run the statement create table foo (id int...), and that creates a schema MUSTACCIO (if it's not already there) and a table in it: schema name resolution defaults to my authorization ID. I might just as well run the statement create table alok.foo (id char(3)...), which creates a schema ALOK and the table in it. In both cases mustaccio will be the owner of the table.
2) Regarding the user ID mapping: I would say that the DAS owner dasusr1 and the fenced user db2fenc1 do not map to anything in an Oracle database. The instance owner db2inst1 maps to the oracle OS user. Whoever creates a database (db2inst1, or some other user authorized to do so by membership in the SYSADM group, for example) obtains DBADM and SECADM authority in that database, which is somewhat similar to being system and/or sys (I'm not really sure what the distinction between those two is in an Oracle database). If you need a functional ID that owns database objects, you create appowner in the operating system, grant it the appropriate permissions, and connect as that user when creating objects. Similarly, you create appuser in the operating system, grant object access privileges to it, and let your application connect as that user.
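As a sketch (the object and account names appowner, appuser, and orders are hypothetical), the grants might look like this:

```sql
-- Run as someone holding SECADM/DBADM, e.g. the database creator:
GRANT DBADM ON DATABASE TO USER appowner;

-- After appowner creates its tables, the application account
-- gets only the object privileges it needs:
GRANT SELECT, INSERT, UPDATE, DELETE
   ON TABLE appowner.orders TO USER appuser;
```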
3) Since there are no database users in DB2, you cannot drop a user. To delete objects in a particular schema you can use the ADMIN_DROP_SCHEMA() procedure.
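For example, to drop schema ALOK and everything in it (the error-table names here are just placeholders; the procedure records any objects it could not drop in that table):

```sql
CALL SYSPROC.ADMIN_DROP_SCHEMA('ALOK', NULL, 'ERRSCHEMA', 'ERRTAB')
```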
Best Answer
It simply means that a PDB looks like a normal database instance: when you log into a PDB, you see only the users and database objects that belong to that PDB. The CDB is the actual server instance, and it holds the objects needed for maintaining the server. You can also log into the CDB, where the dictionary tables have a new column (CON_ID) marking the rows associated with each specific PDB.
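For illustration, from the CDB root the container-wide dictionary views expose that column (user SCOTT here is just an example):

```sql
-- List the pluggable databases in this container database:
SELECT con_id, name, open_mode FROM v$pdbs;

-- CDB_* views are the DBA_* views plus CON_ID, spanning all PDBs:
SELECT con_id, owner, table_name
  FROM cdb_tables
 WHERE owner = 'SCOTT';
```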