Keep in mind that most of the work I do is with PostgreSQL, so this may not be 100% on the money, but it should be close enough to be helpful.
The basic issue is that in an environment like this you are going to have a great deal of trouble managing the locks. At the lock level you can do either conflict resolution or conflict prevention. Despite the performance difficulties, conflict prevention seems like it would reduce user frustration significantly.
My approach here would indeed be to do the locking on the central server in a PL/SQL stored procedure, which inserts into the locking table if possible and returns a value indicating success; if that is not possible, it either returns a value indicating failure or raises an exception. (What I have done in the past, for example, is return something identifying who holds the lock when it is already locked.)
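Since I work mostly in PostgreSQL, here is a minimal PL/pgSQL sketch of that insert-and-report pattern; the doc_locks table and all names in it are hypothetical:

-- hypothetical locking table: one row per held lock
CREATE TABLE doc_locks (
    doc_id    integer PRIMARY KEY,
    locked_by text NOT NULL,
    locked_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION acquire_lock(p_doc_id integer, p_user text)
RETURNS text AS $$
DECLARE
    v_holder text;
BEGIN
    INSERT INTO doc_locks (doc_id, locked_by) VALUES (p_doc_id, p_user);
    RETURN NULL;                          -- NULL means the lock was acquired
EXCEPTION WHEN unique_violation THEN
    SELECT locked_by INTO v_holder
      FROM doc_locks WHERE doc_id = p_doc_id;
    RETURN v_holder;                      -- tells the caller who already holds it
END;
$$ LANGUAGE plpgsql;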
What I would omit is checking with the local server. If you have a high read-to-write ratio and the chance of collisions is otherwise relatively small, you are going to have to check the central server most of the time anyway, so there isn't much to be gained by also checking the local server. You certainly wouldn't want to write to both the local and remote servers for locking. Keep locking simple; it is likely to be a source of pain no matter what you do.
The second thing I would suggest is expiring locks, perhaps after 2 hours or so. There are two major reasons for this. The first is that bugs in the application layer can cause locks not to be released; the second is that if this is over a web interface, HTTP is stateless, so you have no real way of knowing when the client has gone away. This way you can grant a lock that is valid for a certain period of time, can be renewed pre-emptively (in the background if needed), and times out if the individual closes the browser window and goes home for the day. Some sort of administrative utility for freeing locks is also something I would recommend.
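Extending the hypothetical sketch above, expiry and renewal stay simple: reap stale rows before inserting, and let the current holder refresh its timestamp to renew.

CREATE OR REPLACE FUNCTION acquire_lock(p_doc_id integer, p_user text)
RETURNS text AS $$
DECLARE
    v_holder text;
BEGIN
    -- reap an expired lock before trying to take it (2 hours as in the text)
    DELETE FROM doc_locks
     WHERE doc_id = p_doc_id
       AND locked_at < now() - interval '2 hours';

    INSERT INTO doc_locks (doc_id, locked_by) VALUES (p_doc_id, p_user);
    RETURN NULL;
EXCEPTION WHEN unique_violation THEN
    SELECT locked_by INTO v_holder
      FROM doc_locks WHERE doc_id = p_doc_id;
    RETURN v_holder;
END;
$$ LANGUAGE plpgsql;

-- renewal from the current holder is just a timestamp refresh
UPDATE doc_locks SET locked_at = now()
 WHERE doc_id = 1 AND locked_by = 'alice';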
I share your sense that 7 sec to acquire a lock is a significant performance cost, but in the end I don't see any better way to do this. Your options are limited significantly by the CAP theorem and a single central locking system is probably what's needed.
I suppose another option is to have the central server lock at the branch level, and have the branch release the lock once no valid local locks have been held for a certain period of time. This might have the advantage of allowing faster collaboration within a team, since only the first editor at a branch would have to incur that cost.
Without a doubt, a new design is needed for step 2. I would suggest that, instead of passing the Database Master Key password, which typically remains fairly constant, step 2 back up the Database Master Key using a randomly generated password of sufficient length. That password can then be encrypted at the source, passed to the destination server in encrypted form, and finally decrypted and used to restore the Database Master Key. For this design you will need to add a procedure, a certificate and a view to the source server's msdb database, and a procedure and a certificate to the destination server's msdb database. You will also need to create a shared folder on the destination server with write permission granted to the prod server's SQL Agent service account. The objects involved are:
- ProdServer.msdb.vwGetRandomPass - view that generates random long passwords
- ProdServer.msdb.DMKEncryptionCertificate - certificate used to encrypt the DMK password
- ProdServer.msdb.usp_BackupDMK - retrieves a random password from the view vwGetRandomPass, uses it to back up the DMK to the share, encrypts the password using the certificate DMKEncryptionCertificate, and returns the encrypted password as an output varbinary variable
- TargetServer.msdb.DMKEncryptionCertificate - certificate used to decrypt the DMK password, restored from a backup of the certificate and private key on the prod server
- TargetServer.msdb.usp_RestoreDMK - accepts a varbinary password parameter, decrypts it using DMKEncryptionCertificate, restores the DMK from the share using the decrypted password, and deletes the DMK backup file upon successful restore
Step 2 would then consist of calling the prod server procedure, followed by a call to the TargetServer procedure to complete the restore. You can use a linked server, an osql call, or some other method to call the procedure on the target server.
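A rough sketch of what the two procedures (created in msdb) might look like follows. ProdDB, TargetDB and the share path are hypothetical names; BACKUP/RESTORE MASTER KEY only accept literal strings, hence the dynamic SQL. Note the certificate's key must be long enough to encrypt the full 128-character password (a 2048-bit key suffices).

CREATE PROCEDURE dbo.usp_BackupDMK
    @EncryptedPass varbinary(512) OUTPUT
AS
BEGIN
    DECLARE @pass varchar(128), @sql nvarchar(max);

    -- pull a fresh random password from the view
    SELECT @pass = pass FROM msdb.dbo.vwGetRandomPass;

    -- the view never produces single quotes, so no escaping is needed
    SET @sql = N'USE ProdDB;
BACKUP MASTER KEY TO FILE = ''\\TargetServer\DMKShare\dmk.bak''
ENCRYPTION BY PASSWORD = ''' + @pass + ''';';
    EXEC (@sql);

    -- hand the password back encrypted with the shared certificate
    SET @EncryptedPass = EncryptByCert(Cert_ID('DMKEncryptionCertificate'), @pass);
END
GO

CREATE PROCEDURE dbo.usp_RestoreDMK
    @EncryptedPass varbinary(512)
AS
BEGIN
    DECLARE @pass varchar(128), @sql nvarchar(max);

    -- only this server holds the private key needed to decrypt
    SET @pass = CAST(DecryptByCert(Cert_ID('DMKEncryptionCertificate'),
                                   @EncryptedPass) AS varchar(128));

    SET @sql = N'USE TargetDB;
RESTORE MASTER KEY FROM FILE = ''\\TargetServer\DMKShare\dmk.bak''
DECRYPTION BY PASSWORD = ''' + @pass + '''
ENCRYPTION BY PASSWORD = ''' + @pass + ''';';
    EXEC (@sql);

    -- delete the backup file after a successful restore,
    -- e.g. via xp_cmdshell or an agent job step (omitted here)
END
GO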
For even further security, you can drop the private key on the Production Server after backing it up. That way only the destination server can decrypt the password. The password will also be different and unpredictable every day. The other benefit is that the DMK would be deleted every time and would only exist on the share for the duration of step 2, which should be a matter of seconds.
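Dropping the private key while keeping the certificate's public key is a one-liner:

-- run on the production server after the certificate and key are backed up
ALTER CERTIFICATE DMKEncryptionCertificate REMOVE PRIVATE KEY;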
This can be done successfully, however, I would also ask if it should be done. If this data is so sensitive that it needs to be encrypted, then should it be available outside of your production system? If you decide against it, then you could just drop the certificate and symmetric key in the target database and create new ones with the same name to avoid exceptions. Any call using these would return null. I've included the view below:
/*
generates a random 128-character string from all
valid password characters except single quote
*/
CREATE VIEW dbo.vwGetRandomPass
AS
WITH s1 AS (
    SELECT TOP 32 CHAR(number) AS chr
    FROM master..spt_values
    WHERE number BETWEEN 48 AND 57    -- number characters
    ORDER BY NEWID()
),
s2 AS (
    SELECT TOP 32 CHAR(number) AS chr
    FROM master..spt_values
    WHERE number BETWEEN 65 AND 90    -- upper letters
    ORDER BY NEWID()
),
s3 AS (
    SELECT TOP 32 CHAR(number) AS chr
    FROM master..spt_values
    WHERE number BETWEEN 97 AND 122   -- lower letters
    ORDER BY NEWID()
),
s4 AS (
    SELECT TOP 32 CHAR(number) AS chr
    FROM master..spt_values
    WHERE number BETWEEN 33 AND 38    -- sign characters
       OR number BETWEEN 40 AND 47
       OR number BETWEEN 58 AND 64
       OR number BETWEEN 91 AND 96
       OR number BETWEEN 123 AND 126
    ORDER BY NEWID()
),
final AS (
    SELECT chr FROM s1
    UNION ALL
    SELECT chr FROM s2
    UNION ALL
    SELECT chr FROM s3
    UNION ALL
    SELECT chr FROM s4
)
SELECT pass = (
    SELECT chr AS [text()]
    FROM final
    ORDER BY NEWID()
    FOR XML PATH('')
)
GO
I too am looking for a better way to do this, but what I've found is that you can enable tracing on the client side and then search the trace file that is created.
I have this in my client-side sqlnet.ora file:
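The relevant entries are the client trace parameters; the values below are illustrative, not necessarily what I use verbatim:

# example settings only - adjust level and paths as needed
TRACE_LEVEL_CLIENT = 16                      # 16 (SUPPORT) is the most verbose
TRACE_DIRECTORY_CLIENT = /tmp/oracle_client_trace
TRACE_FILE_CLIENT = client
TRACE_UNIQUE_CLIENT = ON                     # one trace file per connection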
If you're running the client from the same server, you can override TNS_ADMIN to point to a different directory (you may need to copy tnsnames.ora, etc, to that directory as well).
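For example (the directory here is hypothetical):

# point the client at an alternate config directory for this session
export TNS_ADMIN=/home/oracle/tns_trace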