The application I maintain does something similar, in a database- and OS-independent way, for around 100 users, but in a much simpler fashion. Users are authenticated by the application and authorized from the database. Here are the details (a sketch of the tables follows the list):
- a table for users including their email and enabled/disabled status
- a table of all AD groups that have a specific prefix to indicate that they are application groups (e.g. APP_ADMINS, APP_USERS)
- a table with group/user links
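A minimal sketch of what those three tables might look like (the names and columns here are my illustration, not the actual schema):

    -- Oracle-flavoured SQL; all names are illustrative
    CREATE TABLE app_users (
        user_id   NUMBER        PRIMARY KEY,
        username  VARCHAR2(100) NOT NULL UNIQUE, -- matches the AD sAMAccountName
        email     VARCHAR2(255),
        enabled   CHAR(1) DEFAULT 'Y' NOT NULL   -- 'Y' = enabled, 'N' = disabled
    );

    CREATE TABLE app_groups (
        group_id   NUMBER        PRIMARY KEY,
        group_name VARCHAR2(100) NOT NULL UNIQUE -- e.g. APP_ADMINS, APP_USERS
    );

    CREATE TABLE app_group_users (
        group_id NUMBER NOT NULL REFERENCES app_groups,
        user_id  NUMBER NOT NULL REFERENCES app_users,
        PRIMARY KEY (group_id, user_id)
    );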
The next part is synchronizing the information. In our organization, if new users are added or permissions for existing users are changed, IT does the job in the morning. The application typically has a window of low usage overnight (i.e. it is not in use 24 hours a day).
On the application web server we have a Windows service which syncs the groups and users.
On the database, I run a scheduled job at night which truncates the group/user links and recreates them by querying Active Directory directly using DBMS_LDAP.
For your purposes you would want to write a more sophisticated database package that runs three LDAP queries to synchronize groups, users, and group/user links, instead of my truncate-and-reload approach.
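To give a feel for the shape of such a package, here is a minimal DBMS_LDAP sketch of the group query; the host, bind credentials, search base, and filter are all placeholders you would replace with your own, and the users and group-membership queries follow the same pattern:

    DECLARE
      l_session DBMS_LDAP.session;
      l_attrs   DBMS_LDAP.string_collection;
      l_result  DBMS_LDAP.message;
      l_entry   DBMS_LDAP.message;
      l_vals    DBMS_LDAP.string_collection;
      l_retval  PLS_INTEGER;
    BEGIN
      DBMS_LDAP.use_exception := TRUE; -- raise errors rather than return codes

      -- Placeholder domain controller and service account
      l_session := DBMS_LDAP.init('dc.example.com', 389);
      l_retval  := DBMS_LDAP.simple_bind_s(l_session,
                     'CN=svc_app,OU=Service Accounts,DC=example,DC=com',
                     'secret');

      -- Find every group carrying the application prefix
      l_attrs(1) := 'sAMAccountName';
      l_retval := DBMS_LDAP.search_s(
                    ld       => l_session,
                    base     => 'OU=Groups,DC=example,DC=com',
                    scope    => DBMS_LDAP.SCOPE_SUBTREE,
                    filter   => '(&(objectClass=group)(sAMAccountName=APP_*))',
                    attrs    => l_attrs,
                    attronly => 0,
                    res      => l_result);

      l_entry := DBMS_LDAP.first_entry(l_session, l_result);
      WHILE l_entry IS NOT NULL LOOP
        l_vals := DBMS_LDAP.get_values(l_session, l_entry, 'sAMAccountName');
        IF l_vals.COUNT > 0 THEN
          -- A real package would MERGE into the groups table here
          DBMS_OUTPUT.put_line('Found group: ' || l_vals(l_vals.FIRST));
        END IF;
        l_entry := DBMS_LDAP.next_entry(l_session, l_entry);
      END LOOP;

      l_retval := DBMS_LDAP.unbind_s(l_session);
    END;
    /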
I found DBMS_LDAP to be powerful but poorly documented. If the IT department should happen to change the OU that users are in, you don't get a lot of useful information from the Oracle end.
Tim's site was very useful with this article, and here is the code I reused.
Edit: There are two parts to using an application:
- Who are you? (authentication) If you do not supply a name and password listed in Active Directory, you are not allowed past the login page.
- What are you allowed to do? (authorization) Here is where the granularity of permissions has a wide range. We have limited ourselves to a basic set of operations on a portion of a unit of work: Read, Update, Assign other users, Unassign other users. Our permissions are not so granular that they can be worded as "User X has select on this table"; they are better put as "User X can update this portion of the file if they are assigned".
In part this is driven by the application. It creates a pooled connection to the database as required. Users log onto the application but connect to the database as a shared user who has all the permissions required.
To go into a more detailed example:
User X logs in. They have a username and password that matches what is in Active Directory.
After authentication they are taken to the main page. During that time the application queries the database for what permissions this user has and caches them in memory.
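That permissions query is essentially a join across the three tables described earlier; a minimal sketch, again with illustrative names:

    -- Load all effective permissions (group memberships) for one user
    SELECT g.group_name
    FROM   app_users u
           JOIN app_group_users gu ON gu.user_id = u.user_id
           JOIN app_groups g       ON g.group_id = gu.group_id
    WHERE  u.username = :p_username
    AND    u.enabled  = 'Y';

The application then maps group names onto capabilities and holds the result in memory for the session.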
The main page loads each sub work area. Each area asks the cached permissions:
- "Do you have authority to see this area?". If so show the work area.
- " Do you have permission to edit this area?" If so show an edit/create link.
Edit2:
The original poster comments that "we are trying to replicate the user permissions in the application to DB permissions on database objects".
This sounds like a square-peg/round-hole problem to me. Your business logic may be defined as "User X needs select on this table", but if you ask the user, they are working with the application: they need to be able to work with whatever the application is displaying. It is common to group tables, views, and custom filters to show what the user needs in the application. This is why some applications find it easier to enforce authorization at the application level.
What is it about your application that requires database permissions to be replicated in Active Directory?
In terms of getting the data into the SQL Server database, you can use that ODBC connection directly via the "linked servers" feature (see http://msdn.microsoft.com/en-us/library/ms188279.aspx). This would allow you to transfer data directly from the remote Oracle instance down to your local SQL Server one with statements as simple as INSERT <localtable> SELECT <stuff> FROM <server>.<database>.<schema>.<table> WHERE <condition>. I've used this to pull data from both local and remote databases (over VPNs or other tunnels) and found it to be reliable as long as the connectivity between the sites is reliable. It is worth making sure that either your VPN or your ODBC setup is performing stream compression: this will increase latency a little but could greatly improve transfer times, as your data is most likely quite compressible.
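As a concrete sketch (the server, provider, login, and table names here are placeholders, not something from your environment):

    -- Register the Oracle instance as a linked server
    EXEC sp_addlinkedserver
         @server     = N'ORACLE_SRC',
         @srvproduct = N'Oracle',
         @provider   = N'OraOLEDB.Oracle',
         @datasrc    = N'ORCL';           -- TNS alias for the remote instance

    EXEC sp_addlinkedsrvlogin
         @rmtsrvname  = N'ORACLE_SRC',
         @useself     = 'FALSE',
         @rmtuser     = N'app_reader',
         @rmtpassword = N'secret';

    -- Oracle linked servers have no <database> part, so the four-part
    -- name collapses to server..schema.table
    INSERT INTO dbo.LocalOrders (order_id, customer_id, order_date)
    SELECT "ORDER_ID", "CUSTOMER_ID", "ORDER_DATE"
    FROM   ORACLE_SRC..APP_SCHEMA.ORDERS
    WHERE  "ORDER_DATE" >= '2013-01-01';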
Pulling down changes only is going to be something that will depend heavily on the database design.
The best-case scenario is if everything is audited, so you can easily know which updates have happened since you last picked up changes: check the current "last audited action", pull down rows that have been added or changed between that and the last action you recorded last time, merge these rows into your local tables, then record the ID/timestamp of the last audited action for next time. The reason to check the audit ID/timestamp only once at the beginning, rather than per table, is to help consistency: you don't end up with more up-to-date data in the tables you transfer last than in those you check first. Any other audit/history recording format can be used, of course (records holding a "last modified date" and so forth).
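A T-SQL sketch of that pattern, assuming a hypothetical numeric audit ID on the source and a local bookkeeping table (all names are mine for illustration):

    -- One row per synchronized table:
    -- CREATE TABLE dbo.SyncState (table_name SYSNAME PRIMARY KEY,
    --                             last_audit_id BIGINT NOT NULL);

    DECLARE @last_id BIGINT, @current_id BIGINT;

    -- Read both watermarks once, up front, so every table sees the same cut-off
    SELECT @last_id = last_audit_id
    FROM   dbo.SyncState
    WHERE  table_name = 'ORDERS';

    SELECT @current_id = MAX("AUDIT_ID")
    FROM   ORACLE_SRC..APP_SCHEMA.AUDIT_LOG;

    MERGE dbo.LocalOrders AS tgt
    USING (SELECT "ORDER_ID"    AS order_id,
                  "CUSTOMER_ID" AS customer_id,
                  "ORDER_DATE"  AS order_date
           FROM   ORACLE_SRC..APP_SCHEMA.ORDERS
           WHERE  "AUDIT_ID" >  @last_id
           AND    "AUDIT_ID" <= @current_id) AS src
    ON  tgt.order_id = src.order_id
    WHEN MATCHED THEN
        UPDATE SET customer_id = src.customer_id,
                   order_date  = src.order_date
    WHEN NOT MATCHED THEN
        INSERT (order_id, customer_id, order_date)
        VALUES (src.order_id, src.customer_id, src.order_date);

    -- Record the watermark for next time
    UPDATE dbo.SyncState
    SET    last_audit_id = @current_id
    WHERE  table_name = 'ORDERS';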
If the data has no obvious audit structure like this, then things are going to be more convoluted: comparing data in large tables between the two sites is not going to be very efficient. You can reduce the amount of data being transferred by pulling down from each table the primary key column(s) and a hash of all the others, comparing the hashes with those generated locally, then requesting only the rows that don't match locally. You are still going to be transferring something for every row here, though (for a UUID PK and a SHA1 hash that is 36 bytes per row, plus any padding, if transferred in binary form, and hash output is unlikely to be easily compressible, so VPN compression won't help there), and if the table's rows are on average thinner than the hash, you should just transfer the lot, as you'll end up doubling (or worse) the transfer with the initial hashes.
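A sketch of producing comparable row hashes on each side; this assumes Oracle 12c's STANDARD_HASH (older versions would need DBMS_CRYPTO.HASH), and note that getting both sides to hash byte-identical input (delimiters, NULL markers, date formats, character encoding) is the fiddly part:

    -- Oracle side: primary key plus a SHA1 over the remaining columns
    SELECT order_id,
           STANDARD_HASH(
               NVL(TO_CHAR(customer_id), '~') || '|' ||
               NVL(TO_CHAR(order_date, 'YYYY-MM-DD HH24:MI:SS'), '~'),
               'SHA1') AS row_hash
    FROM   app_schema.orders;

    -- SQL Server side: the same concatenation rules, hashed with HASHBYTES
    SELECT order_id,
           HASHBYTES('SHA1',
               ISNULL(CONVERT(VARCHAR(20), customer_id), '~') + '|' +
               ISNULL(CONVERT(VARCHAR(19), order_date, 120), '~')) AS row_hash
    FROM   dbo.LocalOrders;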
You might find you have to use a mix of techniques (updates from audit data, checksums for wide tables with no audit, plain complete transfers for thin tables with no audit) if some of your data is audited and some is not.
Another option you might be able to consider (though it requires cooperation from the other end to implement, and may have cost implications as you'll need a local Oracle license) is to use Oracle's equivalent of SQL Server's log shipping (Data Guard, or manually shipped archived redo logs) to keep an up-to-date copy on a local Oracle instance (and move the data from there to SQL Server or Access, if that is an explicit requirement rather than just a convenience because it is what you already have installed locally). It is unlikely that this is possible as an extra for you, as it may interfere with the database's primary backup strategy, but if a log-shipping equivalent (or differential backups) is already being used for the DB's backup plan, then perhaps you could piggy-back onto the arrangement by having the logs or differential backups sent to you as well as to the backup infrastructure. This way the DBMS handles pushing only the changes to you (though it may push more than the other methods would if the same rows are updated many times in the period covered by a given log segment, as you'll get the log of each change to replay, not just the final result).
What you are essentially doing with any of the above is emulating replication, so if you have any say over the infrastructure, or replication is already in use, then using that may be an option too.
Best Answer
Check the licensing again; these options became free about two years ago.
http://docs.oracle.com/cd/E11882_01/license.112/e47877/options.htm#DBLIC143