SQL Server – Sharing record-tracking information across tables

sql server

I would like to have a common table pattern in my database, i.e. every table should have:

  • Record_Insert_Date
  • Record_LastChange_Date
  • Record_User_Insert
  • Record_User_Update

The obvious approach is to add these columns to each table, but that is not so easy to maintain. I would like to have this information shared across the database in a single table. The schema of this table would be something like the following (a sketch is given after the list):

  • FOREIGN_TABLE_ID
  • FOREIGN_ROW_ID
  • Record_Insert_Date
  • Record_LastChange_Date
  • Record_User_Insert
  • Record_User_Update
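A minimal sketch of that shared table in T-SQL (the table name dbo.RecordTracking and the data types are my assumptions; the column names come from the list above):

    CREATE TABLE dbo.RecordTracking (
        FOREIGN_TABLE_ID       int       NOT NULL,  -- identifies the tracked table
        FOREIGN_ROW_ID         int       NOT NULL,  -- key value of the tracked row
        Record_Insert_Date     datetime2 NOT NULL,
        Record_LastChange_Date datetime2 NULL,
        Record_User_Insert     sysname   NOT NULL,
        Record_User_Update     sysname   NULL,
        CONSTRAINT PK_RecordTracking
            PRIMARY KEY (FOREIGN_TABLE_ID, FOREIGN_ROW_ID)
    );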

The questions are

  • How to join with the other tables (three possible options; a join example follows this list):
    • 1) Use the sys.tables object_id as FOREIGN_TABLE_ID. Can it change? Would it be dangerous in case of a database refactoring? What is the performance of querying sys.tables to retrieve the table name for a dynamic join?
    • 2) Add an extra field to each table in the database holding an integer code that identifies it, i.e. tableA.TABLE_ID = 1, tableB.TABLE_ID = 2, …, tableXX.TABLE_ID = N. With this approach a simple join is enough to match records. It is awkward if a table has a composite key.
    • 3) The reverse of 2): add to each table an FK to the shared table. In this case the shared table has an IDENTITY() key instead of (FOREIGN_TABLE_ID, FOREIGN_ROW_ID).
  • What is the performance of each approach?
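For illustration, a query under option 1 could look like this (dbo.tableA, its ID column, and dbo.RecordTracking are the hypothetical names from above; OBJECT_ID() is the standard T-SQL function for resolving an object_id):

    SELECT a.*,
           t.Record_Insert_Date,
           t.Record_User_Insert,
           t.Record_LastChange_Date,
           t.Record_User_Update
    FROM dbo.tableA AS a
    JOIN dbo.RecordTracking AS t
      ON  t.FOREIGN_TABLE_ID = OBJECT_ID('dbo.tableA')  -- constant for the whole query
      AND t.FOREIGN_ROW_ID   = a.ID;

Note that OBJECT_ID('dbo.tableA') is evaluated once per query, so the cost of looking at sys.tables only matters when the table name itself has to be resolved dynamically.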

Best Answer

I have a couple of questions:

1) Do you need to keep a detailed history of changes, or only who created the record and when, and who last updated it and when?

2) Assuming no detailed history is needed, the main question I have is whether your alternative solution, using a separate tracking table, will be any easier to maintain than adding the columns to every table. You will need to either: a) add code to every query/stored procedure that manipulates data so that it also updates the tracking table; or b) add a trigger to every table that needs to be tracked (a sketch of such a trigger follows).
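A minimal sketch of option b), again using the hypothetical names from above and assuming dbo.tableA has a single-column key ID:

    CREATE TRIGGER dbo.trg_tableA_Tracking
    ON dbo.tableA
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Brand-new rows (not present in "deleted"): create a tracking record
        INSERT INTO dbo.RecordTracking
            (FOREIGN_TABLE_ID, FOREIGN_ROW_ID,
             Record_Insert_Date, Record_User_Insert)
        SELECT OBJECT_ID('dbo.tableA'), i.ID, SYSDATETIME(), SUSER_SNAME()
        FROM inserted AS i
        WHERE NOT EXISTS (SELECT 1 FROM deleted AS d WHERE d.ID = i.ID);

        -- Updated rows: stamp the existing tracking record
        UPDATE t
        SET t.Record_LastChange_Date = SYSDATETIME(),
            t.Record_User_Update     = SUSER_SNAME()
        FROM dbo.RecordTracking AS t
        JOIN inserted AS i ON i.ID = t.FOREIGN_ROW_ID
        JOIN deleted  AS d ON d.ID = i.ID
        WHERE t.FOREIGN_TABLE_ID = OBJECT_ID('dbo.tableA');
    END;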

3) How often do you expect to query this data? My experience with this type of auditing is you only need it when something bad has happened.

My impressions of your options are:

1) I think using the object_id of the tables as your FOREIGN_TABLE_ID is a good idea. Why would it change? Do you expect to drop and recreate tables during deployments? And if you do, you can easily map the old object_id to the newly created one and update the tracking table. You could add this logic to your deployments (a sketch follows).
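That remapping could be a single step in the deployment script, for example (hypothetical names as above; the old object_id must be captured before the table is dropped):

    -- Before dropping/recreating the table:
    DECLARE @old_id int = OBJECT_ID('dbo.tableA');

    -- ... drop and recreate dbo.tableA here ...

    -- After recreation, repoint the tracking rows to the new object_id:
    UPDATE dbo.RecordTracking
    SET FOREIGN_TABLE_ID = OBJECT_ID('dbo.tableA')
    WHERE FOREIGN_TABLE_ID = @old_id;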

2) Wouldn't this negate your reason for not adding the tracking columns into the main tables?

3) See comment for 2), plus I don't think this would work as well as 2).

As far as performance goes, if you use a separate tracking table, I think it would be best to keep it as decoupled from the main tables as possible (option 1).

The biggest potential performance boost I can see from using a tracking table is for querying the data. You will be able to tune the indexes specifically for those queries. Also, all your data will be centralised. If the columns are on the base tables, the additional indexes will probably be an overall disadvantage. BUT!!! Back to my question 3): how often are you going to need to query this data?
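For example, an audit question such as "what did this user change recently?" could be served by an index like this on the hypothetical tracking table:

    CREATE NONCLUSTERED INDEX IX_RecordTracking_UserUpdate
    ON dbo.RecordTracking (Record_User_Update, Record_LastChange_Date)
    INCLUDE (FOREIGN_TABLE_ID, FOREIGN_ROW_ID);

With the columns on the base tables, the same question would need a separate index on every table.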

As far as day-to-day performance goes, the real impact will come down to how you implement updates to the tracking table. In my opinion, changes to the stored procedures would be better than triggers (a sketch follows).
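A minimal sketch of the stored-procedure approach, with the same hypothetical names (dbo.tableA and its Name column are assumptions): each data-modifying procedure gains one extra statement against the tracking table.

    CREATE PROCEDURE dbo.tableA_UpdateName
        @ID   int,
        @Name nvarchar(100)
    AS
    BEGIN
        SET NOCOUNT ON;

        UPDATE dbo.tableA
        SET Name = @Name
        WHERE ID = @ID;

        -- The one extra statement that keeps the tracking table current
        UPDATE dbo.RecordTracking
        SET Record_LastChange_Date = SYSDATETIME(),
            Record_User_Update     = SUSER_SNAME()
        WHERE FOREIGN_TABLE_ID = OBJECT_ID('dbo.tableA')
          AND FOREIGN_ROW_ID   = @ID;
    END;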

I'm sorry if I haven't really answered your question, but I hope my comments help.