I would strongly advise against this route. While it seems advantageous to only have 7 stored procedures rather than 28, it will turn into a troubleshooting, tuning, and maintenance nightmare.
I don't know which DBMS you are using, but this advice should apply to all of them.
From a troubleshooting and usage-analysis standpoint you will have a hard time. Most of the reporting you get from your DMVs doesn't show the parameters users submitted when executing the stored procedure, so you will have to capture them yourself or run traces to see them in real time. This makes it very difficult to determine how these stored procedures are being used and how users are using the system. Are they reading from the database more than inserting or deleting? Without digging deep you won't know. If you break them out into separate stored procedures, you can see which ones are used when and how often.
From a maintainability standpoint you would think that fewer files would be better, but it actually makes things worse. Whenever you want to change something in the insert portion of a stored procedure, you are touching the code that inserts, updates, and deletes, which means all of those paths are added to the regression-testing scope. It also means that when you deploy new stored procedure code, you'll impact everyone using that table, no matter what they are doing.
From a tuning standpoint you are actually working against most DBMS query optimizers. The optimizer looks at the data in the tables involved and tries to determine the best way to retrieve it. The issue here is that the operation is not predictable, because the plan depends on parameter sniffing. The one-stored-procedure-to-rule-the-table strategy also makes a few assumptions, chiefly that the way you select, insert, update, and delete data will always be the same. What happens when that changes? You won't want to keep adding to the procedure, because it won't be easy to navigate. It also becomes hard to decide which indexes to create, since it's extremely difficult to predict how the optimizer will react.
From a usability standpoint, it makes things difficult. When you insert, you need every column passed in as a parameter; when you select, you may only be filtering on the id; when you update, it may be only the column you want to change, and the query has to understand all of this. That normally leads to dynamic SQL, which is its own nightmare.
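For instance, the "select" path of such a procedure typically ends up with a catch-all predicate so that every parameter is optional. A sketch against a hypothetical dbo.widgets table (the table and columns are illustrative, not from the question):

```sql
-- Every filter is optional, which defeats good index selection:
SELECT id, name, created_at
FROM dbo.widgets
WHERE (id   = @id   OR @id   IS NULL)
  AND (name = @name OR @name IS NULL);
```

The optimizer has to produce one plan that works for any combination of NULL parameters, which is rarely a good plan for any of them.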
The long and short of it is that stored procedures are really good at doing very specific things, and most of the time the more specific the better. While this doesn't lead to code reuse, and it means a lot more stored procedures to work with, it makes things much easier to troubleshoot, maintain, and tune as your application grows. It is also more intuitive for the developers who need to use the database.
If you do want to go this route, the only real way to do it would be with IF statements trying to sniff out what the user wants to do based on a parameter:
IF @action = 'insert'
INSERT INTO......
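Fleshed out, that pattern looks roughly like this. This is only a sketch against a hypothetical dbo.widgets table; the parameters and branches are illustrative:

```sql
CREATE PROCEDURE dbo.widget_crud
    @action varchar(10),
    @id     int           = NULL,
    @name   nvarchar(50)  = NULL
AS
BEGIN
    SET NOCOUNT ON;

    IF @action = 'select'
        SELECT id, name FROM dbo.widgets WHERE id = @id;
    ELSE IF @action = 'insert'
        INSERT dbo.widgets (name) VALUES (@name);
    ELSE IF @action = 'update'
        UPDATE dbo.widgets SET name = @name WHERE id = @id;
    ELSE IF @action = 'delete'
        DELETE dbo.widgets WHERE id = @id;
END;
```

Note that every column has to become a nullable parameter, and each new operation means another branch in the same procedure.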
I still would not recommend this though.
You can maintain this column easily:
;WITH x AS
(
  SELECT case_id, modify_time, version,
    rn = ROW_NUMBER() OVER (PARTITION BY case_id ORDER BY modify_date)
  FROM dbo.source_table
)
UPDATE x SET version = rn WHERE version <> rn;
But since you could also generate that version column on the fly every time you need to query (perhaps in a view), I don't see any benefit to storing the output, or even to having a version column in the table at all: you are potentially going to update it more often than you read it, since you'll have to update entire ranges, or even the entire table, every time rows are changed or added.
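A minimal sketch of such a view, reusing the same columns as the update above (the view name is illustrative):

```sql
-- Compute the version number on the fly instead of storing it:
CREATE VIEW dbo.source_table_versioned
AS
SELECT case_id, modify_time,
       version = ROW_NUMBER() OVER
           (PARTITION BY case_id ORDER BY modify_date)
FROM dbo.source_table;
```

Readers query the view and always see a correct, up-to-date version number, with nothing to maintain when rows change.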
If you are moving data from one system to another, or from one table to another, you still don't need this column to exist in the source table, and you don't need a temporary table to store it either:
INSERT dbo.target_table(case_id, modify_time, version)
SELECT case_id, modify_time, version = ROW_NUMBER() OVER
(PARTITION BY case_id ORDER BY modify_date)
FROM dbo.source_table;
Best Answer
TIMESTAMP is easily the worst naming decision Microsoft has made, at least in SQL Server. Not only does the data contain no date or time information, using the name actually violates the ANSI standard. I asked them long ago to deprecate the keyword and always refer to this type as ROWVERSION. But the tools and metadata still show and script out TIMESTAMP even when you explicitly created the table using ROWVERSION. It's horribly misleading.

In any case, there is no way to derive date/time information from this column, or to tell how much time has passed when the column value changes by n. And I'm not sure what recordstamp you're going to base it off of anyway, if you're trying to compare. In an isolated scenario you can do something like this:

But how are you going to do that when it's another session updating the table? And outside of a very controlled environment, you have no idea what's going to happen if this database is backed up and restored elsewhere, fails over, or even when the instance is restarted. The documentation states that the value is increasing, but it doesn't explicitly state that this holds forever, and it also warns against using the column to track date/time in any way.
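As a sketch of the kind of isolated comparison meant above, assuming a hypothetical dbo.some_table with an id key and a recordstamp ROWVERSION column:

```sql
-- Within a single, controlled session you can observe that recordstamp
-- grew after an update, but the delta says nothing about elapsed time.
DECLARE @before bigint, @after bigint;

SELECT @before = CAST(recordstamp AS bigint)
FROM dbo.some_table WHERE id = 1;

UPDATE dbo.some_table SET some_column = 'changed' WHERE id = 1;

SELECT @after = CAST(recordstamp AS bigint)
FROM dbo.some_table WHERE id = 1;

SELECT delta = @after - @before;  -- positive, but not a duration
```

Even here, all you learn is that the row changed, not when.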
If modification date is important to you, do it right: add a datetime2 column to the table and keep it maintained with a trigger.
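A minimal sketch of such a trigger, assuming a hypothetical dbo.some_table with an id key and a modified_at datetime2 column:

```sql
CREATE TRIGGER dbo.trg_some_table_modified
ON dbo.some_table
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Stamp every affected row with the current UTC time;
    -- the inserted pseudo-table holds the new row versions.
    UPDATE t
       SET modified_at = SYSUTCDATETIME()
    FROM dbo.some_table AS t
    INNER JOIN inserted AS i ON i.id = t.id;
END;
```

Unlike rowversion, this gives you a real, queryable point in time that survives restores and failovers.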