I would strongly advise against this route. While it seems advantageous to only have 7 stored procedures rather than 28, it will turn into a troubleshooting, tuning, and maintenance nightmare.
I don't know which DBMS you are using, but the advice below should apply to all of them.
From a troubleshooting and usage-analysis standpoint you will have a hard time. Most reporting you get from your DMVs doesn't show the parameters the user submitted when executing the stored procedure. You are going to have to capture them yourself or run traces to see them in real time. This makes it incredibly difficult to determine how these stored procedures are being used and how the users are using the system. Are they reading from the DB more than inserting or deleting? Without digging deep you won't know. If you break them out into separate stored procedures, you can see which ones are used, when, and how often, as the sketch below shows.
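For example, on SQL Server (assuming that's your DBMS; this view is SQL Server specific), a per-procedure usage breakdown is one query away once each operation is its own procedure:

    -- Sketch: execution counts per stored procedure, straight from the plan cache.
    -- With one mega-procedure per table, every operation collapses into a single row here.
    SELECT OBJECT_NAME(ps.object_id, ps.database_id) AS ProcedureName,
           ps.execution_count,
           ps.last_execution_time
    FROM sys.dm_exec_procedure_stats AS ps
    WHERE ps.database_id = DB_ID()
    ORDER BY ps.execution_count DESC;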
From a maintainability standpoint you would think that fewer files would be better, but it actually makes things worse. Now, whenever you want to change something in the insert portion of a stored procedure, you are touching code that also updates and deletes, which means all of those paths get added to the regression-testing scope. It also means that whenever you deploy new stored procedure code, you'll impact everyone using that table, no matter what they are doing.
From a tuning standpoint you are actually working against most DBMS query optimizers. The optimizer looks at the data in the tables involved and tries to determine the best way to retrieve it. The issue here is that the operation is not predictable, because it relies on parameter sniffing: the plan compiled for the first combination of parameters gets reused for calls that do something entirely different. The one-stored-procedure-to-rule-the-table strategy also makes a few assumptions, mainly that the way you select, insert, update, and delete data will always be the same. What happens when that changes? You won't want to keep bolting logic on, because the procedure will become hard to navigate. It is also hard to decide which indexes to create, since it's extremely difficult to predict how the optimizer will react.
From a usability standpoint, it makes things difficult. When you insert, you need every column passed in as a parameter; when you select, it may be only the id you are filtering on; when you update, it may be only the column you want to change, and the query has to understand all of this. This normally leads to dynamic SQL, which is its own nightmare; the sketch below shows the usual non-dynamic attempt and why it falls short.
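Here, the table and column names are hypothetical; the optional-parameter checks in this catch-all WHERE clause are exactly what the optimizer struggles to plan for, and escaping them is what pushes people toward dynamic SQL:

    -- One query trying to serve every combination of optional parameters.
    SELECT CustomerId, Name, Email
    FROM dbo.Customer
    WHERE (@CustomerId IS NULL OR CustomerId = @CustomerId)
      AND (@Email      IS NULL OR Email      = @Email);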
The long and short of it is that stored procedures are really good at doing very specific things; most of the time, the more specific the better. While this means less code reuse and a lot more stored procedures to work with, it makes things a lot easier to troubleshoot, maintain, and tune as your application grows. It is also more intuitive for developers who need to use the database.
If you do want to go this route, the only real way to do it would be IF statements trying to sniff out what a user wanted to do based on a parameter:
    IF @action = 'insert'
        INSERT INTO ...
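Fleshed out, that pattern ends up looking something like this sketch against a hypothetical dbo.Customer table (every name here is a placeholder):

    CREATE PROCEDURE dbo.Customer_Crud
        @action     varchar(10),         -- 'select', 'insert', 'update', or 'delete'
        @CustomerId int          = NULL,
        @Name       varchar(100) = NULL
    AS
    BEGIN
        SET NOCOUNT ON;

        IF @action = 'select'
            SELECT CustomerId, Name FROM dbo.Customer WHERE CustomerId = @CustomerId;
        ELSE IF @action = 'insert'
            INSERT INTO dbo.Customer (Name) VALUES (@Name);
        ELSE IF @action = 'update'
            UPDATE dbo.Customer SET Name = @Name WHERE CustomerId = @CustomerId;
        ELSE IF @action = 'delete'
            DELETE FROM dbo.Customer WHERE CustomerId = @CustomerId;
    END;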
I still would not recommend this though.
1) How could I have done this differently and more effectively?
This can be handled better by using what I call "code-based" security, instead of regular "Login/User-based" security. The idea is to essentially grant permissions to one or more modules (Stored Procedures, Triggers, most types of Functions, etc.) instead of to Logins or Users. Then you grant EXECUTE permission on the module(s) to whatever User(s) and/or Role(s) are appropriate.
This is a much cleaner approach as it is far more granular/controllable and does not require IMPERSONATE permissions, EXECUTE AS, Cross-Database Ownership Chaining, or TRUSTWORTHY ON. Your existing Logins / Users can only do the very specific things that are coded into those modules, whereas granting them IMPERSONATE opens the door for that permission to be used outside of this intended purpose. It is also more secure than EXECUTE AS, which can likewise be used outside of the intended purpose and will still be in effect if someone changes the definition of the Stored Procedure. In contrast, when you sign a module, the signature is generated from the current definition of that module, including the WITH EXECUTE AS clause of the CREATE statement. Meaning, any change to either the module definition or the principal_id specified for WITH EXECUTE AS will invalidate the signature, and the signature will thus be automatically dropped, resulting in the implied permissions no longer being granted.
You do this by creating either an Asymmetric Key or a Certificate**, then creating a Login and/or User(s), depending on the need, based on that Key or Certificate, and then using that same Key or Certificate to sign the modules that should be able to do things that the users logging in shouldn't be able to do on their own. Finally, you assign the desired permission(s) to that Login and/or User(s) as appropriate.
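As a rough sketch of that sequence (all names, passwords, and the dbo.PurgeSensitiveData procedure are hypothetical):

    -- 1. Create a Certificate in the database containing the module.
    CREATE CERTIFICATE [PermissionCert]
        ENCRYPTION BY PASSWORD = 'SomeStrongPassword!1'
        WITH SUBJECT = 'Carries elevated permissions for signed modules';

    -- 2. Create a User from that Certificate (no Login needed for DB-level permissions).
    CREATE USER [PermissionCertUser] FROM CERTIFICATE [PermissionCert];

    -- 3. Grant the elevated permission to the certificate-based User, not to real Users.
    GRANT DELETE ON dbo.SensitiveTable TO [PermissionCertUser];

    -- 4. Sign the module; it now carries the implied permissions.
    ADD SIGNATURE TO dbo.PurgeSensitiveData
        BY CERTIFICATE [PermissionCert]
        WITH PASSWORD = 'SomeStrongPassword!1';

    -- 5. Real Users / Roles only ever get EXECUTE on the procedure itself.
    GRANT EXECUTE ON dbo.PurgeSensitiveData TO [SomeUser];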
I have provided several examples and explanations across other answers.
2) Have you noticed that instead of writing the procedure with EXECUTE AS I had to add this line?
This is because the EXECUTE AS clause of the CREATE PROCEDURE, CREATE FUNCTION, etc. statements works on Users only, which are local to each DB. Logins are server-level. This will become irrelevant upon implementing what I am suggesting in #1.
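For illustration (assuming a database User named SomeDbUser already exists), note that the clause takes a User name, not a Login name:

    CREATE PROCEDURE dbo.DoSomething
    WITH EXECUTE AS 'SomeDbUser'  -- must be a User in this database; a server-level Login is invalid here
    AS
    BEGIN
        SELECT SUSER_SNAME() AS EffectiveLogin, USER_NAME() AS EffectiveUser;
    END;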
3) MAYBE if both my login and user were domain accounts, would this make a difference?
That shouldn't make a difference. But again, this will become irrelevant upon implementing what I am suggesting in #1.
I don't want to add any database ownership chaining, or TRUSTWORTHY, in order for this to work, unless really necessary.
Neither of these are necessary when using signature-based / code-based security as suggested in #1.
** The main difference between a Certificate (which contains a key) and an Asymmetric Key is how easy it is to duplicate that key into other databases. If the need is very localized to a single database, then an Asymmetric Key is fine. But if you need to manage this across multiple databases, or need a server-level permission, then it is usually easier to use a Certificate, since Certificates allow for backing up their key info into a file that can then be used to re-create that same Certificate in other databases, even on other instances. And starting in SQL Server 2012, you can extract the hex bytes that can then be used to recreate them. For Asymmetric Keys, the only way that I have found to create them with a consistent key across all databases is to embed that key info into an Assembly, load that Assembly into a database, create the Asymmetric Key from the Assembly, and then, if the Assembly is no longer needed, drop it. This is a natural approach when working with SQLCLR since you already have an Assembly and it should be signed (with a password -- a .pfx file instead of an .snk file). But when not working with SQLCLR, just go with a Certificate.
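For reference, copying a Certificate between databases via a file looks roughly like this (database names, paths, and passwords are hypothetical, continuing the earlier sketch):

    USE SourceDB;
    BACKUP CERTIFICATE [PermissionCert]
        TO FILE = 'C:\Temp\PermissionCert.cer'
        WITH PRIVATE KEY (
            FILE = 'C:\Temp\PermissionCert.pvk',
            ENCRYPTION BY PASSWORD = 'FileProtectionPassword!1',
            DECRYPTION BY PASSWORD = 'SomeStrongPassword!1'  -- password the key was created with
        );

    USE TargetDB;
    CREATE CERTIFICATE [PermissionCert]
        FROM FILE = 'C:\Temp\PermissionCert.cer'
        WITH PRIVATE KEY (
            FILE = 'C:\Temp\PermissionCert.pvk',
            DECRYPTION BY PASSWORD = 'FileProtectionPassword!1',
            ENCRYPTION BY PASSWORD = 'SomeStrongPassword!1'
        );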
Best Answer
What you are referring to is deferred name resolution, which is explained further in Deferred Name Resolution and Compilation.
Basically, no, you are not able to alter the parser's rules for which objects must exist at parse and compile time. I've previously run into this when deploying stored procedures that reference linked servers, where those linked servers did not have a valid connection. To work around it, I rewrote the linked-server logic as dynamic SQL, as sketched below.
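A quick illustration of both halves (object and server names are made up): deferred name resolution lets the first procedure compile against a table that doesn't exist yet, while a four-part linked-server reference can fail at creation time unless it is hidden inside dynamic SQL:

    -- Compiles fine even though dbo.DoesNotExistYet is missing; the table
    -- reference is only resolved when the procedure is executed.
    CREATE PROCEDURE dbo.TestDeferredResolution
    AS
    BEGIN
        SELECT * FROM dbo.DoesNotExistYet;
    END;
    GO

    -- A four-part name can fail at CREATE time if the linked server can't be
    -- resolved, so the reference is wrapped in dynamic SQL to defer the check
    -- until the procedure actually runs.
    CREATE PROCEDURE dbo.TestLinkedServer
    AS
    BEGIN
        EXEC('SELECT * FROM [MyLinkedServer].RemoteDb.dbo.SomeTable;');
    END;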