You only specified the password to decrypt the private key file. You also need to specify a password to encrypt the private key with once it is stored in the database:
CREATE CERTIFICATE ERecruitStatsGatheringCert
FROM FILE = 'd:\DavesCert.cer'
WITH PRIVATE KEY (
    FILE = 'd:\DavesCert.pvk',
    DECRYPTION BY PASSWORD = 'S3creT!',
    ENCRYPTION BY PASSWORD = 'S3creT!');
But your background information makes the whole exercise futile: you're doing it wrong. The correct sequence of actions is:
- Create a certificate in the user DB
- Add the signature to the stored procedure(s)
- Drop the private key of the certificate
- Backup/restore that certificate into master (public key only!)
- Create a login in master from the certificate, grant it the permission, etc.
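The sequence above can be sketched in T-SQL. The database, procedure, certificate, and login names, the password, and the file path below are all placeholders:

USE MyDatabase;
-- 1. Create a one-time signing certificate in the user DB
CREATE CERTIFICATE MyProcSigningCert
    ENCRYPTION BY PASSWORD = 'Str0ngP@ssw0rd!'
    WITH SUBJECT = 'Signing cert for dbo.usp_MyProc';

-- 2. Sign the stored procedure(s)
ADD SIGNATURE TO OBJECT::dbo.usp_MyProc
    BY CERTIFICATE MyProcSigningCert
    WITH PASSWORD = 'Str0ngP@ssw0rd!';

-- 3. Drop the private key so the cert can never sign anything else
ALTER CERTIFICATE MyProcSigningCert REMOVE PRIVATE KEY;

-- 4. Copy the certificate (public key only) into master
BACKUP CERTIFICATE MyProcSigningCert TO FILE = 'd:\MyProcSigningCert.cer';
USE master;
CREATE CERTIFICATE MyProcSigningCert FROM FILE = 'd:\MyProcSigningCert.cer';

-- 5. Create a login from the certificate and grant the permission
CREATE LOGIN MyProcSigningLogin FROM CERTIFICATE MyProcSigningCert;
GRANT VIEW SERVER STATE TO MyProcSigningLogin;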
Notice that not only does the private key never leave the database, it is in fact explicitly dropped right after signing the procedure. This is required to prevent further use of this certificate to sign other procedures and thereby abuse the login permissions created at step 5. You repeat these steps on each database, using a different certificate for each database, and you repeat them each time you modify any of the signed procedures, generating a new certificate each time.
As a general rule, any signature/encryption public/private key (RSA) scheme that requires a copy of the private key is broken. That is why it is called private, and that is what gives the signature its value: the knowledge that there exists only one instance of this private key in the world, so anything signed by it is proof that it came from the one and only holder of that unique private key.
I know there are some who shun my recommendation of dropping the private key right after it is used to sign the procedure. I stand by my recommendation, but the important thing for your problem is that it is not required to copy the private key into [master] in order to leverage the code-signing permission you desire. You can (and should) create the login derived from the certificate using just the public key.
Regarding automation: as I see it, it is basically a tool (script, app) that takes two inputs: a securable (procedure, function, module) and a permission (e.g. VIEW SERVER STATE). Everything else is an automated process (create a one-time-use, throw-away cert; sign; export the cert; import the cert into [master]; create the login; grant the permission). The names involved (cert name, login name) can all be generated.
Proliferation of names is a valid concern. An alternative would be to use only one certificate and associated private key, and have the signature tool add the private key when needed (when signing the procedure), then remove it after signing. It is really a matter of how you run your shop and how important the asset you're protecting is. But the important thing is that you do not need to import the private key into [master].
You can use a combination of tools. I'll give you an abridged list of the ones I use.
First, I assume you have database mail configured.
Second, you can trigger a lot of run-of-the-mill alerts using SQL Server Alerts. You can also run a stored procedure from a scheduled job (say, every five minutes or some similar interval) and set the user-settable counters, which in turn trigger SQL Server Alerts. When an alert fires, you can execute a job in addition to notifying operators. If you know WMI, you can use Alerts to get at PerfMon counters.
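As a sketch, a performance-condition alert that notifies an operator can be set up with the msdb stored procedures; the alert name, operator name, and threshold below are made up for illustration:

-- Hypothetical alert: fire when Page life expectancy drops below 300 seconds
EXEC msdb.dbo.sp_add_alert
    @name = N'Low page life expectancy',
    @performance_condition = N'SQLServer:Buffer Manager|Page life expectancy||<|300';

-- Notify an (already configured) operator by e-mail
EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Low page life expectancy',
    @operator_name = N'DBA Team',
    @notification_method = 1;  -- 1 = e-mail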
Third, you can use SQL Server Extended Events, but they are more complicated and require a good understanding of the database engine and hand coding because there is very little in the way of GUI.
Finally, you can just schedule a stored procedure to run on a regular basis, have it collect metrics from the DMVs (dynamic management views), and then send the email directly from the stored procedure.
Specifically for your situation, I might suggest you run sys.dm_os_performance_counters on a regular basis (scheduled job/stored procedure) and cache it in a metadata table, then use that for analysis.
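A minimal sketch of that caching, assuming a scheduled Agent job runs the INSERT; the table name is hypothetical:

-- Snapshot table for counter history
CREATE TABLE dbo.PerfCounterHistory (
    collected_at  datetime2     NOT NULL DEFAULT SYSDATETIME(),
    object_name   nvarchar(128) NOT NULL,
    counter_name  nvarchar(128) NOT NULL,
    instance_name nvarchar(128) NULL,
    cntr_value    bigint        NOT NULL,
    cntr_type     int           NOT NULL
);

-- Run from the scheduled job, e.g. every 5 minutes
INSERT INTO dbo.PerfCounterHistory
    (object_name, counter_name, instance_name, cntr_value, cntr_type)
SELECT RTRIM(object_name), RTRIM(counter_name), RTRIM(instance_name),
       cntr_value, cntr_type
FROM sys.dm_os_performance_counters;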
Also, getting CPU usage information in SQL Server is not the easiest thing in the world, because CPU stats change on a very fine-grained timescale. Try this article for a good overview of how it works. Perhaps look at "free pages" to get your 70% RAM figure.
Best Answer
There are indeed multiple methods.
Query Store
Since you have SQL Server 2016, may I recommend Query Store? You can enable it per database from the "properties" menu in SSMS, or by running something similar to the following script:
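For example (the database name is a placeholder, and the options shown are a reasonable starting point rather than a prescription):

ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;
ALTER DATABASE [YourDatabase] SET QUERY_STORE
    (OPERATION_MODE = READ_WRITE,
     QUERY_CAPTURE_MODE = AUTO,
     MAX_STORAGE_SIZE_MB = 1024);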
Once it is turned on, you can look at query performance details.
Gotchas: fair warning - turning on Query Store will clear the plan cache! As a newer feature, Query Store can have some quirks; I've run across several myself. As for performance, I've seen it used in environments handling 20k batches/sec with only a couple of percent CPU overhead.
DMVs
Another option is to use the DMVs. The one you are looking for in this case is sys.dm_exec_query_stats, specifically the elapsed_time columns; a query ordering by those columns will surface recent long-duration queries. Gotchas to be mindful of with this DMV are that it doesn't capture WITH RECOMPILE queries, and it doesn't capture canceled queries (which is what happens with a query timeout from the application).

SSMS
Finally, if you know the exact query, you can run it in SSMS and look at the actual execution plan. Highlight the select, and look at the properties window on the right. You'll see extremely helpful information about CPU and duration (in milliseconds). Similar information exists for the operators as well.
Gotchas with this method include the fact that queries submitted through SSMS do not necessarily reflect what happens with application queries. Canonical link here.
In short, I highly recommend Query Store.