I dug into this and it looks like the SQL Server team went out of their way to prevent internal table stats from being exposed through DMFs and DMVs.
I can't say I blame them, as the implementation of things like queues is always subject to change, but it doesn't help with your problem. I can see why, in a case like this, it's really difficult to balance exposing metadata against exposing implementation details.
Anyway, let's play around with a few things and see what happens.
If I grab an object_id from sys.internal_tables and plug it into sys.dm_db_index_physical_stats (the obvious thing to do), something interesting happens: on 2005, you get an empty result set and a level 16 error (msg 2561, state 10); on 2008 R2, you get just an empty result set with no error. In contrast, if you put in some random number, on 2005 you get the same error, while on 2008 R2 you now get a different level 16 error (msg 2573, state 40).
This tells me there is some internal checking going on specifically to handle the case of calling this function (and sys.dm_db_index_operational_stats -- probably all of them) with a valid, yet non-user-table object_id. (Note: using the OBJECT_ID function on an internal table name prefixed with sys. does return a result.)
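For reference, the experiment looked roughly like this (the hard-coded object_id below is just a placeholder -- pick any row returned by sys.internal_tables on your instance):

```sql
-- List internal tables (queue backing tables, etc.) and their object_ids
SELECT name, object_id
FROM sys.internal_tables;

-- Feed one of those IDs to the DMF; on 2005 this raises msg 2561,
-- on 2008 R2 it just returns an empty result set with no error
SELECT *
FROM sys.dm_db_index_physical_stats(
    DB_ID(), 1234567890 /* object_id from above */, NULL, NULL, 'DETAILED');
```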
Since DBCC DBREINDEX "solves" the fragmentation problem, I figured I'd try out something else from the same era -- DBCC SHOWCONTIG, the precursor to sys.dm_db_index_physical_stats.
If I feed it a table name from sys.internal_tables (again, prefixed with sys.), it returns nothing -- no error, just a dissatisfying "DBCC execution completed. If DBCC printed error messages, contact your system administrator." -- while giving it a random object_id results in a level 16 error (msg 2501, state 45). This is a pretty clear indication there's an explicit check for this case in the code.
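The SHOWCONTIG attempts looked something like this (the internal table name is only an illustration -- substitute one from your own sys.internal_tables):

```sql
-- Internal table name, prefixed with sys.: no output, no error,
-- just "DBCC execution completed..."
DBCC SHOWCONTIG ('sys.queue_messages_1234567890');

-- Random object_id: msg 2501, level 16
DBCC SHOWCONTIG (987654321);
```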
I tried all this using a normal connection and the DAC, with the same results. For the sake of curiosity, I even tried starting the server in single-user mode and using the DAC, but that didn't change anything.
Hacking into the resource database to dump the definitions of the DMFs yielded nothing except internal-only OPENROWSET syntax, which isn't particularly helpful (kind of interesting, though).
While I was in there, I also dumped out the internal assemblies and used Reflector to look at what's inside, but that didn't yield anything either (though it was likewise interesting).
In any event, I think the only reasonable recourse is doing as the article suggests -- running the deprecated DBCC DBREINDEX from a job as frequently as you need. There just isn't any visibility into these kinds of stats, at least not that I can find, and it appears to me that they have been intentionally hidden.
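A job step for that could be as simple as the following sketch (the internal table name is an example -- take it from sys.internal_tables -- and note DBCC DBREINDEX is deprecated):

```sql
-- Rebuild all indexes on the queue's backing internal table,
-- keeping the existing fill factor (the 0 argument)
DBCC DBREINDEX ('sys.queue_messages_1234567890', '', 0);
```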
Sadly, even if you were okay with writing your own algorithm, the undocumented DBCC IND fails the same way as DBCC SHOWCONTIG, which is really too bad, because it probably would have given you a decent shot. As an academic exercise you could do it with output from DBCC PAGE, but obviously that's nowhere close to a practical (or efficient) solution.
This is an open question with no clear choice. YMMV so you have to test. Here is my opinion:
Having one queue to handle everything is a good choice if you want to be able to control the total number of activated tasks, since there is no global max_queue_readers setting. Other than that, I don't see many advantages. One could argue that a single activated procedure is easier to maintain, but I think you should use some sort of template/code generation when creating SSB activated procedures anyway, since typically much of the code is cookie-cutter and little is specific to the service's business logic.
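For reference, the cap on activated tasks is set per queue in the activation clause -- a sketch with hypothetical queue and procedure names:

```sql
-- MAX_QUEUE_READERS caps concurrent activated tasks for THIS queue only;
-- there is no server-wide equivalent, so one shared queue gives one knob
CREATE QUEUE dbo.SharedQueue
WITH STATUS = ON,
     ACTIVATION (
        PROCEDURE_NAME = dbo.usp_ProcessMessages,  -- hypothetical proc
        MAX_QUEUE_READERS = 10,
        EXECUTE AS OWNER);
```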
However, I can see several potential problems with a single queue, and they're all the kind of problems that manifest only when things are already going bad (i.e., under stress/load):
- latency and processing serialization. With only one queue, it is possible to hit the max_queue_readers ceiling and have messages which require little processing sit in the queue waiting their turn to be picked up. Separate queues allow low-latency services to drain their queues fast, while high-load queues churn at their own (slower) pace.
- a spike of messages for any service affects all services. Same as above, but for spikes. A dump of a large number of messages (e.g. some manual "oops, we need to reprocess these 1MM items") will cause all the other services on the same queue to wait until the spike is drained before they get access to their messages (it's a bit more complex due to conversation group locking, but ultimately processing is FIFO).
- avoidable visits into the ugly land of large queues. If big-queue problems start hitting your RAM/IO/CPU/activation code at N messages, one-queue-for-all-services will reach N faster and more often than one-queue-per-service.
In general my recommendation is to have one-queue-for-each-service.
Best Answer
As you can see in this answer, have a look at which databases have Service Broker enabled (on both the target and source servers).
You can use the following query to check which databases have the broker enabled:
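A minimal version of that check, straight from the sys.databases catalog view (the trustworthy and chaining columns are included since they come up below):

```sql
-- is_broker_enabled = 1 means Service Broker is active for that database
SELECT name, is_broker_enabled, is_trustworthy_on, is_db_chaining_on
FROM sys.databases
ORDER BY name;
```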
I see you mention the security update and patching - well done for applying it!
Depending on the environment, we sometimes turn on cross-database ownership chaining, and sometimes TRUSTWORTHY, instead of module signing, which is the best approach in most cases.
Anyway, if you are not using the broker, please switch off all of these features for each database, i.e. disable the broker, and turn off TRUSTWORTHY and cross-database ownership chaining.
Here is one of my examples of toggling some of these features and renewing the broker ID. You want something like this (provided you are not using any of these features):
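A sketch with a hypothetical database name -- adjust per database, and only run the DISABLE/OFF statements once you're sure nothing depends on these features:

```sql
-- Disable the broker and the trust-related settings (database name is an example)
ALTER DATABASE [MyDb] SET DISABLE_BROKER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [MyDb] SET TRUSTWORTHY OFF;
ALTER DATABASE [MyDb] SET DB_CHAINING OFF;

-- Or, if you do need the broker: re-enable it with a fresh broker ID
ALTER DATABASE [MyDb] SET NEW_BROKER WITH ROLLBACK IMMEDIATE;
```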
And keep monitoring tempdb usage; I don't think you need either to restart the service or to do a failover.
In any case, if the database is in an AlwaysOn Availability Group, you may need to remove it from the availability group before you can disable the broker.