SQL Compilations/sec is a good metric, but only when coupled with Batch Requests/sec. By itself, Compilations/sec doesn't tell you much.
You are seeing 170. If batch requests per sec is only 200 (a little exaggerated for effect) then yes, you need to get to the bottom of the cause (most likely an overuse of ad hoc querying and single-use plans). But if your batch requests per sec is measuring about 5000, then 170 compilations per sec is not bad at all. A general rule of thumb is that Compilations/sec should be 10% or less of total Batch Requests/sec.
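To check that ratio yourself, you can sample the counters from sys.dm_os_performance_counters. A sketch, assuming a 10-second sampling window (both counters are cumulative since instance startup, so you need two samples to get a rate):

```sql
-- Sample the cumulative counters twice, 10 seconds apart, and compute
-- per-second rates plus compilations as a percentage of batch requests.
declare @t1 table (counter_name nvarchar(128), cntr_value bigint);
declare @t2 table (counter_name nvarchar(128), cntr_value bigint);

insert @t1
select rtrim(counter_name), cntr_value
from sys.dm_os_performance_counters
where object_name like '%SQL Statistics%'
  and counter_name in ('Batch Requests/sec', 'SQL Compilations/sec');

waitfor delay '00:00:10';

insert @t2
select rtrim(counter_name), cntr_value
from sys.dm_os_performance_counters
where object_name like '%SQL Statistics%'
  and counter_name in ('Batch Requests/sec', 'SQL Compilations/sec');

select
    (b2.cntr_value - b1.cntr_value) / 10.0 as batch_requests_per_sec,
    (c2.cntr_value - c1.cntr_value) / 10.0 as compilations_per_sec,
    (c2.cntr_value - c1.cntr_value) * 100.0
        / nullif(b2.cntr_value - b1.cntr_value, 0) as compile_pct_of_batches
from @t1 b1
join @t2 b2 on b2.counter_name = b1.counter_name
join @t1 c1 on c1.counter_name = 'SQL Compilations/sec'
join @t2 c2 on c2.counter_name = c1.counter_name
where b1.counter_name = 'Batch Requests/sec';
```

If compile_pct_of_batches comes back consistently above ~10%, that's your cue to dig into the plan cache.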
If you really want to drill down into what's being cached, run the following query that utilizes the appropriate DMVs:
select
    db_name(st.dbid) as database_name,
    cp.bucketid,
    cp.usecounts,
    cp.size_in_bytes,
    cp.objtype,
    st.text
from sys.dm_exec_cached_plans cp
cross apply sys.dm_exec_sql_text(cp.plan_handle) st
To get a count of all single-use plans:
;with PlanCacheCte as
(
    select
        db_name(st.dbid) as database_name,
        cp.bucketid,
        cp.usecounts,
        cp.size_in_bytes,
        cp.objtype,
        st.text
    from sys.dm_exec_cached_plans cp
    cross apply sys.dm_exec_sql_text(cp.plan_handle) st
)
select count(*)
from PlanCacheCte
where usecounts = 1
To get a ratio of how many single-use count plans you have compared to all cached plans:
declare @single_use_counts int, @multi_use_counts int;

;with PlanCacheCte as
(
    select
        db_name(st.dbid) as database_name,
        cp.bucketid,
        cp.usecounts,
        cp.size_in_bytes,
        cp.objtype,
        st.text
    from sys.dm_exec_cached_plans cp
    cross apply sys.dm_exec_sql_text(cp.plan_handle) st
    where cp.cacheobjtype = 'Compiled Plan'
)
select @single_use_counts = count(*)
from PlanCacheCte
where usecounts = 1;

;with PlanCacheCte as
(
    select
        db_name(st.dbid) as database_name,
        cp.bucketid,
        cp.usecounts,
        cp.size_in_bytes,
        cp.objtype,
        st.text
    from sys.dm_exec_cached_plans cp
    cross apply sys.dm_exec_sql_text(cp.plan_handle) st
    where cp.cacheobjtype = 'Compiled Plan'
)
select @multi_use_counts = count(*)
from PlanCacheCte
where usecounts > 1;

select
    @single_use_counts as single_use_counts,
    @multi_use_counts as multi_use_counts,
    @single_use_counts * 1.0 / (@single_use_counts + @multi_use_counts) * 100
        as percent_single_use_counts;
As for durations captured through a SQL Server Trace: duration is not available for the recompile events. It isn't that significant to see the duration or pain that an individual plan compilation causes anyway, as there's not much you can do on a case-by-case basis. The solution is to limit compilations and recompilations through plan reuse (parameterized queries, stored procedures, etc.).
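Alongside plan reuse, one common mitigation for a cache full of single-use plans is the server-level "optimize for ad hoc workloads" option. It doesn't reduce compilations, but it caches only a small plan stub on a query's first execution, so single-use ad hoc plans stop bloating the cache; the full plan is cached on the second execution. A sketch of enabling it:

```sql
-- "optimize for ad hoc workloads" is an advanced option, so advanced
-- options must be visible before it can be set.
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'optimize for ad hoc workloads', 1;
reconfigure;
```

Weigh this against your workload first; if most of your ad hoc queries are in fact re-executed, the extra compile on second execution is a (small) cost.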
I'd love to see a screenshot of the Virtual Memory applet; here's the one from my notebook:
SQL Server does not interact directly with the pagefile. Only the operating system itself is capable of doing that since the O/S holds an exclusive lock on the file while it is using it.
If SQL Server is the only load running on the machine (it should be for all but the simplest installations), and is correctly configured, the page file should be for most purposes irrelevant.
The SQL Server database engine has some very sophisticated memory management techniques, which can be managed by settings such as lock pages in memory, LargePageAllocator (see http://support.microsoft.com/kb/920093 for details), max server memory, min server memory, etc.
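For reference, those memory limits are set through sp_configure. A sketch using the 192GB machine mentioned below as an example (the 180GB figure is my illustrative choice, leaving ~12GB for the O/S; the setting is expressed in MB):

```sql
-- max/min server memory are advanced options; values are in MB.
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'max server memory (MB)', 184320;  -- 180GB
exec sp_configure 'min server memory (MB)', 184320;  -- pin the buffer pool size
reconfigure;
```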
In my opinion, for a machine dedicated to SQL Server, the page file should be set to a low amount, such as 4GB. I set max server memory to an amount close to the amount of physical RAM in the machine (less than 94% of memory, leaving at least 2GB free), and set min server memory to the same amount. I then watch performance counters related to the page file, such as Pages/sec, and SQL Server's Page Life Expectancy (see http://blogs.msdn.com/b/mcsukbi/archive/2013/04/12/sql-server-page-life-expectancy.aspx) to make sure SQL Server is not paging memory to disk. If SQL Server is paging to disk, or Page Life Expectancy is low (this is the average number of seconds a page is retained in the buffer pool), I will request that physical memory be increased in the machine.
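Page Life Expectancy can be read directly from the DMVs rather than Performance Monitor; a quick sketch:

```sql
-- Instance-wide PLE, in seconds. The Buffer Manager object gives the
-- overall value; on NUMA machines, Buffer Node exposes per-node values.
select rtrim(counter_name) as counter_name,
       cntr_value as seconds
from sys.dm_os_performance_counters
where counter_name = 'Page Life Expectancy'
  and object_name like '%Buffer Manager%';
```

There is no single magic threshold; what matters is a sustained low or falling value relative to your own baseline.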
For instance, on the last production machine I worked on, we had 192GB of RAM, and had the page file set to 4GB. The default setting Windows "suggested" would have been 288GB, a clearly ridiculous amount.
For further info on this topic, see:
https://dba.stackexchange.com/a/20492/10832
http://www.brentozar.com/archive/2011/09/sysadmins-guide-microsoft-sql-server-memory/
https://dba.stackexchange.com/a/10201/10832
I see from your comment that the errors are not happening so frequently as to be a performance concern.
To answer the question asked, you could create a filtered XE trace to capture detailed sql_batch_completed and rpc_completed events where the result is Error. The events can then be summarized by a time interval for aggregated stats.
The example below summarizes the CPU and logical reads by one-minute intervals from a trace file target.
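A sketch of such a summarization, assuming the session writes to a file target whose path (C:\Traces\ErrorBatches*.xel here) is a placeholder you'd replace with your own:

```sql
-- Read the .xel files, shred each event's XML payload, and aggregate
-- CPU and logical reads per one-minute interval. For these events,
-- cpu_time is reported in microseconds.
;with ErrorEvents as
(
    select
        x.event_data.value('(event/@timestamp)[1]', 'datetime2') as event_time,
        x.event_data.value('(event/data[@name="cpu_time"]/value)[1]', 'bigint') as cpu_time,
        x.event_data.value('(event/data[@name="logical_reads"]/value)[1]', 'bigint') as logical_reads
    from
    (
        select cast(event_data as xml) as event_data
        from sys.fn_xe_file_target_read_file('C:\Traces\ErrorBatches*.xel', null, null, null)
    ) x
)
select
    dateadd(minute, datediff(minute, 0, event_time), 0) as minute_interval,
    count(*) as error_events,
    sum(cpu_time) as total_cpu_time,
    sum(logical_reads) as total_logical_reads
from ErrorEvents
group by dateadd(minute, datediff(minute, 0, event_time), 0)
order by minute_interval;
```

Shredding the XML is the expensive part, so for large traces consider loading the shredded rows into a table once and aggregating from there.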