In SQL Server there are plenty of DMVs/DMFs and other options that let me get information about query plans that are in the cache, but I can't find any method to get information about all query plans, not just the ones currently cached. Can you please show me how I can explore all query plans, including details such as the number of times each plan has been used, its size, and so on?
SQL Server – How to get information about query plans that are out of cache
execution-plan, sql-server
Related Solutions
"How bad is it?" depends on the degree to which you are suffering now or could suffer with increased workload in the future.
One major point of suffering with plan cache pollution could be too many single use plans bloating your plan cache leading to inefficient cache usage.
Another point of suffering could be high compilations/second - so in an environment with a heavy workload and a lot of activity, there is a cost associated with compiling over and over.
You can see the rate of compilations in perfmon (the SQL Statistics object, SQL Compilations/sec counter). This can look like CPU pressure. To your applications, it can look like increased query duration, since each execution waits on a needless compile.
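If you'd rather check from T-SQL than perfmon, here is a minimal sketch reading the same counters from sys.dm_os_performance_counters. These values are cumulative since instance startup, so sample twice and take the difference to get a per-second rate:

```sql
-- Cumulative compilation counters; diff two samples for a rate.
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'SQL Compilations/sec',
                       N'SQL Re-Compilations/sec',
                       N'Batch Requests/sec');
```

A rough rule of thumb is to compare compilations against batch requests; a high ratio suggests poor plan reuse.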
You can see the memory impact of this cache bloat with the following query, borrowed from Glenn Berry's diagnostic scripts. How big is your CACHESTORE_SQLCP (ad hoc/prepared) plan cache?
SELECT TOP(10) [type] AS [Memory Clerk Type],
SUM(pages_kb)/1024 AS [Memory Usage (MB)]
FROM sys.dm_os_memory_clerks WITH (NOLOCK)
GROUP BY [type]
ORDER BY SUM(pages_kb) DESC OPTION (RECOMPILE);
The query used in the question to identify the number of plans is helpful here as well.
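For reference, that style of query typically looks something like this sketch against sys.dm_exec_cached_plans (the grouping is my own choice, not taken from the question):

```sql
-- How much of the plan cache is held by plans that have only ever run once?
SELECT objtype,
       COUNT(*) AS plan_count,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS total_mb
FROM sys.dm_exec_cached_plans
WHERE usecounts = 1
GROUP BY objtype
ORDER BY total_mb DESC;
```

A large "Adhoc" row with many single-use plans is the classic symptom of cache pollution.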
Is This Ever a Good Thing?
There are some cases where this could be good, but they are rare: essentially, when you are suffering from parameter sniffing gone bad. In a nutshell, when the data can vary widely from execution to execution depending on parameter values, a plan compiled for one set of parameters may be excellent for those values but poor for others. My guess is that this is unlikely to hurt you as badly as the implications of poor plan reuse.
What Can You Do About It?
Optimize For Ad Hoc Workloads can certainly help with the memory implications since only a stub of the plan is stored in cache at first execution, and the full plan isn't stored until it is executed a second time with the same plan.
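Enabling it is a server-wide sp_configure change; a minimal sketch (test in a non-production environment first):

```sql
-- 'optimize for ad hoc workloads' is an advanced option,
-- so advanced options must be visible first.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;
RECONFIGURE;
```

This takes effect immediately, with no restart required, but it only affects plans cached after the change.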
Forced Parameterization could also help here. It makes the optimizer parameterize queries that would not otherwise qualify for simple parameterization, which can address both the cache bloat and the cost of compiling over and over.
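A sketch of the database-level setting (YourDatabase is a placeholder for the affected database):

```sql
-- Force parameterization of ad hoc statements in this database.
ALTER DATABASE YourDatabase SET PARAMETERIZATION FORCED;
-- Revert with:
-- ALTER DATABASE YourDatabase SET PARAMETERIZATION SIMPLE;
```

Be aware that forcing parameterization can reintroduce parameter sniffing problems for skewed data, so test it against your workload.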
Fix The Queries
Ideally, you shouldn't have to resort to these options. Instead, be more disciplined in your database development: encourage plan reuse, consider stored procedures for all of their benefits, and head the problem off that way. Forced parameterization and optimize for ad hoc workloads are good mitigations, but the best solution is always aimed at the root cause.
There is an excellent resource here that talks about some of the dangers of plan cache pollution and some things you can do about it, and I'd recommend reading it. It was written for SQL Server 2012, but the concepts and solutions still apply.
Sadly, backupset does not contain failed backups, and I don't know of anywhere else in msdb these may be stored, unless you can rely on sysjobhistory. That table doesn't cover all of time (depending on your retention settings), ignores any backup attempts made outside the context of a job, and, in the case of a job that backs up many databases, won't tell you which database actually failed unless the failure happened early in the job, because the messaging is quite verbose but gets truncated.
If you absolutely know that Job n only backs up the one database, and that every failure of that job means the database wasn't backed up (the job could also fail after the backup succeeded, e.g. while trying to shrink or perform other maintenance), then you could use a query like this:
DECLARE @job sysname, @db sysname;
SELECT @job = N'Job 1', @db = N'db_name';
SELECT
bs.database_name,
bs.backup_start_date,
bs.backup_finish_date,
[Total Time] = CAST((DATEDIFF(SECOND, bs.backup_start_date,bs.backup_finish_date))
AS varchar(30))+ ' secs',
CAST(bs.backup_size/1024/1024 AS decimal(10,2)) AS 'Backup Size(MB)',
h.[message]
FROM msdb.dbo.sysjobhistory AS h
INNER JOIN msdb.dbo.sysjobs AS j
ON h.job_id = j.job_id
AND h.step_id = 0
LEFT OUTER JOIN msdb.dbo.backupset AS bs
ON bs.database_name = @db
  AND ABS(DATEDIFF(SECOND, bs.backup_start_date,
      CONVERT(datetime, CONVERT(char(8), h.run_date) + ' '
          + STUFF(STUFF(RIGHT('0' + CONVERT(varchar(6), h.run_time), 6),
              3, 0, ':'), 6, 0, ':')))) < 5 -- varchar(6) avoids char() space padding for run times before 10:00:00
WHERE j.name = @job
ORDER BY bs.backup_start_date;
Yes, it's really ugly, because sysjobhistory, even in SQL Server 2014, still stores run_date and run_time as separate integers. I bet whoever made that decision is still pinned to dartboards all over Building 35. The query also assumes that the backup is the very first step in the job, hence the rather unscientific date/time comparison to make sure we've correlated the right instance of the job with the right instance of the backup. Oh, how I wish I could redesign the schema for backups and jobs.
If you want broader scope outside of the job, you can look for failed backups in the SQL Server error log (if they haven't been cycled away):
EXEC sp_readerrorlog 0, 1, 'BACKUP failed'; -- current
EXEC sp_readerrorlog 1, 1, 'BACKUP failed'; -- .1 (previous)
EXEC sp_readerrorlog 2, 1, 'BACKUP failed'; -- .2 (the one before that)
....
(But I don't know of a nice and easy way to incorporate that output into your existing query.)
You can also correlate "missing" successful backups from the default trace, e.g.
DECLARE @path nvarchar(260);
SELECT
@path = REVERSE(SUBSTRING(REVERSE([path]),
CHARINDEX(CHAR(92), REVERSE([path])), 260)) + N'log.trc'
FROM sys.traces
WHERE is_default = 1;
SELECT dt.DatabaseName, dt.StartTime, bs.backup_start_date, bs.backup_finish_date,
[Status] = CASE WHEN bs.backup_start_date IS NULL
THEN 'Probably failed'
ELSE 'Seems like success'
END
FROM sys.fn_trace_gettable(@path, DEFAULT) AS dt
LEFT OUTER JOIN msdb.dbo.backupset AS bs
ON dt.DatabaseName = bs.database_name
AND ABS(DATEDIFF(SECOND, dt.StartTime, bs.backup_start_date)) < 5
WHERE dt.EventClass = 115 -- backup/restore events
AND UPPER(CONVERT(nvarchar(max),dt.TextData)) LIKE N'BACKUP%DATABASE%'
--AND dt.DatabaseName = N'db_name' -- to filter to a single database
--AND bs.database_name = N'db_name'
ORDER BY dt.StartTime;
Of course this also relies on the default trace data not having cycled away, the database name not having changed, and so on. And unfortunately, the default trace doesn't differentiate between successful and failed backups, and the start time will not precisely match the msdb data; but as long as you're not running backups in a tight loop, this should be okay for eyeballing. I've tried to account for these issues in the query.
Finally, you may want to use a FULL OUTER JOIN there, in case backupset has a longer history than the default trace. This changes the semantics of [Status] slightly.
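A sketch of that variant (assuming @path is declared as in the query above). Note that the EventClass and TextData filters have to move from the WHERE clause into the ON clause; left in the WHERE clause, they would discard the NULL-extended rows the FULL OUTER JOIN is meant to surface:

```sql
-- Rows with a NULL trace side are backups recorded in msdb
-- that are no longer visible in the default trace.
SELECT
    COALESCE(dt.DatabaseName, bs.database_name) AS database_name,
    dt.StartTime, bs.backup_start_date, bs.backup_finish_date,
    [Status] = CASE
        WHEN bs.backup_start_date IS NULL THEN 'Probably failed'
        WHEN dt.StartTime IS NULL THEN 'In msdb only (trace cycled?)'
        ELSE 'Seems like success'
    END
FROM sys.fn_trace_gettable(@path, DEFAULT) AS dt
FULL OUTER JOIN msdb.dbo.backupset AS bs
    ON dt.DatabaseName = bs.database_name
   AND ABS(DATEDIFF(SECOND, dt.StartTime, bs.backup_start_date)) < 5
   AND dt.EventClass = 115
   AND UPPER(CONVERT(nvarchar(max), dt.TextData)) LIKE N'BACKUP%DATABASE%'
ORDER BY COALESCE(dt.StartTime, bs.backup_start_date);
```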
You also might want to give this nasty thing a try, though I didn't have much luck with it. I was only able to see the current or most recent status, so it only helped when the job failed the last time it ran, and, like sysjobhistory, it wasn't able to obtain information about any backups that were attempted but not through a job.
Best Answer
As Dan noted, you can use Query Store on SQL Server 2016+.
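Query Store persists plans and runtime statistics to the database itself, so the history survives cache eviction and restarts. A minimal sketch of enabling it and pulling per-plan execution counts (YourDatabase is a placeholder, and the capture/retention options are left at their defaults):

```sql
-- Enable Query Store (SQL Server 2016+).
ALTER DATABASE YourDatabase SET QUERY_STORE = ON;
ALTER DATABASE YourDatabase SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

-- Execution counts per captured plan, whether or not it is still in cache.
SELECT q.query_id,
       p.plan_id,
       SUM(rs.count_executions) AS executions
FROM sys.query_store_query AS q
INNER JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
INNER JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
GROUP BY q.query_id, p.plan_id
ORDER BY executions DESC;
```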
If you're on an earlier version, you can try Open Source Query Store, though I don't have any experience with using it.
As a last resort, you could log sp_BlitzCache output to a table. There are instructions for doing that in the project's README, so I won't repeat them here. Fair warning: I contribute to that open source project.