1 - I can't say for sure; I'd have to go find a server to dig into myself.
2 - Yes, I see this periodically in my environment, though we're not on SQL 2012 yet on the systems that produce it. You may also want to check this post, though State 46 seems to be related to having a specific Database=xxx in the connection string. Does that database still exist?
The way my network is set up, I suspect the issue is the network's automatic closing of TCP sessions after 5 minutes of idle time: neither the database nor the client closes the session, so the connection pool still thinks the connection is open and tries to use it, only to find it's no longer really open. You don't mention how the network between your web servers and the database is configured; maybe your case is similar.
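As a rough server-side check for this (a sketch only; the 5-minute threshold comes from my network, yours may differ), sys.dm_exec_connections shows how long each connection has sat idle:

```sql
-- Look for pooled connections that have been idle longer than the
-- network's TCP idle timeout (5 minutes in my environment).
SELECT conn.session_id,
       conn.connect_time,
       conn.last_read,
       conn.last_write,
       DATEDIFF(MINUTE, conn.last_read, GETDATE()) AS [MinutesSinceLastRead]
FROM sys.dm_exec_connections conn
ORDER BY conn.last_read ASC;
```

Connections showing long idle gaps that later produce errors on first use would be consistent with the network silently dropping the TCP session.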
Another possibility may be the (old, and I'm not sure it was ever really resolved; see http://support.microsoft.com/kb/942861) issue with TCP Chimney Offload settings.
3 - My understanding is that pooling requires exact string matches, so differences in whitespace or parameter order would cause separate pools. (If I'm wrong on that, please let me know.)
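To illustrate (server and database names invented; this is a sketch of how pool matching behaves, assuming it really is a byte-for-byte string comparison), these two connection strings describe the same connection but would land in different pools:

```
Server=MyServer;Database=MyDb;Integrated Security=SSPI;
Database=MyDb; Server=MyServer;Integrated Security=SSPI;
```

The second string reorders the keywords and adds a space, which is enough to make the strings compare as unequal even though they are semantically identical.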
I know this question, based on the title, is mainly concerned with the PREEMPTIVE_OS_DELETESECURITYCONTEXT wait type, but I believe that is a misdirection from the true issue, which is "a customer who was complaining about high CPU usage on their SQL Server".
The reason I believe that focusing on this specific wait type is a wild goose chase is that it goes up for every connection made. I am running the following query on my laptop (meaning I am the only user):
SELECT *
FROM sys.dm_os_wait_stats
WHERE wait_type = N'PREEMPTIVE_OS_DELETESECURITYCONTEXT'
And then I do any of the following and re-run this query:
- open a new query tab
- close the new query tab
- run the following from a DOS prompt:
SQLCMD -E -Q "select 1"
Now, we know that CPU is high, so we should look at what is running to see which sessions have high CPU:
SELECT req.session_id AS [SPID],
       req.blocking_session_id AS [BlockedBy],
       req.logical_reads AS [LogReads],
       DB_NAME(req.database_id) AS [DatabaseName],
       SUBSTRING(txt.[text],
                 (req.statement_start_offset / 2) + 1,
                 CASE
                   WHEN req.statement_end_offset > 0
                     THEN (req.statement_end_offset - req.statement_start_offset) / 2
                   ELSE LEN(txt.[text])
                 END
                ) AS [CurrentStatement],
       txt.[text] AS [CurrentBatch],
       CONVERT(XML, qplan.query_plan) AS [StatementQueryPlan],
       OBJECT_NAME(qplan.objectid, qplan.[dbid]) AS [ObjectName],
       sess.[program_name],
       sess.[host_name],
       sess.nt_user_name,
       sess.total_scheduled_time,
       sess.memory_usage,
       req.*
FROM sys.dm_exec_requests req
INNER JOIN sys.dm_exec_sessions sess
        ON sess.session_id = req.session_id
CROSS APPLY sys.dm_exec_sql_text(req.[sql_handle]) txt
OUTER APPLY sys.dm_exec_text_query_plan(req.plan_handle,
                                        req.statement_start_offset,
                                        req.statement_end_offset) qplan
WHERE req.session_id <> @@SPID
ORDER BY req.logical_reads DESC, req.cpu_time DESC
--ORDER BY req.cpu_time DESC, req.logical_reads DESC
I usually run the above query as-is, but you can also switch which ORDER BY clause is commented out to see if that gives more interesting or helpful results.
Alternatively, you can run the following queries, based on sys.dm_exec_query_stats, to find the highest-cost queries. The first query below shows individual queries (even if they have multiple plans) and is ordered by average CPU time, but you can easily change that to average logical reads. Once you find a query that looks like it is taking a lot of resources, copy its "sql_handle" and "statement_start_offset" into the WHERE clause of the second query below to see the individual plans (there can be more than one). Scroll to the far right; assuming there was an XML plan, it will display as a link (in grid mode) that takes you to the plan viewer when you click it.
Query #1: Get Query Info
;WITH cte AS
(
  SELECT qstat.[sql_handle],
         qstat.statement_start_offset,
         qstat.statement_end_offset,
         COUNT(*) AS [NumberOfPlans],
         SUM(qstat.execution_count) AS [TotalExecutions],
         SUM(qstat.total_worker_time) AS [TotalCPU],
         (SUM(qstat.total_worker_time * 1.0) / SUM(qstat.execution_count)) AS [AvgCPUtime],
         MAX(qstat.max_worker_time) AS [MaxCPU],
         SUM(qstat.total_logical_reads) AS [TotalLogicalReads],
         (SUM(qstat.total_logical_reads * 1.0) / SUM(qstat.execution_count)) AS [AvgLogicalReads],
         MAX(qstat.max_logical_reads) AS [MaxLogicalReads],
         SUM(qstat.total_rows) AS [TotalRows],
         (SUM(qstat.total_rows * 1.0) / SUM(qstat.execution_count)) AS [AvgRows],
         MAX(qstat.max_rows) AS [MaxRows]
  FROM sys.dm_exec_query_stats qstat
  GROUP BY qstat.[sql_handle], qstat.statement_start_offset, qstat.statement_end_offset
)
SELECT cte.*,
       DB_NAME(txt.[dbid]) AS [DatabaseName],
       SUBSTRING(txt.[text],
                 (cte.statement_start_offset / 2) + 1,
                 CASE
                   WHEN cte.statement_end_offset > 0
                     THEN (cte.statement_end_offset - cte.statement_start_offset) / 2
                   ELSE LEN(txt.[text])
                 END
                ) AS [CurrentStatement],
       txt.[text] AS [CurrentBatch]
FROM cte
CROSS APPLY sys.dm_exec_sql_text(cte.[sql_handle]) txt
ORDER BY cte.AvgCPUtime DESC
Query #2: Get Plan Info
SELECT *,
       DB_NAME(qplan.[dbid]) AS [DatabaseName],
       CONVERT(XML, qplan.query_plan) AS [StatementQueryPlan],
       SUBSTRING(txt.[text],
                 (qstat.statement_start_offset / 2) + 1,
                 CASE
                   WHEN qstat.statement_end_offset > 0
                     THEN (qstat.statement_end_offset - qstat.statement_start_offset) / 2
                   ELSE LEN(txt.[text])
                 END
                ) AS [CurrentStatement],
       txt.[text] AS [CurrentBatch]
FROM sys.dm_exec_query_stats qstat
CROSS APPLY sys.dm_exec_sql_text(qstat.[sql_handle]) txt
OUTER APPLY sys.dm_exec_text_query_plan(qstat.plan_handle,
                                        qstat.statement_start_offset,
                                        qstat.statement_end_offset) qplan
-- paste info from Query #1 below
WHERE qstat.[sql_handle] = 0x020000001C70C614D261C85875D4EF3C90BD18D02D62453800....
AND qstat.statement_start_offset = 164
-- paste info from Query #1 above
ORDER BY qstat.total_worker_time DESC
Best Answer
There are restrictions and difficulties with the sql_variant type. See MSDN for details: SQL Variant
As you already suspect, you might also run into problems with casting and converting later on. For examples, see 10 reasons to explicitly convert SQL Server data types.
I would try to avoid sql_variant in this and any other case, if possible.
But your basic problem is not the variant type so much as your table design. Even if you had only integer values in your log tables, I would still think about normalizing them further.
However, after thinking about your problem, I came to another approach that may be worth a try: why not design each log table with the same schema as the original table it tracks?
For example:
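A minimal sketch (the dbo.Customer table and its columns are invented for illustration, since the original question's schema isn't shown): the log table repeats the source table's columns and adds log_type and log_stamp to describe each change:

```sql
-- Hypothetical source table
CREATE TABLE dbo.Customer
(
    CustomerID   int           NOT NULL PRIMARY KEY,
    CustomerName nvarchar(100) NOT NULL,
    CreditLimit  decimal(18,2) NULL
);

-- Log table: same columns as the source, plus change metadata
CREATE TABLE dbo.CustomerLog
(
    CustomerID   int           NOT NULL,
    CustomerName nvarchar(100) NOT NULL,
    CreditLimit  decimal(18,2) NULL,
    log_type     char(1)       NOT NULL, -- e.g. 'U' = update, 'D' = delete
    log_stamp    datetime2     NOT NULL DEFAULT SYSDATETIME()
);
```

Because the logged columns keep their original datatypes, no sql_variant (and no casting) is needed anywhere.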
That way, you could add a new row to the log table each time the original row is changed, copying in the previous version of the row. This gives you a complete history of the rows without any datatype issues.
You could also limit the logging to changes of specific columns. I think you will need triggers to implement this.
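As a hedged sketch of such a trigger (it assumes a hypothetical source table dbo.Customer and a log table dbo.CustomerLog with the same columns plus log_type and log_stamp; all names are invented for illustration), an AFTER UPDATE trigger can copy the previous version of each changed row into the log:

```sql
-- On every UPDATE of dbo.Customer, copy the old row versions
-- (exposed by the "deleted" pseudo-table) into dbo.CustomerLog.
CREATE TRIGGER trg_Customer_Log
ON dbo.Customer
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.CustomerLog
        (CustomerID, CustomerName, CreditLimit, log_type, log_stamp)
    SELECT d.CustomerID, d.CustomerName, d.CreditLimit, 'U', SYSDATETIME()
    FROM deleted d;
END;
```

Restricting the logging to specific columns could be done by wrapping the INSERT in a check on the UPDATE() function inside the trigger body.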
It will give you multiple log tables, but if you keep them consistent regarding log_type and log_stamp, you could still query the entire history of all changes (of tracked tables) by UNIONing them.
I have found another question where this and other possible ways to solve it are explained in pretty good detail: How to store historical records in a history table in SQL Server
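For instance (table and column names invented for illustration; this assumes each log table exposes the same log_type and log_stamp columns), a combined change history could be queried like this:

```sql
-- Hypothetical combined history across two per-table logs
SELECT 'Customer' AS [SourceTable], log_type, log_stamp
FROM dbo.CustomerLog
UNION ALL
SELECT 'Order', log_type, log_stamp
FROM dbo.OrderLog
ORDER BY log_stamp;
```

UNION ALL is used rather than UNION since the rows come from distinct tables and duplicate elimination would only add cost.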