SQL Server – sp_primary_keys_rowset is consuming a lot of CPU

ado.net, performance, sql-server, sql-server-2014

I manage a database server that supports a C#/VB.NET application, and we've been having a lot of CPU usage problems. I am using the following DMV query from Glenn Berry at SQLSkills:

SELECT TOP(50) DB_NAME(t.[dbid]) AS [Database Name], 
REPLACE(REPLACE(LEFT(t.[text], 255), CHAR(10),''), CHAR(13),'') AS [Short Query Text],  
qs.total_worker_time AS [Total Worker Time], qs.min_worker_time AS [Min Worker Time],
qs.total_worker_time/qs.execution_count AS [Avg Worker Time], 
qs.max_worker_time AS [Max Worker Time], 
qs.min_elapsed_time AS [Min Elapsed Time], 
qs.total_elapsed_time/qs.execution_count AS [Avg Elapsed Time], 
qs.max_elapsed_time AS [Max Elapsed Time],
qs.min_logical_reads AS [Min Logical Reads],
qs.total_logical_reads/qs.execution_count AS [Avg Logical Reads],
qs.max_logical_reads AS [Max Logical Reads], 
qs.execution_count AS [Execution Count], qs.creation_time AS [Creation Time]
--,t.[text] AS [Query Text], qp.query_plan AS [Query Plan] -- uncomment these columns if not copying results to Excel
FROM sys.dm_exec_query_stats AS qs WITH (NOLOCK)
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS t 
CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS qp 
ORDER BY qs.total_worker_time DESC OPTION (RECOMPILE);

and by a wide margin, the top 5 queries are all this sys.sp_primary_keys_rowset procedure. It doesn't appear to be covered in SQL Server's documentation, but some research suggests it might have something to do with ADO and the application needing to find out more about a table before running a query. I can't seem to find any good information on this proc.

What causes this, how can I determine what queries are causing it, and most importantly, can we make it run less often? It doesn't look like it's that expensive when it runs, only 100ms on average, but it's running 120+ times a minute.

Best Answer

sys.sp_primary_keys_rowset returns details about the primary key columns of the table identified by the @table_name and @table_schema parameters.
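For example, a manual call against a hypothetical dbo.Person table (the table and schema names below are placeholders, not taken from the question) would look like this:

EXEC sys.sp_primary_keys_rowset
    @table_name   = N'Person',   -- hypothetical table name
    @table_schema = N'dbo';      -- optional; defaults to NULL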

The stored procedure has the following definition:

create procedure sys.sp_primary_keys_rowset
(
    @table_name         sysname,
    @table_schema       sysname = null
)
as
    select
        TABLE_CATALOG   = s_pkv.TABLE_CATALOG,
        TABLE_SCHEMA    = s_pkv.TABLE_SCHEMA,
        TABLE_NAME      = s_pkv.TABLE_NAME,
        COLUMN_NAME     = s_pkv.COLUMN_NAME,
        COLUMN_GUID     = s_pkv.COLUMN_GUID,
        COLUMN_PROPID   = s_pkv.COLUMN_PROPID,
        ORDINAL         = s_pkv.ORDINAL,
        PK_NAME         = s_pkv.PK_NAME
    from
        sys.spt_primary_keys_view s_pkv
    where
        (@table_schema is null and s_pkv.TABLE_NAME = @table_name) or
        s_pkv.object_id = object_id(quotename(@table_schema) + '.' + quotename(@table_name))
    order by 2, 3

Although we can't directly see the source of the sys.spt_primary_keys_view view referenced in the above code, we can indirectly see that the view gathers data from various system objects, including:

  • syssingleobjrefs
  • sysidxstats
  • sysobjvalues
  • sysschobjs
  • syscolpars

This is what part of an "actual" execution plan looks like when I run the stored procedure on my test SQL Server 2016 instance:

[Screenshot: partial actual execution plan for sys.sp_primary_keys_rowset]

As you can see, it's fairly involved. Having said that, I wouldn't expect this stored procedure to negatively affect performance unless it is running against a database containing a very large number of objects.

Glenn's query returns the top 50 rows ordered by total_worker_time, which is the total amount of CPU time, in microseconds, consumed by executions of a plan since it was compiled. It's quite possible this procedure is showing up at the top of the list as a red herring: if the number of executions is extremely high, it can accumulate a lot of CPU time without actually having a seriously negative impact on performance. For example, if the procedure runs for 100 ms at the start of a batch that takes 3 seconds, it's responsible for only 1/30th of the run time. If you change the ORDER BY clause to return results in order of average run time, or perhaps total execution count, you may see something more meaningful.
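For example, you could change only the final line of Glenn's query to either of the following:

ORDER BY qs.total_worker_time/qs.execution_count DESC OPTION (RECOMPILE); -- average CPU per execution

-- or

ORDER BY qs.execution_count DESC OPTION (RECOMPILE); -- most frequently executed plans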

I wrote about how to look at wait stats to see what, if anything, is slowing SQL Server down.
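If you just want a quick, unfiltered look at where the instance is spending its time waiting, a minimal sketch against sys.dm_os_wait_stats is a reasonable starting point (note that a production wait-stats query typically also filters out benign/background wait types, which this deliberately does not):

-- Top waits by total wait time since the last service restart (or stats clear)
SELECT TOP (20)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;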