You can look at sys.dm_exec_query_stats to find the queries that have accumulated the most total_worker_time. Note the ORDER BY must be descending, or you'll get the cheapest queries first:
SELECT TOP 20
    ST.text,
    QS.*
FROM sys.dm_exec_query_stats AS QS
CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) AS ST
ORDER BY QS.total_worker_time DESC;
No, SQL Server does not track individual occurrences of queries, for the same reason you don't want to run Profiler: tracing constantly can have a substantial performance impact on the server, and the last thing Microsoft wants to do is turn on high-cost diagnostics that most people will rarely or never use. Profiler is the last tool you want to use for this anyway, as it is the worst offender. A server-side trace is much lighter weight, but you should also look into Extended Events and perhaps even third-party monitoring tools - I work for a vendor of one such tool and would be happy to give you guidance there.
In the meantime, you can track expensive queries in aggregate using DMVs like sys.dm_exec_procedure_stats and sys.dm_exec_query_stats. Here's a query from Glenn Berry's Diagnostic Queries that gets the top 25 CPU-consuming stored procedures in the current database:
SELECT TOP(25) p.name AS [SP Name], qs.total_worker_time AS [TotalWorkerTime],
qs.total_worker_time/qs.execution_count AS [AvgWorkerTime], qs.execution_count,
ISNULL(qs.execution_count/NULLIF(DATEDIFF(SECOND, qs.cached_time, GETDATE()), 0), 0) AS [Calls/Second],
qs.total_elapsed_time, qs.total_elapsed_time/qs.execution_count
AS [avg_elapsed_time], qs.cached_time
FROM sys.procedures AS p WITH (NOLOCK)
INNER JOIN sys.dm_exec_procedure_stats AS qs WITH (NOLOCK)
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.total_worker_time DESC OPTION (RECOMPILE);
You can do a similar thing with sys.dm_exec_query_stats if you have a lot of ad hoc queries. For example:
SELECT TOP(25) SUBSTRING(t.text, (qs.statement_start_offset/2)+1,
((CASE qs.statement_end_offset
WHEN -1 THEN DATALENGTH(t.text)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1) AS statement_text,
qs.total_worker_time/qs.execution_count AS [AvgWorkerTime], qs.execution_count,
ISNULL(qs.execution_count/NULLIF(DATEDIFF(SECOND, qs.creation_time, GETDATE()), 0), 0) AS [Calls/Second],
qs.total_elapsed_time, qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time]
FROM sys.dm_exec_query_stats AS qs WITH (NOLOCK)
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS t
ORDER BY AvgWorkerTime DESC OPTION (RECOMPILE);
In fact, a similar example can be found in the documentation.
Note that these DMVs do not persist data beyond a service restart and some other events, so you might not have the history you expect.
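If you need history that survives a restart, one common workaround is to snapshot the DMV into a permanent table on a schedule (via a SQL Server Agent job, for example) and compare consecutive snapshots later. The table and column names below are illustrative, not something from the answer above:

```sql
-- Hypothetical snapshot table for persisting query stats across restarts.
CREATE TABLE dbo.QueryStatsSnapshot
(
    capture_time           datetime2     NOT NULL DEFAULT SYSDATETIME(),
    sql_handle             varbinary(64) NOT NULL,
    statement_start_offset int           NOT NULL,
    statement_end_offset   int           NOT NULL,
    execution_count        bigint        NOT NULL,
    total_worker_time      bigint        NOT NULL,
    total_elapsed_time     bigint        NOT NULL
);
GO

-- Run this on a schedule (e.g. from a SQL Server Agent job):
INSERT INTO dbo.QueryStatsSnapshot
    (sql_handle, statement_start_offset, statement_end_offset,
     execution_count, total_worker_time, total_elapsed_time)
SELECT qs.sql_handle, qs.statement_start_offset, qs.statement_end_offset,
       qs.execution_count, qs.total_worker_time, qs.total_elapsed_time
FROM sys.dm_exec_query_stats AS qs;
```

Because the DMV columns are cumulative, you would diff consecutive snapshots (by sql_handle and statement offsets) to get per-interval numbers, keeping in mind that the counters reset whenever a plan leaves the cache.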
Best Answer
Please look at the three main queries in the following answer (of mine):
SQL Server performance: PREEMPTIVE_OS_DELETESECURITYCONTEXT dominant wait type
The first query looks at currently running queries.
The next two queries (labeled "Query #1" and "Query #2") work in combination to find high-cost queries over a period of time.
To get alerts, look into options such as Solarwinds Free WMI Monitor or Spiceworks Network Monitor.
You need monitoring software of some sort. Those were two free options I found for Windows; most of the free ones appear to be for Unix, or are at least Unix-based. You can also look at Splunk and Zabbix. You can use the queries I provided in the linked answer with at least some of those products, and they should handle the "alert on high CPU for X minutes" requirement.
You could also write your own system that polls on a schedule, stores data in SQL Server, ages out old data, calculates % variance over a configurable time frame, and takes action when that condition is met. This is a lot more work since you then have even more code to maintain (but it is still a viable option if the above doesn't work out for you).
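As a rough sketch of what that roll-your-own variance check might look like, the query below compares the most recent polling interval's CPU consumption against the average of all prior intervals. The dbo.CpuSample table and its columns are hypothetical - they stand in for whatever your scheduled collector writes:

```sql
-- Hypothetical table populated by a scheduled polling job, one row per poll:
--   sample_time datetime2, total_worker_time bigint (cumulative CPU, microseconds)
WITH deltas AS
(
    SELECT sample_time,
           total_worker_time
           - LAG(total_worker_time) OVER (ORDER BY sample_time) AS cpu_delta
    FROM dbo.CpuSample
)
SELECT TOP (1)
       d.sample_time,
       d.cpu_delta,
       baseline.avg_delta,
       100.0 * (d.cpu_delta - baseline.avg_delta) / NULLIF(baseline.avg_delta, 0)
           AS pct_variance   -- alert when this exceeds your configured threshold
FROM deltas AS d
CROSS JOIN (SELECT AVG(1.0 * cpu_delta) AS avg_delta FROM deltas) AS baseline
WHERE d.cpu_delta IS NOT NULL
ORDER BY d.sample_time DESC;
```

The LAG turns the cumulative counter into per-interval deltas; the alerting piece would then fire (email, Agent alert, etc.) whenever pct_variance stays above your threshold for the configured number of consecutive polls.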