A performance issue has been brought to my attention: from time to time an application executes a query that takes nearly 3 minutes to complete.
The column involved is of data type `datetime NOT NULL` and is the clustered index key of the table.
The query does not use that index, however; it uses a non-clustered index on another column that merely happens to include this datetime column. The other column is not referenced in the query at all.
Example query:

SELECT MIN(DATEADD(second, DATEPART(second, [tablename].[DateAndTime]),
           DATEADD(minute, DATEDIFF(minute, 0, [tablename].[DateAndTime]), 0))) AS [Mindate],
       MAX(DATEADD(second, DATEPART(second, [tablename].[DateAndTime]),
           DATEADD(minute, DATEDIFF(minute, 0, [tablename].[DateAndTime]), 0))) AS [Maxdate]
FROM [dbo].[tablename] [tablename]
GROUP BY ();
The resulting output looks like this:

Min Date                 Max Date
2015-04-01 00:00:00.000  2015-10-05 11:56:27.000
The only reason I can think of for this behaviour is that the query has to peer inside each row and pick the date apart into its components, so it has no choice but to read every row to do it.
I am hoping there is some way the query could be rewritten so that it simply uses the index to bring back the min and max values of the DateAndTime column, and perhaps applies the formatting last.
Best Answer
Try this approach instead (assuming the point of all the DATEADD/DATEDIFF operations is just to set the millisecond component to 0). The query was adapted to the AdventureWorks database:
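A sketch of the rewritten query, under the assumption that the goal is only to zero out the milliseconds; the AdventureWorks table and column names (Sales.SalesOrderHeader, OrderDate) are taken from the answer's mention of adapting the query to that database. Because stripping milliseconds is a monotonic transformation, applying it to the MIN and MAX values after aggregation gives the same answer as aggregating the transformed column:

```sql
-- Find MIN/MAX first, then strip the milliseconds from just those two values.
SELECT
    DATEADD(second, DATEPART(second, x.MinDate),
        DATEADD(minute, DATEDIFF(minute, 0, x.MinDate), 0)) AS [Mindate],
    DATEADD(second, DATEPART(second, x.MaxDate),
        DATEADD(minute, DATEDIFF(minute, 0, x.MaxDate), 0)) AS [Maxdate]
FROM
(
    -- These bare aggregates can be answered from the ends of an index
    -- on OrderDate, instead of computing an expression for every row.
    SELECT MIN(soh.OrderDate) AS MinDate,
           MAX(soh.OrderDate) AS MaxDate
    FROM Sales.SalesOrderHeader AS soh
) AS x;
```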
The trick is to find the rows before you start messing with the column values by applying all kinds of formatting and date manipulation. This results in a much more efficient plan as long as there is a supporting index on the OrderDate column (and it is no worse than your current query if there is no supporting index). Yes, the operators are still scans, and there are two of them instead of one, but the impact on actual execution is undeniable.