It is very difficult to suggest a solution accurately without being able to run tests or see how the DB is indexed, etc., but I'm going to try anyway.
You ideally need to find a balance. If your query is likely to return a lot of data and run quickly, I would err towards running the query on the main table twice; if it is likely to take a long time and return a relatively small number of rows, I would stick with the temp table approach.
Given the information you have provided in the question, there appears to be no problem with the speed of the select, in which case I would tend to agree with you: the cost of inserting into a temp table and then selecting from it is as much overhead as performing the select query twice. The queries can be simplified as follows:
SELECT COUNT(*) [TotalHolidaysCount],
COUNT(DISTINCT PropertyID) [PropertyCount]
FROM dbo.Holiday
WHERE DepartureDate > GETDATE()
AND Country = 'FR'
SELECT *
FROM ( SELECT *, ROW_NUMBER() OVER (ORDER BY (CASE WHEN StartDate <= '01/Apr/2013' THEN 1 ELSE 0 END) ASC, [Price] ASC) [RowNumber]
       FROM ( SELECT h.*, ROW_NUMBER() OVER (PARTITION BY PropertyID ORDER BY Price ASC) [PropertyRow]
              FROM dbo.Holiday h
              WHERE DepartureDate > GETDATE()
              AND Country = 'FR'
            ) h
       WHERE PropertyRow = 1
     ) h
WHERE [RowNumber] BETWEEN 1 AND 10
If you do find this is a performance hit and you would rather cache the results of the select, I'd be inclined to use table variables instead of temp tables, store only the Holiday ID primary key, and join back to Holiday on the fast indexed join of primary key = primary key, as follows:
DECLARE @Results TABLE (ID INT NOT NULL PRIMARY KEY)
INSERT @Results
SELECT ID
FROM dbo.Holiday
WHERE DepartureDate > GETDATE()
AND Country = 'FR'
SELECT COUNT(*) [TotalHolidaysCount],
COUNT(DISTINCT PropertyID) [PropertyCount]
FROM dbo.Holiday h
INNER JOIN @Results r
ON r.ID = h.ID
SELECT *
FROM ( SELECT *, ROW_NUMBER() OVER (ORDER BY (CASE WHEN StartDate <= '01/Apr/2013' THEN 1 ELSE 0 END) ASC, [Price] ASC) [RowNumber]
       FROM ( SELECT h.*, ROW_NUMBER() OVER (PARTITION BY PropertyID ORDER BY Price ASC) [PropertyRow]
              FROM dbo.Holiday h
              INNER JOIN @Results r
              ON r.ID = h.ID
            ) h
       WHERE PropertyRow = 1
     ) h
WHERE [RowNumber] BETWEEN 1 AND 10
This way you are caching as little data as possible (one column of integers), while still retaining enough data to do a fast indexed search on dbo.Holiday.
It ultimately has to come down to looking at your execution plans, creating proper indexes and testing various approaches to find the one that best suits you.
Read this article from Paul Randal on wait stats. You can find what SQL Server is waiting on using the sys.dm_os_wait_stats DMV (Dynamic Management View). It sounds like it MIGHT be a query plan change between versions, but there are a lot of things it could be; it could be storage too. Wait stats should help. Check whether the MAXDOP setting was different between the versions if possible. You stated the original server died, but I'm not sure if you have the server config saved in a repository for DR purposes. If it's a highly transactional system, parallelism can slow you down. Make sure you know what you are doing before making any changes to MAXDOP.
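As a starting point, a sketch of the kind of wait-stats query you can run (the list of benign wait types to exclude is illustrative, not exhaustive), plus a check of the current MAXDOP setting:

```sql
-- Top waits accumulated since the last service restart (or since the stats were cleared).
-- The excluded wait types are common benign/idle waits; extend the list as needed.
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       signal_wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'WAITFOR',
                        N'BROKER_TO_FLUSH', N'REQUEST_FOR_DEADLOCK_SEARCH',
                        N'XE_TIMER_EVENT', N'CHECKPOINT_QUEUE')
ORDER BY wait_time_ms DESC;

-- Check the instance-level MAXDOP setting to compare against the old server:
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max degree of parallelism';
```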
Also check this article from this thread on SO; it will help you find the worst-performing queries. And for more information, check Brent Ozar's Blitz script.
Hope this helps. Good luck.
Chris
Instead of having a non-clustered index containing all the columns of your table, you would be better off having a clustered index for the table. Find the column(s) that uniquely identifies each row and use that as your clustering key.
I am assuming that you do not already have a clustered index on that table, seeing that it was decided to create a non-clustered index containing all the columns instead.
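A minimal sketch of what that change looks like (the table and column names here are hypothetical; substitute your own unique column(s)):

```sql
-- Hypothetical names: dbo.MyTable / ID. Use your table's actual unique column(s).
-- If the table has no primary key yet, create it as the clustered index:
ALTER TABLE dbo.MyTable
ADD CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED (ID);

-- If the primary key must remain non-clustered for some reason,
-- a standalone clustered index works too:
-- CREATE CLUSTERED INDEX CIX_MyTable_ID ON dbo.MyTable (ID);

-- Then the "all columns" non-clustered index can usually be replaced
-- with narrower indexes keyed on your common query predicates, e.g.:
-- CREATE NONCLUSTERED INDEX IX_MyTable_SomeColumn ON dbo.MyTable (SomeColumn);
```

With a clustered index in place, every non-clustered index implicitly carries the clustering key, so narrow non-clustered indexes can still seek back to the full row without duplicating every column.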