Comparing two queries in SQL Server 2012

Tags: performance, query-performance, sql-server-2012

I am comparing two queries in SQL Server 2012. The goal is to use all of the pertinent information available from the query optimizer when choosing the better query. Both queries produce the same results: the maximum orderid for each customer.

The plan cache and buffer pool were cleared before executing each query, using DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS.
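For reference, a minimal sketch of that cache-clearing step (run only against a test instance, since both commands hurt performance on a busy server):

-- Drop all cached execution plans, then remove clean pages from the buffer pool,
-- so each test run starts with cold compilations and cold reads.
DBCC FREEPROCCACHE;
DBCC DROPCLEANBUFFERS;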

Using the information provided below, which query is the better choice?

-- Query 1 - return the maximum order id for each customer
-- (correlated subquery: for each outer row, find that customer's highest orderid)
SELECT orderid, custid
FROM Sales.Orders AS O1
WHERE orderid = (SELECT MAX(O2.orderid)
                 FROM Sales.Orders AS O2
                 WHERE O2.custid = O1.custid);


-- Query 2 - return the maximum order id for each customer
SELECT MAX(orderid) AS orderid, custid
FROM Sales.Orders AS O1
GROUP BY custid
ORDER BY custid;

STATISTICS TIME

Query 1 STATISTICS TIME: CPU time = 0 ms, elapsed time = 24 ms

Query 2 STATISTICS TIME: CPU time = 0 ms, elapsed time = 23 ms

STATISTICS IO

Query 1 STATISTICS IO: Table 'Orders'. Scan count 1, logical reads 5, physical reads 2, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Query 2 STATISTICS IO: Table 'Orders'. Scan count 1, logical reads 4, physical reads 1, read-ahead reads 8, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
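These figures come from the session-level statistics options; a minimal sketch of how they were captured:

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- ... run Query 1 and Query 2 here; the measurements appear on the Messages tab ...

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;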

Execution Plans

(Screenshots: execution plans for both queries, followed by the SELECT operator properties for Query 1 and for Query 2.)

Conclusions:

Query 1

  1. Batch cost: 48%
  2. Logical reads: 5
  3. Physical reads: 2
  4. Read-ahead reads: 0
  5. CPU time: 0 ms
  6. Elapsed time: 24 ms
  7. Estimated subtree cost: 0.0050276
  8. CompileCPU: 2
  9. CompileMemory: 384
  10. CompileTime: 2

Query 2

  1. Batch cost: 52%
  2. Logical reads: 4
  3. Physical reads: 1
  4. Read-ahead reads: 8
  5. CPU time: 0 ms
  6. Elapsed time: 23 ms
  7. Estimated subtree cost: 0.0054782
  8. CompileCPU: 0
  9. CompileMemory: 192
  10. CompileTime: 0

Personally, even though Query 2 has a higher batch cost according to the graphical plan, I think it is more efficient than Query 1. This is because Query 2 requires fewer logical reads and has a slightly lower elapsed time, and its CompileCPU, CompileMemory, and CompileTime values are all lower. Read-ahead reads, however, are 8 for Query 2 and 0 for Query 1.
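As an aside, runtime metrics like these can also be pulled from the plan cache rather than read off the graphical plan; a minimal sketch using the standard DMVs (assumes VIEW SERVER STATE permission, and that the plans are still cached):

-- Recent cached statements with their accumulated CPU, reads, and XML plans.
-- The compile-time figures live in the root element of the query_plan XML.
SELECT TOP (10)
    qs.execution_count,
    qs.total_worker_time   AS total_cpu_microseconds,
    qs.total_logical_reads,
    qs.total_elapsed_time  AS total_elapsed_microseconds,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.last_execution_time DESC;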

Update 12:03

Clustered Index definition

ALTER TABLE [Sales].[Orders] ADD  CONSTRAINT [PK_Orders] PRIMARY KEY CLUSTERED 
(
    [orderid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO

Non-Clustered Index idx_nc_custid

CREATE NONCLUSTERED INDEX [idx_nc_custid] ON [Sales].[Orders]
(
    [custid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO

Best Answer

I love your careful approach to query tuning and to reviewing options and plans. I wish more developers did this. One caution: always test with a lot of rows; judging by the logical reads, this is a smallish table. Try to generate a sample load and run the queries again. One small issue: your top query does not ask for an ORDER BY, while your bottom query does. You should compare and contrast them each with ordering.

I just quickly created a SalesOrders table with 200,000 sales orders in it (still not huge by any stretch of the imagination) and ran the queries with the ORDER BY in each. I also played with indexes a bit.
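A sketch of one way to build such a test table (the table name, column names, and custid distribution here are hypothetical, chosen only to mirror the question's schema):

-- Build a 200,000-row test table: ~200 orders for each of 1,000 customers.
SELECT n AS orderid,
       (n % 1000) + 1 AS custid
INTO dbo.SalesOrders
FROM (SELECT TOP (200000)
             ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_objects AS a
      CROSS JOIN sys.all_objects AS b) AS nums;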

With no clustered index on OrderID, just a non-clustered index on CustID, the second query outperformed, especially with the ORDER BY included in each. There were twice as many reads for the first query as for the second, and the cost percentages were 67% / 33% between the queries.

With a clustered index on OrderID and a non-clustered index just on CustID, they performed at a similar speed and with the exact same number of reads.
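For reference, the two configurations described above would look roughly like this against the test table sketched earlier (index and constraint names are hypothetical):

-- Configuration 1: heap with only a non-clustered index on custid
CREATE NONCLUSTERED INDEX idx_nc_custid ON dbo.SalesOrders (custid);

-- Configuration 2: additionally cluster the table on orderid via the primary key
ALTER TABLE dbo.SalesOrders
    ADD CONSTRAINT PK_SalesOrders PRIMARY KEY CLUSTERED (orderid);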

So I would suggest you increase the number of rows and do some more testing. But here is my final analysis of your queries:

You may find them behaving more similarly than you realize when you increase the rows, so keep that caveat in mind and test that way.

If all you ever want to return is the maximum OrderID for each customer, and you want to determine that by the OrderID being the greatest OrderID, then the second query is the better choice of the two in my mind: it is a bit simpler and, while ever so slightly more expensive based on subtree cost, it is a quicker and easier statement to decipher. If you intend to add other columns to your result set someday, the first query allows you to do that.
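For instance, a sketch of how the first query's shape extends to extra columns (assuming Sales.Orders also carries an orderdate column, which is not shown in the question):

-- Return the whole latest-order row per customer, not just the key columns.
SELECT orderid, custid, orderdate
FROM Sales.Orders AS O1
WHERE orderid = (SELECT MAX(O2.orderid)
                 FROM Sales.Orders AS O2
                 WHERE O2.custid = O1.custid);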

Updated: One of your comments under your question was:

Please keep in mind, that finding the best query in this question is a means of refining the techniques used for comparing them.

But the best takeaway for doing that: test with more data. Always make sure you have data consistent with production and with expected future production. Query plans start looking different when you give more rows to the tables, so try to keep the distribution what you'd expect in production. And pay attention to things like including the ORDER BY or not; here I don't think it makes a terrible bit of difference in the end, but it is still worth digging into.

Your approach of comparing this level of detail and data is a good one. Subtree costs are mostly arbitrary and meaningless, but they are still worth at least looking at for comparison between edits/changes or even between queries. Looking at the time statistics and the IO is quite important, as is looking at the plan for anything that feels out of place for the size of the data you are working with and what you are trying to do.