Well, you could consider a filtered index. If you're always looking for rows where IsSynchronized = 0, and the number of such rows is relatively small, then instead of those two indexes, consider this:
CREATE NONCLUSTERED INDEX [IX_NotSynchronized]
ON [dbo].[PackageEvents] ([PackageID])
INCLUDE ([EventDate], [EventDescription], [EventID],
[LastSyncDate], [Notes], [UserName], [Version])
WHERE IsSynchronized = 0;
Of course, you may want to make that even smaller, and test to see the difference in impact if the query has to look up the remaining columns (that should be pretty efficient if the number of rows is small). So, assuming PackageID is the clustering key:
CREATE NONCLUSTERED INDEX [IX_NotSynchronized]
ON [dbo].[PackageEvents] ([PackageID])
WHERE IsSynchronized = 0;
The overhead of maintaining this index may very well be worth the space savings compared to a full index, especially if it's only being used to optimize this query (or query pattern, at least).
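For context, the kind of query this narrow filtered index targets would look something like the following (table and column names are taken from the indexes above; the parameter name is a placeholder):

```sql
-- Hypothetical query pattern the filtered index supports:
-- find un-synchronized events for a given package.
-- @PackageID is a placeholder parameter, not from the original post.
SELECT EventID, EventDate, EventDescription,
       LastSyncDate, Notes, UserName, Version
FROM dbo.PackageEvents
WHERE IsSynchronized = 0
  AND PackageID = @PackageID;
```

With the narrower index, the optimizer would seek the filtered index and then look up the INCLUDE-d columns via the clustering key; with the wider version, the index covers the query outright.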
Filtered indexes are not magic, though; JNK brought up some limitations below:
Caveats with filtered indexes: the statistics may not stay up to date without maintenance, and you need to use "standard" values for session settings such as QUOTED_IDENTIFIER and ANSI_NULLS. These are small issues, but if a session has the wrong settings when it inserts into the table, the insert will fail.
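As a sketch of what those settings look like in practice (the required values are the defaults in most modern client libraries, but it's worth verifying; the INSERT values below are placeholders):

```sql
-- DML against a table with a filtered index requires these SET options
-- (among others, e.g. ANSI_PADDING and ANSI_WARNINGS) to be ON;
-- with QUOTED_IDENTIFIER or ANSI_NULLS OFF, the insert fails with an error.
SET QUOTED_IDENTIFIER ON;
SET ANSI_NULLS ON;

INSERT dbo.PackageEvents (PackageID, IsSynchronized)  -- placeholder columns/values
VALUES (42, 0);
```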
Also you'll want to read these posts:
If you don't want to use a filtered index, you can probably test variations of these:
CREATE NONCLUSTERED INDEX [IX_Covering_try1] ON [dbo].[PackageEvents]
([PackageID], IsSynchronized)
INCLUDE ([EventDate], [EventDescription], [EventID],
[LastSyncDate], [Notes], [UserName], [Version]);
CREATE NONCLUSTERED INDEX [IX_Covering_try2] ON [dbo].[PackageEvents]
(IsSynchronized, [PackageID])
INCLUDE ([EventDate], [EventDescription], [EventID],
[LastSyncDate], [Notes], [UserName], [Version]);
(For a long time I thought that putting BIT columns in the index key was wasteful, but Martin Smith demonstrated a case where it worked quite well - worth a try. I can't find the post now.)
Without your full schema, data, query patterns etc. we can only guide you and have you test our suggestions in your environment. We can't say, "Ding! This is the one that will work for you!"
I guess you are comparing the estimated costs of the queries. Those are just estimates, based on (among other things) the estimated number of rows returned by the query - not the actual number of rows.
Your first query estimated that it would return 30 rows and your second query estimated 1000 rows. That is where your difference in query cost comes from.
If you change the queries to fetch only 30 rows, you will see that the estimated number of rows is the same for both queries, and the first query is actually costed a bit higher - at least for me on SQL Server 2014.
Don't use the estimates when comparing performance of queries. Use things like duration, number of reads and size of memory grants instead.
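One simple way to get those runtime numbers in SSMS (a sketch; the query shown is just an example stand-in for the queries you are comparing):

```sql
-- Compare actual work, not estimated cost: logical reads and
-- CPU/elapsed time are reported in the Messages tab per statement.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*) FROM dbo.PackageEvents WHERE IsSynchronized = 0;  -- example query

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

For memory grants, you can look at the actual execution plan's root operator properties, or query sys.dm_exec_query_memory_grants while the query runs.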
I would use extended events for this, to break down the wait stats per query. You can then see how much of the time was spent waiting on ASYNC_NETWORK_IO versus everything else. There are a lot of wait types; most of them represent server-side work, but not all, so make sure you review all the other significant waits, and also the time it took to compile the query plan.
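A minimal sketch of such a session (the session_id filter and the file path are assumptions; adjust both for your environment):

```sql
-- Capture wait_info events for one session so waits can be
-- attributed to the query running there. 53 is a placeholder SPID.
CREATE EVENT SESSION [PerQueryWaits] ON SERVER
ADD EVENT sqlos.wait_info (
    ACTION (sqlserver.session_id, sqlserver.sql_text)
    WHERE sqlserver.session_id = 53
)
ADD TARGET package0.event_file (SET filename = N'C:\Temp\PerQueryWaits.xel');

ALTER EVENT SESSION [PerQueryWaits] ON SERVER STATE = START;
-- ... run the query under test, then:
ALTER EVENT SESSION [PerQueryWaits] ON SERVER STATE = STOP;
-- Inspect the .xel file, grouping by wait_type, to see where the time went.
```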
I'd recommend downloading a trial of SQL Sentry Plan Explorer and putting your query in there. It will tell you right away how long it took; you'll want to do this from the client machine.
This is the link to SQL Sentry Plan Explorer. It's super easy to use and doesn't require any coding. Simply install the free product on the client machine you want to test from, activate the free pro mode for two weeks (I would recommend buying this tool; it's great), then look at your wait stats.
This one will provide you with the wait stats per query (From Paul's blog):
This script from Paul Randal will provide you with all the wait stats over a period of time:
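The script itself isn't reproduced here, but the underlying idea is to query sys.dm_os_wait_stats, filter out benign wait types, and compare snapshots over time. A much-simplified sketch:

```sql
-- Much-simplified version of the idea: Paul's actual script filters
-- many more benign wait types and computes percentages and averages.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TO_FLUSH', N'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;
```

Note these counters are cumulative since the last restart (or since DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)), which is why snapshotting before and after a workload is the useful pattern.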