Why the optimiser doesn't go for your first index:
CREATE NONCLUSTERED INDEX [CommonQueryIndex] ON [dbo].[Heartbeats]
(
[DateEntered] ASC,
[DeviceID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
It is a matter of the selectivity of the [DateEntered] column.
You told us that your table has 44 million rows. The row size is: 4 bytes for the ID, 4 bytes for the DeviceID, 8 bytes for the date, and 1 byte for the 4 bit columns. That's 17 bytes, plus 7 bytes of row overhead (status tags, NULL bitmap, variable-length column offsets, column count), for a total of 24 bytes per row.
That roughly translates to 140k pages to store those 44 million rows.
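As a quick sanity check on that number (assuming a standard 8 KB page with about 8,096 bytes usable after the page header, plus a 2-byte slot array entry per row):

8,096 bytes / (24 + 2) bytes per row ≈ 311 rows per page
44,000,000 rows / 311 rows per page ≈ 141,000 pages

which lines up with the ~140k figure above.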
Now the optimiser can do two things:
- It could scan the table (a clustered index scan).
- It could use your index; for every row it finds in the index, it would then need to do a bookmark lookup in the clustered index.
Now at a certain point it just becomes more expensive to do all those single lookups in the clustered index for every entry found in your nonclustered index. The tipping point is generally reached when the total number of lookups exceeds 25% to 33% of the total table page count.
So in this case: 140k × 25% = 35,000 rows, and 140k × 1/3 ≈ 46,666 rows.
(@RBarryYoung, 35k is about 0.08% of the total rows and 46,666 is about 0.11%, so I think that is where the confusion was.)
So if your WHERE clause matches more than somewhere between 35,000 and 46,666 rows (before the TOP clause is applied!), it's very likely that your nonclustered index will not be used and a clustered index scan will be chosen instead.
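If you want to see which side of that tipping point you are on, a quick count of what the WHERE clause matches (ignoring the TOP) will tell you; the date range here is just a made-up example:

SELECT COUNT(*)
FROM [dbo].[Heartbeats]
WHERE [DateEntered] >= '20120101' AND [DateEntered] < '20120102';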
The only two ways to change this are:
- Make your WHERE clause more selective (if possible).
- Drop the * and select only a few columns, so you can use a covering index (see the sketch below).
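For example, if the query only needs the date, the device and one of the bit columns, a covering index could look like this (a sketch; [SomeFlag] is a hypothetical name standing in for one of your bit columns):

CREATE NONCLUSTERED INDEX [CoveringQueryIndex] ON [dbo].[Heartbeats]
(
[DateEntered] ASC,
[DeviceID] ASC
)
INCLUDE ([SomeFlag])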
Now, sure, you could create a covering index even when you use SELECT *. However, that just creates massive overhead for your inserts/updates/deletes. We would have to know more about your workload (read vs. write) to tell whether that's the best solution.
Changing from datetime to smalldatetime gives a 16% reduction in size on the clustered index and a 24% reduction in size on your nonclustered index.
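If you go that route, keep in mind smalldatetime only has one-minute precision and tops out at 2079-06-06. The column change itself is a one-liner, but any index that contains [DateEntered] has to be dropped first and recreated afterwards (a sketch, assuming the column is NOT NULL and is not referenced by other constraints):

DROP INDEX [CommonQueryIndex] ON [dbo].[Heartbeats];
ALTER TABLE [dbo].[Heartbeats] ALTER COLUMN [DateEntered] smalldatetime NOT NULL;
-- then recreate the index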
You cannot use WITH (NOLOCK) on a table-valued function, unless you put it on every single table in the code inside the function. Your best bet would be, like you said, to SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED.
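For illustration, putting the hint inside the function would look something like this (a sketch; the function name, parameter and column list are hypothetical):

CREATE FUNCTION dbo.fn_RecentHeartbeats (@Since datetime)
RETURNS TABLE
AS
RETURN
(
SELECT hb.DeviceID, hb.DateEntered
FROM dbo.Heartbeats AS hb WITH (NOLOCK)  -- the hint must be repeated on every table inside the function
WHERE hb.DateEntered >= @Since
);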
To change this back to the default, you need to find out what isolation level was in effect before you changed it above. You can find that by running DBCC USEROPTIONS and checking the value for isolation level.
The default tends to be READ COMMITTED, so to change it back to that, you'll want a statement like the following:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
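Putting it together, a typical session would look something like this (a sketch; dbo.fn_RecentHeartbeats is the hypothetical function from above):

DBCC USEROPTIONS;  -- note the current 'isolation level' row before changing anything
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.fn_RecentHeartbeats('20120101');
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- restore the default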
Best Answer
In practice, yes (although not documented as such), but also useless. If your data never changes, then using dirty reads won't cause any observable difference in behavior, nor any material performance difference.
No. Absolutely not. The query plan to read the target rows may read changing data structures without proper concurrency controls. So you are back to hoping that your queries don't return bizarre, incorrect results or simply fail.