First of all, as you are getting sub-second performance how much time do you want to spend on this? If you spend an hour on it, and you tune it to half a second, how long will it take to pay for itself? If this query runs several times a second, then I'm with you, let's tune it. Otherwise it's not really worth it.
Having said that, there are a number of warning signs in your query and plan that need closer attention, if you have time. This is a strange-looking query - is it dynamically generated?
1) Rowcount Discrepancy
The dbo.IGroupes table is showing Estimated Rows (7) * Estimated Executions (100.99), so ~707 rows, versus 630,346 Actual Rows. Rowcounts that are off by more than an order of magnitude tend to be a warning sign. So why do you think this is? What is the relationship between the dbo.Documents table and the dbo.IGroupes table? Should you be getting a distinct list from here instead? This also means the estimated cost for this operator is way off. There are other rowcount discrepancies too, and Plan Explorer does a good job of showing them.
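When estimates are that far off, the first thing I'd check is the statistics on the tables involved. A quick sketch (table names taken from your query; run on a test system first):

```sql
-- Refresh statistics so the optimizer has current rowcount/density information
UPDATE STATISTICS dbo.IGroupes WITH FULLSCAN;
UPDATE STATISTICS dbo.Documents WITH FULLSCAN;

-- Check when each table's statistics were last updated
SELECT OBJECT_NAME(s.object_id) AS TableName,
       s.name                   AS StatName,
       STATS_DATE(s.object_id, s.stats_id) AS LastUpdated
FROM sys.stats AS s
WHERE s.object_id IN (OBJECT_ID('dbo.IGroupes'), OBJECT_ID('dbo.Documents'));
```

If the stats are current and the estimate is still ~707 vs 630k actual, the skew is more likely structural (the join multiplying rows), which is why the distinct-list question matters.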
2) Optimizer Timeout
If you click the SELECT operator in the plan and look at the properties, you will see the property "Reason for Early Termination of Statement Optimization" marked as "Time Out". This means the optimizer ran out of "time" (more like iterations than time) and returned the lowest estimated cost plan it had found so far. This is generally an indication that the query is too complicated, and the workaround is to simplify it. Looking at your query, you have the dummy 1=1 clause and a number of other OR clauses - can these be simplified?
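If you want to see whether other queries on the instance hit the same wall, you can search the plan cache for timed-out plans. A sketch using the plan XML (requires VIEW SERVER STATE permission):

```sql
-- Find cached plans where the optimizer terminated early with a timeout
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (10)
       cp.usecounts,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE qp.query_plan.exist('//StmtSimple[@StatementOptmEarlyAbortReason="TimeOut"]') = 1;
```

Note this scans the whole plan cache with an XQuery predicate, so it can be slow on a busy server - another reason to keep it to a test box.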
A few other points: you should be aware that having the "Include Actual Execution Plan" option on negatively affects the values from SET STATISTICS TIME ON. This is the Observer Effect. Are your values from a run with the plan on or off?
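If you want clean timings, capture them with the plan option switched off. A minimal sketch:

```sql
-- Capture timings with "Include Actual Execution Plan" turned OFF
-- to avoid the observer effect mentioned above
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- ... run the query under test here ...

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

Compare the CPU time and elapsed time from the Messages tab across runs; the first run will include compile time, so discard it or note it separately.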
One of my normal approaches to query tuning is to break it up. This helps identify true bottlenecks. It may be that later on I put the query back together. Here's one example, but it's worth noting in my test rig, there's no real performance difference between the original query and this one:
IF OBJECT_ID('tempdb..#tmp') IS NOT NULL DROP TABLE #tmp
SELECT
IDENTITY( INT, 1, 1 ) rn
, CAST( d.DocumentID AS INT ) DocumentID
, d.IsReEfiled
, d.IGroupID
, d.ITypeID
, d.RecordingDateTime
, d.CreatedByAccountID
, d.JurisdictionID
, d.LastStatusChangedDateTime
, d.IDate
, d.InstrumentID
, d.DocumentStatusID
, d.DocumentDate
, d.UserId
INTO #tmp
FROM Documents d
WHERE ( d.DocumentStatusID = 9 )
AND ( d.CreatedByAccountID = @0 OR d.DocumentStatusID = @1 OR d.DocumentStatusID = @2 )
AND ( d.IGroupID = @3 OR d.IGroupID = @4 )
AND ( d.JurisdictionID = @5 OR d.DocumentStatusID = @6 OR d.DocumentStatusID = @7 )
ORDER BY d.LastStatusChangedDateTime DESC
SELECT
t.DocumentID
, t.IsReEfiled
, t.IGroupID
, t.ITypeID
, t.RecordingDateTime
, t.CreatedByAccountID
, t.JurisdictionID
, t.LastStatusChangedDateTime
, t.IDate
, t.InstrumentID
, t.DocumentStatusID
, ig.Abbreviation AS IGroupAbbreviation
, u.Username
, j.JDAbbreviation
, inf.DocumentName
, it.Abbreviation AS ITypeAbbreviation
, t.DocumentDate
, it.Abbreviation AS DocumentStatusAbbreviation -- note: duplicates the ITypes abbreviation; possibly intended to come from DocumentStatuses
, ds.Name AS DocumentStatusName
, ( SELECT COUNT(*) FROM #tmp ) AS TotalRecords
FROM #tmp t
INNER JOIN dbo.ITypes it ON t.ITypeID = it.ITypeID
INNER JOIN dbo.Users u ON t.UserID = u.UserID
INNER JOIN dbo.IGroupes ig ON t.IGroupID = ig.IGroupID
INNER JOIN dbo.DocumentStatuses ds ON t.DocumentStatusID = ds.DocumentStatusID
INNER JOIN dbo.Jurisdictions j ON t.JurisdictionID = j.JurisdictionID
LEFT JOIN dbo.InstrumentFiles inf ON t.DocumentID = inf.DocumentID
WHERE rn > @8 AND rn <= @9
ORDER BY rn
GO
If you want to carry on, please provide the schema and indexes from your tables, plus either the stats or an idea of the rowcounts.
Decimal and whole numbers are encoded very differently in varbinary. Decimals need more space. Try:
SELECT CONVERT(VARBINARY(32), 1), CONVERT(VARBINARY(32), 1.0);
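To see the size difference directly, DATALENGTH works on the converted values. A quick sketch (the exact decimal byte count depends on the precision SQL Server infers for the literal):

```sql
-- Compare the varbinary footprint of a whole number vs a decimal literal
SELECT
    DATALENGTH(CONVERT(VARBINARY(32), 1))   AS IntBytes,     -- int: 4 bytes
    DATALENGTH(CONVERT(VARBINARY(32), 1.0)) AS DecimalBytes; -- numeric: precision/scale/sign header plus the value
```

The decimal form carries metadata (precision, scale, sign) in the binary representation, which is why it needs more space than the plain int.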
As for your ultimate goal, storing whole numbers as varbinary to save space, I think you've answered that question yourself - not worth it.
Best Answer
No - it is NOT your SQL!
To answer this, I built a small test table, populated it with sample data, ran the query in question, and got exactly the result you'd expect!

Quite why one would want to insert the string literal 'NULL' is beyond me. Hmm... Developers: "three-valued logic, my head is melted - I'll just stick in the string and we can do an OR!". Or was this system in Oracle at one point, where an empty string and a NULL string are the same thing?

So, whatever your problem is, it is not the SQL shown above causing it! Check the SQLFiddle here (note: using MS SQL Server 2014!).
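The distinction is easy to demonstrate. A minimal sketch with a throwaway table variable:

```sql
-- The string 'NULL' is ordinary text; only a real NULL behaves as "unknown"
DECLARE @t TABLE (val VARCHAR(10));
INSERT INTO @t (val) VALUES ('NULL'), (NULL), ('abc');

SELECT COUNT(*) AS RealNulls   FROM @t WHERE val IS NULL;   -- 1 (the actual NULL)
SELECT COUNT(*) AS StringNulls FROM @t WHERE val = 'NULL';  -- 1 (the literal string only)
```

This is why code that has to cope with both ends up with predicates like `val IS NULL OR val = 'NULL'`.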
As for your point about it being a version bug - I find it hard to believe that such basic functionality would be buggy in SQL Server 2012. A bug like that would have broken loads of legacy code!
My hunch is that the problem lies somewhere in the XML part of the code and the bits we can't see. I would set up logging of all queries (on a test system) and check the SQL text that is actually being sent to the server.
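On a test system, an Extended Events session is one way to capture exactly what arrives at the server. A sketch (the session name is made up; adjust the file target path to suit):

```sql
-- Capture the text of ad hoc batches and RPC calls as they complete
CREATE EVENT SESSION CaptureSqlText ON SERVER
ADD EVENT sqlserver.sql_batch_completed (ACTION (sqlserver.sql_text)),
ADD EVENT sqlserver.rpc_completed       (ACTION (sqlserver.sql_text))
ADD TARGET package0.event_file (SET filename = N'CaptureSqlText');
GO
ALTER EVENT SESSION CaptureSqlText ON SERVER STATE = START;
```

Once the problem reproduces, read the captured events (e.g. via sys.fn_xe_file_target_read_file or the SSMS "Watch Live Data" viewer) and compare the SQL text against what you think the application is sending - any discrepancy in how the XML or NULLs are serialized should show up there.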