The problem is in the expression:
ActualCommission =
CASE
WHEN ISNULL(sb.CommissionPercent, 0) = 0
THEN CAST(((sm.MarkupPercent - 100) / sm.MarkupPercent) * 100 AS decimal(18, 2))
ELSE ROUND(ISNULL(sb.CommissionPercent, 0) - (100.0 - sm.MarkupPercent), 2)
END
A divide-by-zero error could occur for any row where CommissionPercent is NULL or 0 and MarkupPercent = 0. Whether the error actually occurs in practice depends on the order of evaluation at runtime, which in turn depends on the shape of the execution plan and the detailed behaviours of the individual plan operators.
For example, some plan shapes might physically eliminate the problematic rows before the expression is evaluated. Others might defer evaluation of the expression until the result is required by another operator later in the plan. There are many possibilities and subtleties here, leading to the apparently 'strange' behaviours you observe.
In general, SQL Server makes no guarantees about the order of evaluation of scalar expressions, or the number of times a particular expression might be evaluated at runtime. The exception to this is the CASE expression, which does provide guarantees about the order of evaluation in most circumstances. From the documentation:
The CASE statement evaluates its conditions sequentially and stops
with the first condition whose condition is satisfied. In some
situations, an expression is evaluated before a CASE statement
receives the results of the expression as its input. Errors in
evaluating these expressions are possible. Aggregate expressions that
appear in WHEN arguments to a CASE statement are evaluated first, then
provided to the CASE statement.
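That aggregate caveat is worth illustrating. In the following sketch (illustrative data, not from the question), the aggregates are computed during aggregation, before the CASE expression receives them, so the WHEN guard does not prevent the divide-by-zero:

```sql
-- Both aggregates are evaluated before the CASE expression runs, so this
-- can still raise error 8134 (divide by zero) even though
-- MIN(t.divisor) = 0 appears to guard the division.
SELECT
    CASE
        WHEN MIN(t.divisor) = 0 THEN NULL
        ELSE SUM(t.amount / t.divisor)
    END AS guarded_sum
FROM (VALUES (10, 2), (10, 0)) AS t (amount, divisor);
```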
The way to protect against the error is to modify the CASE expression so the divide-by-zero is explicitly handled by an earlier WHEN clause. How you choose to account for rows with CommissionPercent = 0 and MarkupPercent = 0 is something only you have the information to decide. That said, the behaviour with ARITHABORT set to OFF is to return a NULL. The following change to the CASE expression produces that result with ARITHABORT set to the highly-recommended ON:
ActualCommission =
CASE
-- New
WHEN (sb.CommissionPercent = 0 OR sb.CommissionPercent IS NULL) AND sm.MarkupPercent = 0
THEN NULL
WHEN ISNULL(sb.CommissionPercent, 0) = 0
THEN CAST(((sm.MarkupPercent - 100) / sm.MarkupPercent) * 100 AS decimal(18, 2))
ELSE ROUND(ISNULL(sb.CommissionPercent, 0) - (100.0 - sm.MarkupPercent), 2)
END
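Alternatively, the division itself can be made safe with NULLIF: NULLIF(sm.MarkupPercent, 0) yields NULL when MarkupPercent is zero, and dividing by NULL produces NULL rather than an error. A sketch of that variant:

```sql
ActualCommission =
    CASE
        WHEN ISNULL(sb.CommissionPercent, 0) = 0
        -- NULLIF returns NULL when MarkupPercent = 0, so the division
        -- yields NULL instead of raising a divide-by-zero error.
        THEN CAST(((sm.MarkupPercent - 100) / NULLIF(sm.MarkupPercent, 0)) * 100 AS decimal(18, 2))
        ELSE ROUND(ISNULL(sb.CommissionPercent, 0) - (100.0 - sm.MarkupPercent), 2)
    END
```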
Explicitly accounting for the possibility of a divide-by-zero error is very much better than relying on the ARITHABORT setting. The SET ARITHABORT documentation contains the following statement:
You should always set ARITHABORT to ON in your logon sessions. Setting
ARITHABORT to OFF can negatively impact query optimization leading to
performance issues.
ARITHABORT is required to be ON for many newer engine features (e.g. indexes on computed columns and indexed views). Most client libraries explicitly set ARITHABORT to ON when connecting, which overrides any database- or server-level setting.
My strong recommendation is to always use the recommended SET settings and code defensively against error conditions.
SET ANSI_NULLS,
ANSI_PADDING,
ANSI_WARNINGS,
ARITHABORT,
CONCAT_NULL_YIELDS_NULL,
QUOTED_IDENTIFIER
ON;
SET NUMERIC_ROUNDABORT OFF;
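To see why relying on session settings is fragile: a divide-by-zero only returns NULL when both ARITHABORT and ANSI_WARNINGS are OFF; with the recommended settings it is a hard error. A quick demonstration:

```sql
SET ARITHABORT, ANSI_WARNINGS OFF;
SELECT 1 / 0;   -- returns NULL
GO
SET ARITHABORT, ANSI_WARNINGS ON;
SELECT 1 / 0;   -- error 8134: Divide by zero error encountered
```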
One doesn’t use min_valid_version to track changes. It is only used to check whether your client has to be reinitialized because the metadata was cleaned up before the client could consume the changes.
CHANGE_TRACKING_MIN_VALID_VERSION (Transact-SQL)
Obtains the minimum version that is valid for use in obtaining change tracking information from the specified table when you are using the CHANGETABLE function.
Min_valid_version changes with the cleanup version and doesn’t depend on changes to the user table. Every time the cleanup thread runs there could be an update to min_valid_version, irrespective of data changes.
Prior to SQL Server 2012, min_valid_version was set to the same value as the cleanup version, when in fact it should be one more than the cleanup version, since the metadata for that version has already been cleaned up. SQL Server 2012 changed this so that the correct min_valid_version is recorded.
One should not be tracking changes using min_valid_version; instead, save the last_sync_version after every sync and call CHANGETABLE to enumerate the changes after that version.
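A sketch of that pattern (dbo.MyTable, the ID column, and the variable values are placeholders):

```sql
-- @last_sync_version was saved at the end of the previous sync.
DECLARE @last_sync_version bigint = 42;  -- placeholder value
DECLARE @current_version   bigint = CHANGE_TRACKING_CURRENT_VERSION();

IF @last_sync_version < CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID('dbo.MyTable'))
BEGIN
    -- Metadata was cleaned up before we consumed it: full reinitialization needed.
    RAISERROR('Client must be reinitialized.', 16, 1);
    RETURN;
END;

-- Enumerate everything that changed since the last sync.
SELECT CT.SYS_CHANGE_OPERATION, CT.SYS_CHANGE_VERSION, CT.[ID]
FROM CHANGETABLE(CHANGES dbo.MyTable, @last_sync_version) AS CT;

-- Persist @current_version as the new last_sync_version for the next run.
```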
By design - Min valid version changes with the cleanup version and doesn’t depend on the changes to the user table. Every time cleanup thread runs there could be an update to min valid version irrespective of data changes.
Resolve - Change the procedure to use 'current_version' instead of 'min_valid_version'.
That is just documented behavior. I don't think anyone messed with the settings.
See data type precedence on MSDN.
As noted in the comments, the empty string is converted to 0 by the int, money, and float types (conversion to decimal or numeric raises an error) and to 1900-01-01 00:00:00.000 when converted to a datetime.
EDIT: I think your real problem is that your design forces you to join on columns of different data types. The only way to get around this is to add a convert in your join clause, which will hurt query performance. The main problem is probably the schema design.
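As a sketch of that join problem (table and column names are hypothetical): if t1.id is int and t2.id is varchar, data type precedence means the varchar side is implicitly converted to int, so an empty string in t2.id matches 0, and the conversion generally prevents an index seek on t2.id:

```sql
-- Hypothetical schema: t1.id is int, t2.id is varchar(10).
-- This makes the implicit conversion explicit; either way,
-- a row where t2.id = '' joins to t1.id = 0.
SELECT t1.id
FROM t1
JOIN t2
    ON t1.id = CONVERT(int, t2.id);
```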
EDIT: There was a lot of discussion in the comments that have been moved to chat. However illogical it may seem, converting an empty string to other data types produces arbitrary values.
For example, converting an empty string to int or money produces 0, and converting it to datetime produces 1900-01-01 00:00:00.000. You might then expect this behavior to be consistent across the other data types in the precedence list, and that converting the integer 0 to a date would produce the same arbitrary value, but it doesn't: int to date is not a supported conversion and raises an error, while converting 0 to datetime returns 1900-01-01 00:00:00.000.
So yes, it's weird and arbitrary, but actually documented and explainable.
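The behaviours described can be reproduced directly:

```sql
SELECT CAST('' AS int);       -- 0
SELECT CAST('' AS money);     -- 0.00
SELECT CAST('' AS datetime);  -- 1900-01-01 00:00:00.000

SELECT CAST(0 AS datetime);   -- 1900-01-01 00:00:00.000
-- SELECT CAST(0 AS date);    -- fails: explicit conversion from int to date is not allowed
```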