I think my understanding of precision vs. scale might be incorrect, as the following example produces values that do not make sense to me. decimal(32, 14) rounds the result to 6 decimal places, while decimal(18, 14) rounds to 19. My understanding of decimal is decimal(p, [s]), where p is the total number of digits and s is the number of digits after the decimal point (e.g., decimal(10, 2) would allow 8 digits to the left of the decimal point and 2 digits to the right). Is this not correct?
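For example, here is how I read decimal(10, 2), as a throwaway check (@d is just an illustrative name):
----------------------------------------
-- 8 integer digits, 2 decimal digits
----------------------------------------
declare @d decimal(10, 2) = 12345678.991
select @d -- 12345678.99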
I created a small example that illustrates the seemingly odd behavior:
--------------------
-- Truncates at pipe
-- 1.043686|655...
--------------------
declare @dVal1 decimal(32, 14) = 10
declare @dVal2 decimal(32, 14) = 9.581419815465469
select @dVal1 Val1, @dVal2 Val2, @dVal1 / @dVal2 CalcResult
----------------
-- Most accurate
----------------
declare @dVal3 decimal(18, 14) = 10
declare @dVal4 decimal(18, 14) = 9.581419815465469
select @dVal3 Val3, @dVal4 Val4, @dVal3 / @dVal4 CalcResult
So, on to the question: what am I missing here? The articles and MSDN blog posts I have read don't seem to provide clarity (at least to my thought process). Can someone explain why a higher precision seems to result in a loss of scale?
Best Answer
Your understanding is correct, though you have one digit too many for @dVal2 and @dVal4: 9.581419815465469 has 15 digits after the decimal point, but the declared scale is only 14. That is why the last digit is rounded up to 7 (9.58141981546547) in your select.
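You can see that rounding in isolation with a plain cast (a one-off check using only the value from your question):
select cast(9.581419815465469 as decimal(32, 14)) RoundedVal
-- 9.58141981546547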
As for the division rounding, it's hidden in the middle of the docs (Precision, scale, and length (Transact-SQL)). For e1 / e2, the result precision is p1 - s1 + s2 + max(6, s1 + p2 + 1) and the result scale is max(6, s1 + p2 + 1). The result precision and scale have an absolute maximum of 38; when the result precision is greater than 38, it is reduced to 38, and the scale is reduced to try to prevent truncating the integral part of the result:
1. The resulting scale is reduced to min(scale, 38 - (precision - scale)) if the integral part is less than 32, because it can't be greater than 38 - (precision - scale). The result might be rounded in this case.
2. The scale won't be reduced if it is less than 6 and the integral part is greater than 32. In this case an overflow error might be raised if the result can't fit into decimal(38, scale).
3. The scale will be set to 6 if it is greater than 6 and the integral part is greater than 32. In this case both the integral part and the scale are decreased, and the resulting type is decimal(38, 6). The result might be rounded to 6 decimal places, or an overflow error is thrown if the integral part can't fit into 32 digits.
So in your first case, decimal(32, 14), the raw result type of the division is decimal(79, 47): scale = max(6, 14 + 32 + 1) = 47 and precision = 32 - 14 + 14 + 47 = 79. That precision overflows 38, and the integral part needs 79 - 47 = 32 digits, which leaves only 38 - 32 = 6 digits for the scale: min(47, 6) = 6. Thus, decimal(38, 6), and your result is rounded right where you drew the pipe.
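You can confirm the type SQL Server actually produces with SQL_VARIANT_PROPERTY (a quick sketch redeclaring the first pair of variables from your question; run it as its own batch):
declare @dVal1 decimal(32, 14) = 10
declare @dVal2 decimal(32, 14) = 9.581419815465469
select SQL_VARIANT_PROPERTY(@dVal1 / @dVal2, 'BaseType')  BaseType,
       SQL_VARIANT_PROPERTY(@dVal1 / @dVal2, 'Precision') Prec,
       SQL_VARIANT_PROPERTY(@dVal1 / @dVal2, 'Scale')     Scale
-- decimal, 38, 6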
In your second case, decimal(18, 14), the raw result type is decimal(51, 33): scale = max(6, 14 + 18 + 1) = 33 and precision = 18 - 14 + 14 + 33 = 51. Here the integral part needs only 51 - 33 = 18 digits, so the first rule above applies: min(33, 38 - (51 - 33)) = min(33, 20) = 20. Thus, decimal(38, 20), which is why the version declared with the smaller precision keeps the most scale.
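The same check for the second pair should bear that out (again a sketch; the expected output in the comment follows from the formula above, so treat it as an assumption until you run it):
declare @dVal3 decimal(18, 14) = 10
declare @dVal4 decimal(18, 14) = 9.581419815465469
select SQL_VARIANT_PROPERTY(@dVal3 / @dVal4, 'Precision') Prec,
       SQL_VARIANT_PROPERTY(@dVal3 / @dVal4, 'Scale')     Scale
-- 38, 20
In short: the wider you declare the inputs, the more of the 38-digit budget the integral part of the quotient claims, and the less scale survives the division.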