Determining precision and scale resulting from expressions is a rat's nest and I don't think anyone understands the exact rules in every scenario, especially when mixing decimal (or float!) and int. See this answer by gbn.
You can of course tailor the expressions to give you what you want by making much more verbose explicit conversions. This is probably overkill but:
SELECT
CONVERT(DECIMAL(15,6), CONVERT(DECIMAL(15,6), 0.15)
* CONVERT(DECIMAL(15,6), 30)
/ CONVERT(DECIMAL(15,6), 360)),
CONVERT(DECIMAL(15,6), CONVERT(DECIMAL(15,6), 0.15)
/ CONVERT(DECIMAL(15,6), 360)
* CONVERT(DECIMAL(15,6), 30));
Neither result is skewed by floating-point rounding or by an unexpectedly inferred precision/scale.
0.012500 0.012500
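For comparison, here is a sketch of the same expressions without the explicit conversions. SQL Server infers decimal(2,2) for the literal 0.15 and int for 30 and 360, then applies its precision/scale inference rules at each step, so the two orderings are not equivalent:

```sql
SELECT 0.15 * 30 / 360 AS MultiplyFirst,
       0.15 / 360 * 30 AS DivideFirst;
-- Both columns come back as DECIMAL(24,13), but DivideFirst first rounds
-- the intermediate quotient (0.15 / 360) to 13 decimal places before
-- multiplying, so the two results can differ in the trailing digits.
```

This is exactly the kind of surprise the verbose CONVERT version above avoids.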
Below are the field names and datatypes of each of the result sets as reported by SQL Server (I tested on SQL Server 2012, 2014, and 2016, though I would expect these to remain consistent across SQL Server versions):
DBCC SHOW_STATISTICS() WITH STAT_HEADER
[Name] NVARCHAR(128)
[Updated] NVARCHAR(20)
[Rows] BIGINT
[Rows Sampled] BIGINT
[Steps] SMALLINT
[Density] REAL
[Average key length] REAL
[String Index] NCHAR(3)
[Filter Expression] NVARCHAR(MAX)
[Unfiltered Rows] BIGINT
DBCC SHOW_STATISTICS() WITH DENSITY_VECTOR
[All density] REAL
[Average Length] REAL
[Columns] NVARCHAR(4000)
DBCC SHOW_STATISTICS() WITH HISTOGRAM
[RANGE_HI_KEY] INT -- Datatype varies based on datatype of first key column
[RANGE_ROWS] REAL
[EQ_ROWS] REAL
[DISTINCT_RANGE_ROWS] BIGINT
[AVG_RANGE_ROWS] REAL
To be clear, these are the datatypes of the result sets returned by the DBCC commands; they are not necessarily an indication of the datatypes actually used to store that info. However, it seems reasonable to assume that the result set datatypes are the same as those used to store the info.
Also, for anyone curious as to how I was able to determine these datatypes: I got the info from a SQLCLR stored procedure I created as part of the SQL# library. It is called DB_DescribeResultSets and is similar to sys.dm_exec_describe_first_result_set, but it handles multiple result sets, temp tables, dynamic SQL, etc. (because it runs the query instead of just parsing it). The DB_DescribeResultSets stored procedure is not available in the Free version, though. However, you could do the same thing in any .NET app: run a query through a SqlDataReader and then use the GetSchemaTable method to get the result set schema.
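For a single, static result set, the built-in DMF mentioned above can show the same kind of information without any external tooling. It parses the query rather than executing it, so it will not work for DBCC commands, but this sketch shows the general idea:

```sql
-- Describe the columns a query would return (parsed, not executed).
SELECT [name], system_type_name, is_nullable
FROM   sys.dm_exec_describe_first_result_set(
           N'SELECT [name], [object_id] FROM sys.objects;',
           NULL,  -- no parameter declarations
           0);    -- no browse-mode information
```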
I ran DB_DescribeResultSets as follows:
EXEC SQL#.DB_DescribeResultSets
@TheQuery = N'DBCC SHOW_STATISTICS(N''msdb.dbo.sysjobs'',
nc1);',
@RowNumberToGetValuesFrom = 1,
@ResultSetNumberToDescribe = 0, -- 0 = all result sets
@ShowHiddenFields = 1,
@ResultSet = ''; -- this is an XML OUTPUT param and can't have a default in SQLCLR
Best Answer
Precision and Scale are based on the mathematical concept of Significant Figures. You're not alone in that the current definition/usage for these terms is confusing.
First, you must remember that everything in computers is represented in binary; however, not all numbers are easily represented in binary, particularly certain fractions. One such fraction, ⅓, is actually impossible to represent exactly: even with the largest amount of memory possible, a computer will never be able to represent ⅓ exactly in binary form. It can, however, get more precise (i.e. closer) to ⅓ as more bits (and therefore more decimal places) are thrown at it.
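The same effect is easy to see in T-SQL with the DECIMAL type (a sketch; the chosen precisions are arbitrary). More digits get closer to ⅓, but no finite number of digits is ever exact:

```sql
-- Approximating 1/3 at two different precisions: closer, but never exact.
SELECT CONVERT(DECIMAL(9,8),   CONVERT(DECIMAL(9,8),   1) / 3) AS EightDigits,
       CONVERT(DECIMAL(19,18), CONVERT(DECIMAL(19,18), 1) / 3) AS EighteenDigits;
```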
Precision can therefore be thought of as how exact a number you want to represent. The more digits the more precise that number becomes.
Since Precision is a measure of how exactly you can represent a number, and we know we're limited in how much we can store, you have to trade one for the other: with a fixed precision, increasing the scale leaves fewer digits for the integer part. So basically, the smaller the scale, the larger the final value can become.
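In SQL Server terms, this trade-off is visible directly in the DECIMAL(p,s) declaration: with precision fixed at 5, the scale decides how the five digits are split between the integer and fractional parts.

```sql
-- Same precision (5 digits total), different scales:
-- DECIMAL(5,4) tops out at 9.9999, while DECIMAL(5,0) can hold up to 99999.
SELECT CONVERT(DECIMAL(5,4), 9.9999) AS MaxHighScale,
       CONVERT(DECIMAL(5,0), 99999)  AS MaxLowScale;
```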
Scale may best be explained with how we look at maps. A map of a state or province shows a much larger scale of area with less detail, whereas a map of a city shows a smaller area at greater detail.
Scale can therefore be thought of as the amount of detail: the more detail we want to convey, the less total range we can show.
I don't know if that helps you at all, but hopefully that helps provide some context behind the definitions of these terms.