When I was trying to create a report that included a decimal-implied number column that needs to be a fixed length of six characters, I chose to go the route of converting to char, replacing the decimal with '', and then using RIGHT() to pad zeros on the left-hand side, turning 15.52 into 001552, 1.11 into 000111, etc. My code looked like:
SELECT RIGHT( '000000' + REPLACE(CONVERT(CHAR(8), ISNULL([TotalModePremium], 0)), '.', ''), 6) AS TotalModePremium
FROM <Database>.<Schema>.<Table>
This created unexpected behavior where it would clip off my results, making a max length of 3 characters: 15.52 would become 552. However, rows with 0 would become 00, only two characters. When I changed the CHAR(8) to CHAR(7), 15.52 would become 1552, but 0 would become 000.
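To make the clipping concrete, here is a sketch tracing the intermediate strings for the CHAR(8) case, assuming the column is a decimal with two decimal places (trailing blanks are shown as '_' in the comments):

```sql
-- CONVERT(CHAR(8), 15.52) blank-pads the value to 8 characters: '15.52___'
DECLARE @c CHAR(8) = CONVERT(CHAR(8), 15.52);

SELECT REPLACE(@c, '.', '');                       -- '1552___'        (7 chars; the blanks survive)
SELECT '000000' + REPLACE(@c, '.', '');            -- '0000001552___'  (13 chars)
SELECT RIGHT('000000' + REPLACE(@c, '.', ''), 6);  -- '552___', which displays as 552
```

The trailing blanks are invisible in the output, which is why the result appears to be clipped to three characters.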
I then changed the CHAR(n) to VARCHAR(n), and it now works as desired: 15.52 becomes 001552, 0 becomes 000000, etc.
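For reference, the working version differs from the original query only in the data type; the table name remains a placeholder:

```sql
SELECT RIGHT('000000' + REPLACE(CONVERT(VARCHAR(8), ISNULL([TotalModePremium], 0)), '.', ''), 6) AS TotalModePremium
FROM <Database>.<Schema>.<Table>
```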
My question is: why does increasing or decreasing the value in CHAR(n) produce those sorts of results when combined with the RIGHT() function to pad zeros, while using VARCHAR() eliminates that behavior?
Best Answer
It's because of how the CHAR data type works. Take a look at this example:
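The example code did not survive here; a minimal sketch that reproduces the point (the variable name @d comes from the text below; the literal '15.52' is an assumption):

```sql
-- CHAR is blank-padded to its declared width; VARCHAR stores only what you put in.
DECLARE @d CHAR(8) = '15.52';
SELECT DATALENGTH(@d) AS len_char;      -- 8: '15.52' plus three trailing blanks

DECLARE @v VARCHAR(8) = '15.52';
SELECT DATALENGTH(@v) AS len_varchar;   -- 5: only the characters actually stored
```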
Notice the length of @d is 8 when it's CHAR but 5 when it's VARCHAR. CHAR is fixed width and thus padded with blanks. Once you add '000000' to it and remove the decimal, you've got a 13-character string: '0000001552<blank><blank><blank>'. When the RIGHT function takes just the rightmost 6 characters, the result is '552<blank><blank><blank>'. '000000' is VARCHAR. VARCHAR + your CHAR string will result in a VARCHAR string, but it still includes the blanks from the CHAR. VARCHAR does not pad with blanks and thus avoids this <blank> issue.