Per the SQL Server docs, it looks like if I have one or more varchar/nvarchar columns that cause row length to exceed 8KB, SQL Server will magically handle it.
Scenario:
– Table has 100 cols
– Max current row length is 3.4KB
– But if every varchar/nvarchar field were populated to its max, the row length would hit 14KB
From how I interpret the Books Online details linked above, this scenario will work out OK (i.e. nothing will break and everything will work).
Is my assumption true?
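For concreteness, here is a cut-down sketch of the scenario (the table and column names are made up for illustration): the declared varchar maximums sum well past 8,060 bytes, yet `CREATE TABLE` is expected to succeed, because variable-length data is moved off-row only when an actual row grows too large.

```sql
-- Declared maximums sum to roughly 14 KB, but CREATE TABLE succeeds:
-- varchar columns are pushed to row-overflow pages only when a real
-- row exceeds the 8,060-byte in-row limit.
CREATE TABLE dbo.WideScenario
(
    ID   int NOT NULL,
    Col1 varchar(5000) NULL,
    Col2 varchar(5000) NULL,
    Col3 varchar(4000) NULL
);
```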
Best Answer
Using the undocumented features `DBCC IND` and `DBCC PAGE`, you can pretty easily tell exactly what SQL Server does with rows that exceed the "maximum" size of 8,060 bytes.

To show how this works, I'll create a simple table in tempdb where the row width is 10,000 bytes, then run `DBCC IND` to get a list of pages where the data is stored. We'll then run `DBCC PAGE` to see the content of those pages.

First, we create the table with a maximum width of 10,000 bytes:
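The original code block did not survive, so here is a sketch of what the table might look like. The column names `A`, `B`, and `C` come from the page dumps described later in the answer; the table name and the exact column widths are assumptions, chosen only so the declared maximums sum to 10,000 bytes.

```sql
USE tempdb;
GO

-- Four variable-length columns whose declared maximum widths sum to
-- 10,000 bytes, more than fits in a single 8,060-byte row.
CREATE TABLE dbo.OverflowTest
(
    A varchar(3000) NOT NULL,
    B varchar(3000) NOT NULL,
    C varchar(3000) NOT NULL,
    D varchar(1000) NOT NULL
);
```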
And populate that with a single row:
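A sketch of the insert, again assuming the hypothetical `dbo.OverflowTest` table with varchar columns `A` through `D`:

```sql
-- Fill every column to its declared maximum so the total row size is
-- 10,000 bytes, forcing SQL Server to push some columns off-row.
INSERT INTO dbo.OverflowTest (A, B, C, D)
VALUES (REPLICATE('A', 3000),
        REPLICATE('B', 3000),
        REPLICATE('C', 3000),
        REPLICATE('D', 1000));
```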
Here we're creating a table to hold the `DBCC IND` output, and populating that table:

Finally, this cursor iterates over the contents of the `dbo.IndexDetails` table, showing the page contents for each page used by the table:

The interesting bits of output from the cursor are displayed on the Messages tab in SSMS.
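Since the original code blocks were lost, here is a sketch of that scaffolding. It assumes a test table named `dbo.OverflowTest` (an assumed name; substitute your own wide table), and the `dbo.IndexDetails` column list mirrors `DBCC IND`'s undocumented result set, which may vary by SQL Server version, so verify it against yours.

```sql
-- Work table matching DBCC IND's (undocumented) result set.
CREATE TABLE dbo.IndexDetails
(
    PageFID tinyint, PagePID int, IAMFID tinyint, IAMPID int,
    ObjectID int, IndexID int, PartitionNumber int, PartitionID bigint,
    iam_chain_type varchar(50), PageType int, IndexLevel int,
    NextPageFID tinyint, NextPagePID int,
    PrevPageFID tinyint, PrevPagePID int
);

INSERT INTO dbo.IndexDetails
EXEC ('DBCC IND (''tempdb'', ''dbo.OverflowTest'', -1);');

-- DBCC PAGE only writes to the client Messages stream with TF 3604 on.
DBCC TRACEON (3604);

DECLARE @PageFID tinyint, @PagePID int, @sql nvarchar(max);

DECLARE PageCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT PageFID, PagePID
    FROM dbo.IndexDetails
    WHERE PageType IN (1, 3);  -- 1 = data page, 3 = text/row-overflow page

OPEN PageCursor;
FETCH NEXT FROM PageCursor INTO @PageFID, @PagePID;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'DBCC PAGE (''tempdb'', ' + CAST(@PageFID AS nvarchar(10))
             + N', ' + CAST(@PagePID AS nvarchar(10)) + N', 3);';
    EXEC (@sql);  -- dump the page, including Record Type, to Messages
    FETCH NEXT FROM PageCursor INTO @PageFID, @PagePID;
END
CLOSE PageCursor;
DEALLOCATE PageCursor;
```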
This is the first page. Notice the `Record Type = PRIMARY_RECORD`:

I've removed the hexadecimal content of the actual page dump to keep my post under the 30,000-character limit. As you can see from the page dump above, the primary page in my example does not contain the first 3 columns, `A`, `B`, and `C` - they are stored in the row-overflow region, shown next, with `Record Type = BLOB_FRAGMENT`: