Where pages are used for internal purposes like sort runs, the maximum row size is 8094 bytes. For data pages, the maximum in-row size including internal row overhead is 8060 bytes.
Internal row overhead can expand significantly if certain engine features are in use. For example, using sparse columns reduces the user-accessible data size to 8019 bytes.
The only example of external row overhead I know of up to SQL Server 2012 is the 14 bytes needed for versioned rows. This external overhead brings the maximum space usage for a single row to 8074 bytes, plus 2 bytes for the single slot array entry, making 8076 bytes total. This is still 20 bytes short of the 8096 limit (8192 page size - 96 byte fixed header).
The most likely explanation is that the original 8060 byte limit left 34 bytes for future expansion, of which 14 were used for the row-versioning implementation.
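The 8060-byte cap is easy to see in practice. A minimal sketch (table and column names are illustrative; the overhead arithmetic assumes the standard FixedVar row format):

```sql
-- 4-byte row header + 8053 bytes of fixed data + 2-byte column count
-- + 1-byte NULL bitmap = 8060 bytes, exactly at the limit:
CREATE TABLE dbo.RowLimitDemo (c1 char(8000) NOT NULL, c2 char(53) NOT NULL);

-- One more fixed-length byte pushes the row past 8060, so this
-- statement fails (error 1701, "exceeds the maximum allowable
-- table row size of 8060 bytes"):
-- CREATE TABLE dbo.RowLimitDemo2 (c1 char(8000) NOT NULL, c2 char(54) NOT NULL);
```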
Add a persisted computed column that contains a CHECKSUM over the 5 fields, and use that to perform the comparisons. The CHECKSUM value will almost always be unique for a specific combination of fields (hash collisions are possible, though rare), and it is stored as an INT, which makes a much cheaper target for comparisons in a WHERE clause.
USE tempdb; /* create this in tempdb since it is just a demo */
CREATE TABLE dbo.t1
(
Id bigint identity(1,1) constraint PK_t1 primary key clustered
, Sequence int
, Parent int not null constraint df_T1_Parent DEFAULT ((0))
, Data1 varchar(20)
, Data2 varchar(20)
, Data3 varchar(20)
, Data4 varchar(20)
, Data5 varchar(20)
, CK AS CHECKSUM(Data1, Data2, Data3, Data4, Data5) PERSISTED
);
GO
INSERT INTO dbo.t1 (Sequence, Parent, Data1, Data2, Data3, Data4, Data5)
VALUES (1,1,'test','test2','test3','test4','test5');
SELECT *
FROM dbo.t1;
GO
/* this row will NOT get inserted since it already exists in dbo.t1 */
INSERT INTO dbo.t1 (Sequence, Parent, Data1, Data2, Data3, Data4, Data5)
SELECT 2, 3, 'test', 'test2', 'test3', 'test4', 'test5'
WHERE CHECKSUM('test','test2','test3','test4','test5') NOT IN (SELECT CK FROM dbo.t1);
/* still only shows the original row, since the checksum for the row already
exists in dbo.t1 */
SELECT *
FROM dbo.t1;
In order to support a large number of rows, you'd want to create a non-unique index on the CK column.
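For example, a minimal sketch against the dbo.t1 demo table above (the index name is arbitrary):

```sql
-- A non-unique nonclustered index on the persisted computed column,
-- so the NOT IN lookup becomes an index seek instead of a scan:
CREATE NONCLUSTERED INDEX IX_t1_CK ON dbo.t1 (CK);
```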
By the way, you neglected to mention the number of rows you are expecting in this table; that information would be instrumental in making great recommendations.
In-row data is limited to a maximum of 8060 bytes, which is the size of a single page of data, less the required overhead for each page. Any single row larger than that will result in some off-page storage of row data. I'm certain other contributors to http://dba.stackexchange.com can give you a much more concise definition of the engine internals regarding storage of large rows. How big is your largest row, presently?
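The off-page behavior can be observed with a sketch like the following (table name is illustrative; this assumes variable-length columns, which are the ones eligible to move off-page):

```sql
-- Two varchar(5000) values total ~10,000 bytes, past the 8060-byte
-- in-row limit, so one value is pushed to a ROW_OVERFLOW_DATA unit:
CREATE TABLE dbo.Overflow (a varchar(5000) NOT NULL, b varchar(5000) NOT NULL);
INSERT INTO dbo.Overflow VALUES (REPLICATE('x', 5000), REPLICATE('y', 5000));

-- The allocation units for the table now include ROW_OVERFLOW_DATA:
SELECT au.type_desc
FROM sys.allocation_units AS au
JOIN sys.partitions AS p ON au.container_id = p.partition_id
WHERE p.object_id = OBJECT_ID('dbo.Overflow');
```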
If items in Data1, Data2, Data3... have the same values occurring in a different order, the checksum will be different, so you may want to take that into consideration.
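A quick sketch of that order sensitivity, reusing the demo values:

```sql
-- CHECKSUM is sensitive to argument order: the same two values in a
-- different order produce different checksums.
SELECT CHECKSUM('test', 'test2') AS ab,
       CHECKSUM('test2', 'test') AS ba;
```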
Following a brief discussion with the fantastic Mark Storey-Smith on The Heap, I'd like to offer a similar, although potentially better, choice for calculating a hash on the fields in question. You could alternately use the HASHBYTES() function in the computed column. HASHBYTES() has some gotchas, such as the necessity to concatenate your fields together, including some type of delimiter between the field values, in order to pass HASHBYTES() a single value. For more information about HASHBYTES(), Mark recommended this site. Clearly, MSDN also has some great info at http://msdn.microsoft.com/en-us/library/ms174415.aspx
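A minimal sketch of that variant, assuming two columns for brevity (the '|' delimiter and ISNULL() guards are my assumptions, not part of the original suggestion):

```sql
-- Without a delimiter, ('ab','c') and ('a','bc') would concatenate to the
-- same string and hash identically; without ISNULL(), a single NULL column
-- would make the whole concatenation NULL.
-- 'SHA2_256' requires SQL Server 2012+; older versions would need e.g. 'SHA1'.
CREATE TABLE dbo.t2
(
    Data1 varchar(20)
  , Data2 varchar(20)
  , HB AS HASHBYTES('SHA2_256',
            ISNULL(Data1, '') + '|' + ISNULL(Data2, '')) PERSISTED
);
```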
Best Answer
One important thing to keep in mind is that rowstore tables have a minimum row size of 9 bytes. You can see some details about that in the answer and comments here. If you're going to be creating sample data and digging around in pages I recommend creating at least a few columns to make what you're looking at more clear. Otherwise you can run into a case where a table with a single TINYINT column appears to take up as much space as a table with a single SMALLINT column, which doesn't make sense at first.
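A sketch of how to check this (table names are illustrative; the DMV call requires DETAILED mode to report record sizes):

```sql
-- A 1-byte TINYINT row and a 2-byte SMALLINT row both come out at the
-- 9-byte minimum record size once per-row overhead and padding are counted.
CREATE TABLE dbo.OneTinyint  (c tinyint  NOT NULL);
CREATE TABLE dbo.OneSmallint (c smallint NOT NULL);
INSERT INTO dbo.OneTinyint  VALUES (1);
INSERT INTO dbo.OneSmallint VALUES (1);

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.min_record_size_in_bytes,
       ips.max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ips
WHERE ips.object_id IN (OBJECT_ID('dbo.OneTinyint'), OBJECT_ID('dbo.OneSmallint'));
```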