I'm evaluating a schema change on the primary keys of several of the biggest tables in our software.
Until now we have used varchar(36) for all of our PKs and FKs, with Hibernate generating random GUIDs on insert.
SQL Server also makes those primary key columns clustered indexes by default, so after doing some research I'm now worried about the index fragmentation that all those inserts (and we have a lot of inserts, deletes, and updates) will cause.
A query made on the tables showed that almost all indexes are heavily fragmented.
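The check was done with a query along these lines (a sketch based on `sys.dm_db_index_physical_stats`; the filter threshold is an assumption, not necessarily the query actually used):

```sql
-- Report fragmentation for every index in the current database.
-- LIMITED mode reads only the B-tree parent levels, which is fast
-- and sufficient for fragmentation statistics.
SELECT  s.name  AS [Schema],
        t.name  AS [Table],
        i.name  AS [Index],
        ps.avg_fragmentation_in_percent,
        ps.page_count,
        ps.fragment_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN    sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN    sys.tables  t ON t.object_id = ps.object_id
JOIN    sys.schemas s ON s.schema_id = t.schema_id
WHERE   ps.page_count > 100   -- ignore tiny indexes; their fragmentation numbers are not meaningful
ORDER BY ps.avg_fragmentation_in_percent DESC;
```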
So I decided to change the column type on several of our biggest tables (between 1 and 3 million rows, so not that big) from varchar(36) to uniqueidentifier with a DEFAULT of newsequentialid().
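The conversion is more than a single statement, since the PK constraint (and any FKs referencing it) must be dropped first. A minimal sketch for one table, assuming every existing value is a valid GUID string (the table and constraint names here are illustrative):

```sql
-- Drop the existing clustered PK so the key column can be altered.
-- Any FK constraints referencing this column must be dropped and
-- recreated around these steps as well.
ALTER TABLE dbo.example_table DROP CONSTRAINT PK_example_table_id;

-- Change the type; existing GUID strings convert to uniqueidentifier.
ALTER TABLE dbo.example_table ALTER COLUMN id uniqueidentifier NOT NULL;

-- Recreate the clustered primary key on the new type.
ALTER TABLE dbo.example_table ADD CONSTRAINT PK_example_table_id
    PRIMARY KEY CLUSTERED (id);

-- NEWSEQUENTIALID() is only valid as a column default, not as a
-- standalone expression, so it is attached via a default constraint.
ALTER TABLE dbo.example_table ADD CONSTRAINT DF_example_table_id
    DEFAULT NEWSEQUENTIALID() FOR id;
```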
I ran a test yesterday on a much smaller database: after changing those PKs and after a large processing run (about 20k inserts and, I think, about the same number of deletes), the clustered indexes (on id) showed fragmentation between 30% and 40%.
How is this possible?
Of course I rebuilt the indexes before the processing.
These are the fragmentation data of an example table (46,528 rows), at 42% fragmentation:
Schema  Table                Index                      avg_fragmentation_in_percent  page_count  fragment_count
dbo     e2_per_out_presenze  PK_e2_per_out_presenze_id  42.91                         3714        1704
dbo     e2_per_out_presenze  FK_e2_per_out_presenze_2    1.78                          448          36
dbo     e2_per_out_presenze  FK_e2_per_out_presenze_3    0.00                          194          11
dbo     e2_per_out_presenze  FK_e2_per_out_presenze_4    0.00                          444          12
dbo     e2_per_out_presenze  FK_e2_per_out_presenze_6    0.00                          161          16
dbo     e2_per_out_presenze  FK_e2_per_out_presenze_7    0.00                          166          14
dbo     e2_per_out_presenze  Index_8                     0.00                          466          28
Now I'm worried that this schema change won't help that much.
EDIT: I think it's important to note that the fill factor on the tables is set to 80, and that after a row is inserted it is very likely to be updated as well.
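For reference, a rebuild that also (re)applies that fill factor can be issued per table, e.g.:

```sql
-- Rebuild every index on the table, leaving 20% free space per leaf
-- page (FILLFACTOR = 80) to absorb the updates that follow inserts.
ALTER INDEX ALL ON dbo.e2_per_out_presenze
    REBUILD WITH (FILLFACTOR = 80);
```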
For the sake of the test, I also include the fragmentation data of the table after the same test but without the sequential GUID (the sequential GUID is clearly a big improvement in fragmentation, but I wonder how it will perform on bigger tables):
Schema  Table                Index                      avg_fragmentation_in_percent  page_count  fragment_count
dbo     e2_per_out_presenze  FK_e2_per_out_presenze_7   95.03676471                    544         524
dbo     e2_per_out_presenze  FK_e2_per_out_presenze_3   94.43413729                    539         515
dbo     e2_per_out_presenze  FK_e2_per_out_presenze_6   93.56060606                    528         501
dbo     e2_per_out_presenze  FK_e2_per_out_presenze_2   92.22343921                    913         854
dbo     e2_per_out_presenze  PK_e2_per_out_presenze_id  86.98644870                   4649        4102
dbo     e2_per_out_presenze  FK_e2_per_out_presenze_4   68.68279570                    744         522
dbo     e2_per_out_presenze  Index_8                     0.00000000                    501          21
Best Answer
Changing the column from a varchar(36) to a uniqueidentifier will greatly reduce storage requirements, from 36 bytes per row to 16 bytes per row; for every 100,000 rows, that equates to a savings of 2MB. 2MB may not sound like much by itself, but since the column is the primary key, that 2MB applies to each non-clustered index on the table. If any other tables are using foreign keys to ensure relational integrity, the space savings applies to those tables as well. Realize also that this space savings doesn't apply only to on-disk storage; it applies equally, and one might argue more importantly, to rows contained in the buffer pool.

It appears from a cursory glance at your sample table above that the average row size is somewhere around 600 bytes. Converting the varchar(36) to a uniqueidentifier will immediately offer a 3.3% space savings for the clustered index, and more than that for each non-clustered index.

As a bonus, once you convert the column from a varchar(36) to a uniqueidentifier, it will no longer be possible to insert 'this is a bad key' into the column, as is currently possible. This will result in much better data integrity, and will potentially save you a lot of work down the road.

Having said that, I decided to try to recreate the problems you're seeing with index fragmentation. I created a test table with a varchar(36) primary key column and 100,000 rows. I then converted the column to a uniqueidentifier and was unable to see a drastic increase in index fragmentation. My test bed code is:

The fragmentation query results:
The fragmentation details after the conversion, with deleting and inserting rows:
As you can see, after the conversion, and deleting 10,000 rows, then inserting 100,000 rows, index fragmentation for the primary key index (the table itself) is just 0.41%; hardly a concern.
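A test bed along those lines might look like the following sketch; the table name, the filler column, and the use of NEWID() strings for the initial varchar(36) load are assumptions for illustration, not the original code:

```sql
-- Build a 100,000-row table keyed on a varchar(36) random GUID string.
CREATE TABLE dbo.GuidTest
(
    id  varchar(36) NOT NULL
        CONSTRAINT PK_GuidTest PRIMARY KEY CLUSTERED,
    pad char(500)   NOT NULL   -- filler to approximate a ~600-byte row
        CONSTRAINT DF_GuidTest_pad DEFAULT ('x')
);

INSERT INTO dbo.GuidTest (id)
SELECT TOP (100000) CONVERT(varchar(36), NEWID())
FROM sys.all_columns a CROSS JOIN sys.all_columns b;

-- Convert the key to uniqueidentifier with a sequential default.
ALTER TABLE dbo.GuidTest DROP CONSTRAINT PK_GuidTest;
ALTER TABLE dbo.GuidTest ALTER COLUMN id uniqueidentifier NOT NULL;
ALTER TABLE dbo.GuidTest ADD CONSTRAINT PK_GuidTest
    PRIMARY KEY CLUSTERED (id);
ALTER TABLE dbo.GuidTest ADD CONSTRAINT DF_GuidTest_id
    DEFAULT NEWSEQUENTIALID() FOR id;

ALTER INDEX ALL ON dbo.GuidTest REBUILD;

-- Simulate churn: delete 10,000 rows, then insert 100,000 more rows
-- whose keys come from the sequential default, and re-check
-- fragmentation with sys.dm_db_index_physical_stats afterwards.
DELETE TOP (10000) FROM dbo.GuidTest;

INSERT INTO dbo.GuidTest (pad)
SELECT TOP (100000) 'y'
FROM sys.all_columns a CROSS JOIN sys.all_columns b;
```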
If you need further help with this problem (I realize your question is a year old), you need to show the DDL for the table, its indexes, and the code you used for the conversion test.