If I add or drop columns in a SQL Server table, I presume I get page splits or gaps, since the size of each row has changed.
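For context, dropping a column is a metadata-only operation in SQL Server: the bytes remain in each row until the table is rebuilt. A hedged sketch (the `dbo.Orders` table and `Notes` column are hypothetical):

```sql
-- Dropping a column only updates metadata; the stored bytes
-- stay in each row until the table is rebuilt.
ALTER TABLE dbo.Orders DROP COLUMN Notes;

-- The average row size reported here is unchanged by the drop:
SELECT avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'DETAILED');
```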
When I use Red Gate SQL Compare to create conversion scripts, its strategy is to create a temporary table, copy all the data into it, drop the old table, and then rename the temporary table.
I assume this cleans up the pages, since all the rows are inserted "perfectly" sequentially.
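The "copy and rename" pattern looks roughly like the following sketch (table, column, and constraint names are hypothetical; a real Red Gate script also recreates indexes, constraints, and permissions):

```sql
-- Rebuild dbo.Orders by copying into a new table and renaming.
BEGIN TRANSACTION;

CREATE TABLE dbo.tmp_Orders (
    OrderId int           NOT NULL,
    Placed  datetime2     NOT NULL,
    Notes   nvarchar(200) NULL,  -- column added by the schema change
    CONSTRAINT PK_tmp_Orders PRIMARY KEY (OrderId)
);

INSERT INTO dbo.tmp_Orders (OrderId, Placed)
SELECT OrderId, Placed
FROM dbo.Orders;

DROP TABLE dbo.Orders;

EXEC sp_rename 'dbo.tmp_Orders', 'Orders';

COMMIT;
```

Every row is rewritten into freshly allocated pages, which is why the result is compact, and also why the operation is expensive on large tables.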
I recently had a DBA tell me that this "copy and rename" approach is inefficient, expensive, and unnecessary.
What are the merits of these two approaches?
Best Answer
I recommend you read SQL Server table columns under the hood. You'll see that many column DDL operations leave 'ghost' columns in the table: physical columns that exist in the table but are not visible to the user. A rebuild removes all of these ghost columns.
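You don't need the copy-and-rename dance to reclaim that space. An in-place rebuild rewrites every row and discards the ghost-column bytes; a sketch against a hypothetical `dbo.Orders` table:

```sql
-- Rewrite every row in place, discarding ghost-column bytes
-- left behind by earlier ALTER TABLE ... DROP COLUMN operations.
ALTER TABLE dbo.Orders REBUILD;

-- For a clustered table, rebuilding the indexes is an
-- alternative way to get the same row rewrite:
ALTER INDEX ALL ON dbo.Orders REBUILD;
```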
Most of the time this is benign, but there are some dark corners that can lead to the issues described in KB2504090.
I for one shun all schema-compare tools and diff-based deployments/upgrades. There simply is no one-size-fits-all correct approach to changing a schema. Just like Henrik mentions, I've been burned by the 500GB table 'copy', tyvm, no more diffs for me. Instead I recommend migrations, coded as SQL scripts and tested on data of a relevant size before being deployed. See Version Control and your Database. Rails ActiveRecord Migrations really grok this.
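A migration in this style is just a numbered, idempotent SQL script checked into version control and run in order. A minimal sketch (the file name, table, and column are hypothetical):

```sql
-- 0042_add_orders_notes.sql
-- Guarded so the script is safe to re-run on a database
-- that has already been migrated.
IF COL_LENGTH('dbo.Orders', 'Notes') IS NULL
BEGIN
    ALTER TABLE dbo.Orders
        ADD Notes nvarchar(200) NULL;
END;
```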