Our developers (before I became DBA) set the data type for a column to VARCHAR(MAX) when in reality it only needs to be VARCHAR(255). This wasn't discovered until the table had grown to several million records and almost a terabyte of data.

When I open that table in design mode to test this change in our test environment, I find that the script SSMS generates creates a _tmp copy of the table, then drops the original and renames _tmp to the original name. I seem to remember the reason had something to do with having to move the data from the "LOB" portion of a record to the fixed-length portion (or so my memory recalls). However, I can't find any documentation on why exactly this occurs instead of just changing the size in place. Can someone point me in the right direction so I can better explain to management why making this change now, on a table of several million records, is going to take a long time?
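For reference, a simplified sketch of the rebuild pattern SSMS emits for this kind of change (the table and column names here are hypothetical, and the real generated script also recreates indexes, constraints, and permissions):

```sql
-- Simplified sketch of the SSMS-generated rebuild script.
-- dbo.YourTable / Id / Comments are placeholder names.
CREATE TABLE dbo.Tmp_YourTable (
    Id       INT          NOT NULL,
    Comments VARCHAR(255) NULL
);

-- Copy every row into the new table; this physically rewrites the
-- data, pulling values out of LOB allocation units into in-row storage.
INSERT INTO dbo.Tmp_YourTable (Id, Comments)
SELECT Id, Comments
FROM dbo.YourTable;

DROP TABLE dbo.YourTable;

EXEC sp_rename 'dbo.Tmp_YourTable', 'YourTable';
```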
SQL Server – Changing varchar(max) to varchar(255) Requires Table Rebuild
architecture, datatypes, sql-server
Best Answer
The script that SSMS generates is not the best approach. You should be able to use a simple ALTER TABLE ... ALTER COLUMN statement. Obviously, make sure in advance that the text stored in this column fits within the 255-character limit.
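As a sketch, assuming a table named dbo.YourTable with a column named Comments (both names are placeholders for your actual objects), the pre-check and the in-place change might look like:

```sql
-- 1. Verify no existing value exceeds 255 characters before shrinking the type.
--    LEN ignores trailing spaces; use DATALENGTH if byte length matters.
SELECT MAX(LEN(Comments)) AS MaxLength
FROM dbo.YourTable;

-- 2. If the maximum length is 255 or less, alter the column in place.
--    Keep NULL/NOT NULL explicit so the nullability is not accidentally changed.
ALTER TABLE dbo.YourTable
    ALTER COLUMN Comments VARCHAR(255) NULL;
```

If any row exceeds 255 characters, the ALTER will fail with a truncation error, so running the check first tells you whether the change is even possible before you schedule it.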
I'd strongly recommend trying it in a test environment first to get an idea of how long it will take. I'm not sure, and it may depend on the version and edition of your server, but it is likely that this kind of change would not be performed as a metadata-only change, which means that for a terabyte of data it would take a significant amount of time.
This question (Why does ALTER COLUMN to NOT NULL cause massive log file growth?) has a lot of details about ALTER COLUMN.