SQL Server – Maximizing Availability on a Table When Changing Data Type

azure-sql-database, migration, sql-server

I'm looking for some advice on the best way to change the data type of a column from INT to BIGINT on a table with 220 million rows, as the column has reached the upper limit of the INT data type.

The database is on Azure SQL Database, and the table has a clustered index with a primary key constraint, a foreign key constraint, and a non-clustered index. The table on the other side of the foreign key also has 3 non-clustered indexes that include the key.

What is the fastest/most efficient way to achieve this, ideally minimizing the length of time the table will be inaccessible?

I am planning to test two approaches: dropping the constraints and indexes, altering the column, and recreating them; and moving the data into a new table and creating the appropriate constraints and indexes there. I've seen a few other ideas online, such as moving the data in batches, but I'm not sure what will work best. The table itself is only about 12 GB, but when I tested the whole operation on a local copy of the database it generated 150 GB of log.
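For context, the batched copy into a new table that I'm considering would look roughly like the sketch below. The table and column names (dbo.Orders, OrderId, and so on) are placeholders, and the batch size would need tuning for the real workload.

    -- Sketch only: placeholder names; batch size needs tuning.
    -- 1. New table with the widened BIGINT key and the clustered primary key.
    CREATE TABLE dbo.Orders_New
    (
        OrderId    BIGINT       NOT NULL,
        CustomerId INT          NOT NULL,
        CreatedAt  DATETIME2(3) NOT NULL,
        CONSTRAINT PK_Orders_New PRIMARY KEY CLUSTERED (OrderId)
    );

    -- 2. Copy existing rows in key order, one small batch per transaction,
    --    so the log can clear between batches.
    DECLARE @BatchSize INT    = 50000,
            @LastId    BIGINT = 0,
            @Rows      INT    = 1;

    WHILE @Rows > 0
    BEGIN
        INSERT INTO dbo.Orders_New (OrderId, CustomerId, CreatedAt)
        SELECT TOP (@BatchSize) o.OrderId, o.CustomerId, o.CreatedAt
        FROM dbo.Orders AS o
        WHERE o.OrderId > @LastId
        ORDER BY o.OrderId;

        SET @Rows = @@ROWCOUNT;

        IF @Rows > 0
            SELECT @LastId = MAX(OrderId) FROM dbo.Orders_New;
    END;

    -- 3. In a short maintenance window: copy any rows added since the batch copy,
    --    swap the table names, and recreate the foreign key and non-clustered indexes.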

Best Answer

You didn't say whether this column is part of the foreign key or any of the indexes. Assuming it is not, the simplest way to achieve this is to add a new BIGINT column and copy the data into it, then prepare a script to apply the corresponding changes to the other affected objects. Potentially, the final changes could be done in one transaction and have near-zero downtime.
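As a minimal sketch of that approach, assuming a hypothetical dbo.Orders table with a NOT NULL INT column called Quantity that is not part of the primary key, foreign key, or any index:

    -- Sketch only: hypothetical names; assumes Quantity is NOT NULL and is not
    -- referenced by the PK, FK, or any index.
    -- 1. Add the replacement column as nullable so the ADD is a metadata-only change.
    ALTER TABLE dbo.Orders ADD QuantityBig BIGINT NULL;
    GO

    -- 2. Backfill in small batches to keep each transaction, and the log, small.
    --    (Progress could instead be tracked by the clustered key to avoid
    --    rescanning for NULLs on every iteration.)
    DECLARE @Rows INT = 1;
    WHILE @Rows > 0
    BEGIN
        UPDATE TOP (50000) dbo.Orders
        SET    QuantityBig = Quantity
        WHERE  QuantityBig IS NULL;

        SET @Rows = @@ROWCOUNT;
    END;
    GO

    -- 3. Final swap in one short transaction: drop the old column and rename the
    --    new one into place. Rows changed during the backfill need a final catch-up
    --    (for example via a trigger kept in place while the backfill runs), and any
    --    NOT NULL constraint or default would be re-added here.
    BEGIN TRANSACTION;
        ALTER TABLE dbo.Orders DROP COLUMN Quantity;
        EXEC sp_rename 'dbo.Orders.QuantityBig', 'Quantity', 'COLUMN';
    COMMIT TRANSACTION;

Two caveats: dropping a column is a metadata-only change, so the old column's space is only reclaimed when the clustered index is rebuilt; and any schemabound objects referencing the column would block the drop, which is exactly the kind of thing the prepared script for "other affected objects" should handle.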