Our database is SQL Server 2008 R2. We have some tables with varchar(500) columns that I want to switch to datetime2 or bigint. I can guarantee that all the data in the columns to be switched is valid for the target type. The column changes affect indexes, but not keys.
While discussing this with colleagues, we have come up with two ways to approach the problem. Both would be done through T-SQL scripts.
- Copy the data to a temp table via SELECT INTO, drop the old table, recreate the table with the proper datatypes, reload the data, and recreate the indexes.
- Alter the column datatypes of the current table in place via
ALTER TABLE x ALTER COLUMN Y datetime2
and then rebuild or recreate the indexes.
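A minimal sketch of option #2, assuming a hypothetical table dbo.Orders with a varchar(500) column OrderDate and a nonclustered index on it (all object names here are illustrative, not from the original question):

```sql
-- Drop the index that covers the column being converted;
-- ALTER COLUMN will fail if a dependent index is left in place.
DROP INDEX IX_Orders_OrderDate ON dbo.Orders;

-- In-place conversion. This rewrites every row and holds a
-- schema-modification (Sch-M) lock on the table for the duration,
-- so the table is unavailable while it runs.
ALTER TABLE dbo.Orders ALTER COLUMN OrderDate datetime2 NOT NULL;

-- Recreate the index against the new datatype.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate);
```

Note that this conversion is a size-of-data operation, not metadata-only, which is one reason the rebuild-and-swap approach in #1 can be attractive for large tables.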
Because I am confident the data will convert cleanly, I am leaning towards #2. My colleague and a DBA friend prefer #1, but my colleague can't remember why he was trained that way, and the DBA friend is on vacation so I couldn't ask him.
Can someone provide insight into which option is better and why? Ultimately it is my decision, and I am wondering why #1 would be preferred over #2.
Best Answer
I recently did this in my organization, where we needed to handle a table with a billion-plus rows.
All the credit for the idea goes to Aaron Bertrand; it comes from his blog post Trick Shots : Schema Switch-A-Roo.
Test the process below on a small table and get yourself comfortable with it before doing it in PROD.
1. Create a fake schema: create schema fake authorization dbo
2. Create a shadow schema: create schema shadow authorization dbo
3. Create the table with the correct datatypes in the shadow schema, e.g. create table shadow.Correct_Table ...
4. Load the data into the shadow schema table.
5. Create the indexes on the shadow schema table.
6. Switch the schemas (this is a metadata operation and is extremely fast).
7. Do a final check to see if everything went as planned. You should do a select count(1) from dbo.Correct_Table
8. Once step 7 is confirmed and you are happy, drop the shadow.table, the shadow schema and the fake schema as clean up.
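Putting the steps together, here is a hedged T-SQL sketch of the whole sequence. Table, column, and index names are illustrative, and the three-way ALTER SCHEMA ... TRANSFER order (using fake as a holding area) is my reading of the technique; adapt it to your own objects and test first.

```sql
-- Steps 1 & 2: create the two helper schemas
CREATE SCHEMA fake AUTHORIZATION dbo;
GO
CREATE SCHEMA shadow AUTHORIZATION dbo;
GO

-- Step 3: create the table with the corrected datatypes
CREATE TABLE shadow.Correct_Table
(
    Id        bigint    NOT NULL,
    OrderDate datetime2 NOT NULL
);
GO

-- Step 4: load the data, converting as you go
INSERT INTO shadow.Correct_Table (Id, OrderDate)
SELECT CONVERT(bigint, Id), CONVERT(datetime2, OrderDate)
FROM dbo.Correct_Table;
GO

-- Step 5: create the indexes on the shadow table
-- (same definitions as the production table)
CREATE NONCLUSTERED INDEX IX_Correct_Table_OrderDate
    ON shadow.Correct_Table (OrderDate);
GO

-- Step 6: switch the schemas in one transaction.
-- Each TRANSFER is metadata-only and extremely fast.
BEGIN TRANSACTION;
    ALTER SCHEMA fake   TRANSFER dbo.Correct_Table;    -- old table out of the way
    ALTER SCHEMA dbo    TRANSFER shadow.Correct_Table; -- new table goes live
    ALTER SCHEMA shadow TRANSFER fake.Correct_Table;   -- old table parked in shadow
COMMIT TRANSACTION;
GO

-- Step 7: final sanity check
SELECT COUNT(1) FROM dbo.Correct_Table;
GO

-- Step 8: clean up once you are happy
DROP TABLE shadow.Correct_Table;
GO
DROP SCHEMA shadow;
DROP SCHEMA fake;
GO
```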