SQL Server – Why does a simple ALTER TABLE command take so long on a table with a full-text index

alter-table, sql-server, sql-server-2005

I have a large (~67 million rows) name-value table that has full-text indexing on the DataValue column.

If I try to run the following command:

ALTER TABLE VisitorData ADD NumericValue bit DEFAULT 0 NOT NULL;

It runs for 1 hour 10 minutes against the VisitorData table and still does not complete.

  1. Why is this taking so long and not completing?
  2. What can I do about it?

Here are more particulars about the table:

CREATE TABLE [dbo].[VisitorData](
            [VisitorID] [int] NOT NULL,
            [DataName] [varchar](80) NOT NULL,
            [DataValue] [nvarchar](3800) NOT NULL,
            [EncryptedDataValue] [varbinary](max) NULL,
            [VisitorDataID] [int] IDENTITY(1,1) NOT NULL, 
CONSTRAINT [PK_VisitorData_VisitorDataID] PRIMARY KEY CLUSTERED (
            [VisitorDataID] ASC
) WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY], 
CONSTRAINT [UNQ_VisitorData_VisitorId_DataName] UNIQUE NONCLUSTERED (
            [VisitorID] ASC,
            [DataName] ASC
) WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF,
        ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[VisitorData]
ADD CONSTRAINT [UNQ_VisitorData_VisitorDataID] UNIQUE NONCLUSTERED (
            [VisitorDataID] ASC
)
WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, SORT_IN_TEMPDB = OFF,
      IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS  = ON, 
      ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
GO

ALTER TABLE [dbo].[VisitorData]
    WITH CHECK ADD
        CONSTRAINT [FK_VisitorData_Visitors] FOREIGN KEY([VisitorID])
        REFERENCES [dbo].[Visitors] ([VisitorID])
GO

ALTER TABLE [dbo].[VisitorData]
    CHECK CONSTRAINT [FK_VisitorData_Visitors]
GO

CREATE FULLTEXT CATALOG DBName_VisitorData_Catalog WITH ACCENT_SENSITIVITY = ON
CREATE FULLTEXT INDEX ON VisitorData ( DataValue LANGUAGE 1033 )
    KEY INDEX UNQ_VisitorData_VisitorDataID
    ON DBName_VisitorData_Catalog
    WITH CHANGE_TRACKING AUTO
GO

The wait type occurring during the ALTER TABLE command is LCK_M_SCH_M (schema modification lock), as shown by the following query and its results:

SELECT * FROM sys.dm_os_waiting_tasks;

waiting_task_address  session_id  exec_context_id  wait_duration_ms  wait_type    resource_address    blocking_task_address  blocking_session_id  blocking_exec_context_id  resource_description
--------------------  ----------  ---------------  ----------------  -----------  ------------------  ---------------------  -------------------  ------------------------  --------------------
0x0000000000B885C8    54          0                112695            LCK_M_SCH_M  0x00000000802DF600  0x000000000054E478     25                   0                         objectlock lockPartition=0 objid=834102012 subresource=FULL dbid=5 id=lock438a02e80 mode=IS associatedObjectId=834102012
0x0000000000B885C8    54          0                112695            LCK_M_SCH_M  0x00000000802DF600  0x00000000088AB048     23                   0                         objectlock lockPartition=0 objid=834102012 subresource=FULL dbid=5 id=lock438a02e80 mode=IS associatedObjectId=834102012
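Per the resource_description above, the blocking sessions (23 and 25) are holding IS locks on the FULL (full-text) subresource of the table. To see what those sessions are actually executing, a query along these lines against the standard DMVs should work (just a sketch; the session IDs are the ones from the output above, and a background full-text crawl task may not return any statement text):

-- Show the current statement for the sessions blocking the ALTER TABLE.
SELECT
    r.session_id,
    r.command,
    r.status,
    r.wait_type,
    t.text AS current_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id IN (23, 25);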

I'm working with production servers that are running SQL Server 2005 SP 2 (soon to be upgraded to 2008 SP2).

Best Answer

The schema change is taking so long because you are adding the column as NOT NULL with a default value, so SQL Server has to populate that value in every one of the ~67 million existing rows as part of the ALTER, which is an incredibly expensive operation. I'm not sure what your application requirements are, but an approach that would make the schema change faster is to add the column as nullable with no default value and then update it in batches to assign 0 as the value. After the updates are done, you can apply another schema change to make the column non-nullable and assign the default value, as in the sketch below.
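
A rough sketch of that approach (the 10,000-row batch size and the constraint name DF_VisitorData_NumericValue are placeholders, so adjust to your environment):

-- 1. Add the column as NULLable with no default: a quick metadata-only change.
ALTER TABLE dbo.VisitorData ADD NumericValue bit NULL;
GO

-- 2. Backfill the value in small batches so each transaction stays short.
DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (10000) dbo.VisitorData
    SET NumericValue = 0
    WHERE NumericValue IS NULL;

    SET @rows = @@ROWCOUNT;
END
GO

-- 3. Once no NULLs remain, make the column NOT NULL and add the default
--    so future inserts get 0 automatically.
ALTER TABLE dbo.VisitorData ALTER COLUMN NumericValue bit NOT NULL;
ALTER TABLE dbo.VisitorData
    ADD CONSTRAINT DF_VisitorData_NumericValue DEFAULT (0) FOR NumericValue;
GO

Each small batch keeps locks short-lived and avoids forcing the transaction log to absorb one giant update; the trade-off is a longer total run time for the backfill.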