SQL Server – Change Tracking Bad Plans in SQL Server 2016

performance, sql-server, sql-server-2016

Has anyone else experienced bad plans when calling CHANGETABLE in SQL Server 2016?

I have an application that uses change tracking to update a cache. Application servers get the changes for multiple tables every second. CHANGETABLE is called so frequently that it usually returns just a few rows. This worked great on 2008 R2 and 2012. When I tested 2016, the CPU spiked, and I found that the plan scans the change tracking internal tables where it used to do a seek.

If you want to see this behavior here are the steps to replicate it:

  1. Create a database and setup change tracking.
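  For example, the demo database can be set up like this (the name [Test] matches the scripts in step 3; the retention settings are just illustrative):

    CREATE DATABASE [Test];
    GO
    ALTER DATABASE [Test] SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
    GO
    USE [Test];
    GO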
  2. Create a Table:

    CREATE TABLE [dbo].[ChangeTable_Test](
    [Id] [bigint] IDENTITY(1,1) NOT NULL,
    [Name] [varchar](50) NOT NULL,
    CONSTRAINT [PK_Test_ChangeTable_Id] PRIMARY KEY CLUSTERED 
    (
         [Id] ASC
     )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[ChangeTable_Test] ENABLE CHANGE_TRACKING 
    GO    
    INSERT INTO ChangeTable_Test (Name) VALUES ('Test')
    GO 1000
    
  3. Include the actual execution plan and run the following.

    SET STATISTICS IO ON;
    SET NOCOUNT OFF;
    ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = OFF;
    GO
    ALTER DATABASE [Test] SET COMPATIBILITY_LEVEL = 110
    GO
    DECLARE @CurrentVersion BIGINT;
    SELECT @CurrentVersion = CHANGE_TRACKING_CURRENT_VERSION() - 1;
    SELECT * FROM CHANGETABLE(CHANGES ChangeTable_Test,@CurrentVersion) AS C
    GO
    ALTER DATABASE [Test] SET COMPATIBILITY_LEVEL = 130
    GO
    DECLARE @CurrentVersion BIGINT;
    SELECT @CurrentVersion = CHANGE_TRACKING_CURRENT_VERSION() - 1;
    SELECT * FROM CHANGETABLE(CHANGES ChangeTable_Test,@CurrentVersion) AS C
    

The demo above reproduces the issue for me on SQL Server 2016 SP1 CU4. I uploaded both plans to Paste the Plan: the first is under compatibility level 110 and the second under 130.

Microsoft support acknowledged the problem but does not plan to fix it. They suggested workarounds using a USE PLAN hint or plan guides, but either would be a maintenance nightmare for this product.
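For reference, a plan guide along those lines would look something like the sketch below. The guide name here is made up, and the USE PLAN hint needs the actual showplan XML captured from a good plan (e.g. under compatibility level 110), which is exactly what makes it a maintenance burden:

    EXEC sp_create_plan_guide
        @name = N'CT_ChangeTable_Test_Guide', -- hypothetical name
        @stmt = N'SELECT * FROM CHANGETABLE(CHANGES ChangeTable_Test,@CurrentVersion) AS C',
        @type = N'SQL',
        @module_or_batch = NULL,
        @params = N'@CurrentVersion bigint',
        -- paste the captured showplan XML inside USE PLAN
        @hints = N'OPTION (USE PLAN N''...'')';

Every schema change to the underlying table invalidates the captured plan, so each guide has to be re-created whenever the table or its indexes change.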

The funny thing is that 2014 and 2017 do not pick the bad plans, even though they also use the new cardinality estimator.

Do you have any suggestions besides the ones I mentioned?

Best Answer

Based on your question, it sounds like you have a highly concurrent application that calls CHANGETABLE on a variety of tables in many different queries. Forcing each query plan to have a specific shape with a USE PLAN hint or a plan guide would be a Herculean effort. Changing your active database to compatibility level 110 without the query optimizer hotfixes may be a bad option for other reasons.

The suggestion in this answer could be difficult to implement, but it feels like a better option than the ones listed above. The idea is to replace the call to CHANGETABLE with a stored procedure that inserts the data from CHANGETABLE into a temp table. Queries that need change data can then use the populated temp table instead of calling CHANGETABLE directly. Your application code might look like this:

DECLARE @CurrentVersion BIGINT = CHANGE_TRACKING_CURRENT_VERSION() - 1;

CREATE TABLE #changes (SYS_CHANGE_VERSION bigint NULL);

EXEC dbo.CHANGETABLE_API '#changes', 'TEST', 'dbo', 'ChangeTable_Test', @CurrentVersion;

-- run queries that need #changes temp table

DROP TABLE #changes;
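For example, a cache-refresh query could join the populated temp table back to the base table (the names here follow the demo table; a LEFT JOIN keeps deleted rows, which no longer exist in the base table):

SELECT c.SYS_CHANGE_VERSION, c.SYS_CHANGE_OPERATION, t.Id, t.Name
FROM #changes AS c
LEFT JOIN dbo.ChangeTable_Test AS t ON t.Id = c.id
ORDER BY c.SYS_CHANGE_VERSION;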

Working around scoping issues requires a few tricks. The stored procedure has three parts:

  1. Add static columns to the temp table: SYS_CHANGE_CREATION_VERSION, SYS_CHANGE_OPERATION, SYS_CHANGE_COLUMNS, and SYS_CHANGE_CONTEXT
  2. Add dynamic primary key columns to the temp table. You can use sys.columns and other DMVs for this.
  3. Insert into the temp table from CHANGETABLE under the context of an empty database with 2012 compatibility level and whatever other settings you need to get good performance.

Below is a quick and dirty implementation of most of the above. It doesn't have any kind of error checking or protection against SQL injection attacks. Feel free to do whatever you want with the code:

CREATE OR ALTER PROCEDURE dbo.CHANGETABLE_API (
    @temp_table_name SYSNAME,
    @changes_database_name SYSNAME,
    @changes_schema_name SYSNAME,
    @changes_table_name SYSNAME,
    @last_sync_version BIGINT
)
AS
BEGIN
    DECLARE @sql_to_add_static_cols NVARCHAR(4000),
    @sql_to_add_table_cols NVARCHAR(4000),
    @sql_to_insert_rows NVARCHAR(4000);

    -- okay to hardcode these depending on how you call CHANGETABLE
    SET @sql_to_add_static_cols = N'ALTER TABLE '
    + QUOTENAME(@temp_table_name)
    + N' ADD '
    + N'SYS_CHANGE_CREATION_VERSION bigint NULL, '
    + N'SYS_CHANGE_OPERATION nchar(1) NULL, '
    + N'SYS_CHANGE_COLUMNS varbinary(4100) NULL, '
    + N'SYS_CHANGE_CONTEXT varbinary(128) NULL';

    EXEC (@sql_to_add_static_cols);

    -- this should be dynamic based on sys.columns and other dmvs
    SET @sql_to_add_table_cols = N'ALTER TABLE '
    + QUOTENAME(@temp_table_name)
    + N' ADD id BIGINT NOT NULL';

    EXEC (@sql_to_add_table_cols);

    -- key is to run the insert in a database with the settings that you need for a good query plan
    SET @sql_to_insert_rows = N'USE DB_2012_COMPAT; '
    + N'INSERT INTO '
    + QUOTENAME(@temp_table_name)
    + N' SELECT * FROM CHANGETABLE(CHANGES '
    + QUOTENAME(@changes_database_name)
    + N'.' + QUOTENAME(@changes_schema_name) 
    + N'.' + QUOTENAME(@changes_table_name)
    + N', ' + CAST(@last_sync_version AS NVARCHAR(30))
    + N') AS C';

    EXEC (@sql_to_insert_rows);
END;
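To make part 2 truly dynamic, the ALTER TABLE statement could be built from the catalog views instead of hard-coding the id column. A rough sketch of what could replace part 2 inside the procedure, assuming it runs in the same database as the tracked table and ignoring length, precision, and scale for brevity:

-- build the column list from the tracked table's primary key
SET @sql_to_add_table_cols = N'ALTER TABLE ' + QUOTENAME(@temp_table_name) + N' ADD '
    + STUFF((
        SELECT N', ' + QUOTENAME(c.name) + N' ' + TYPE_NAME(c.system_type_id) + N' NOT NULL'
        FROM sys.indexes i
        JOIN sys.index_columns ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
        JOIN sys.columns c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
        WHERE i.is_primary_key = 1
          AND i.object_id = OBJECT_ID(QUOTENAME(@changes_schema_name) + N'.' + QUOTENAME(@changes_table_name))
        ORDER BY ic.key_ordinal
        FOR XML PATH('')), 1, 2, N'');

EXEC (@sql_to_add_table_cols);

A full implementation would also map types like varchar and decimal to their sized declarations, e.g. via c.max_length, c.precision, and c.scale.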

When I call the stored procedure in a database with compatibility level 130 I get a query plan that only does seeks as desired:

good plan

In the same session if I run the following code:

DECLARE @CurrentVersion BIGINT = CHANGE_TRACKING_CURRENT_VERSION() - 1;
SELECT * FROM CHANGETABLE(CHANGES ChangeTable_Test,@CurrentVersion) AS C
OPTION (RECOMPILE);

I get a scan, which is what you wanted to avoid:

bad plan