SQL Server – SSDT Drop and Recreate Tables when nothing has changed

Tags: deployment, sql-server, ssdt, visual-studio

We have a Visual Studio Database Project consisting of about 129 tables. It is the primary database for our internal web-based CRM/call centre product, which is still under active development.

We use SSDT Publish from within VS to deploy to the various instances as changes are made. We develop locally against SQL Express (2016); we also have a LAB environment for performance and load tests running SQL 2014, a UAT environment running SQL 2012, and finally production, which is running SQL 2016.

In all environments except production, the script generated on publish is very good and only applies the actual changes. The production script does a massive amount more work: it drops and recreates many tables that I know have not changed (37 tables on the last deploy). Some of these tables have rows in the millions, and the whole publish takes upwards of 25 minutes.

If I repeat the publish to production, it again drops and recreates 37 tables. The production DB does have replication, which I have to disable before deployments (I'm unsure if that's a factor).

I don't understand why the production publish always wants to drop and recreate tables even though nothing has changed. I'm hoping to get some advice as to where to look to establish why SSDT thinks these tables need to be re-created.

Using Visual Studio Professional 2017 (v15.5.5) and SSDT 15.1.

Best Answer

After a fair amount of frustration, I finally found the cause, and I'm posting it as an answer as it may help others...

Finding the reason why SSDT thought the schema was different is, I think, the first port of call. SQL Schema Compare is your friend here (in VS under Tools => SQL => New Schema Comparison...).

Firstly, thanks to @jadarnel27; that answer was very helpful, so upvote it, but it wasn't the answer in my case, I'm afraid. Computed column definitions and constraint definitions were definitely candidates, as was potentially column ordering. Ironically, I did have one issue with a computed column, but it was not the definition: it was the fact that I hadn't added NOT NULL to the end of the column definition (which is the default here, of course), and SSDT saw that as a difference every time.

So I had to change a column from

[ColumnName] AS (CONCAT(Col1,' ',Col2)) PERSISTED

to

[ColumnName] AS (CONCAT(Col1,' ',Col2)) PERSISTED NOT NULL

Then it stopped constantly dropping and re-creating the column. This showed up in Schema Compare, not in the definition of the computed column.
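
If you want to confirm what the target server has actually recorded for a computed column (its definition, PERSISTED flag and nullability), a quick catalog query helps. This is just a sketch; dbo.Whatever is an illustrative table name, not one from the real project:

-- Show what the target stored for the computed columns on one table,
-- so you can compare it against the project DDL.
SELECT
    OBJECT_NAME(cc.object_id) AS TableName,
    cc.name                   AS ColumnName,
    cc.definition,
    cc.is_persisted,
    cc.is_nullable
FROM sys.computed_columns AS cc
WHERE cc.object_id = OBJECT_ID(N'dbo.Whatever');  -- illustrative table name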

Secondly, because we were using replication, SQL Server was adding NOT FOR REPLICATION to most of the tables involved in replication (the reasons are to do with how replication works), which in essence changed a table's DDL from

CREATE TABLE [dbo].[Whatever]
(
    [WhateverId]    INT IDENTITY(1,1) NOT NULL
    ...etc more columns
)

to

CREATE TABLE [dbo].[Whatever]
(
    [WhateverId]    INT IDENTITY(1,1) NOT FOR REPLICATION NOT NULL
    ...etc more columns
)

Again, this showed up in Schema Compare.
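
If you want a quick list of which identity columns the target database has flagged as NOT FOR REPLICATION (and therefore which table definitions in the project would need the keyword), something along these lines works; it is only a sketch against the catalog views, with illustrative aliases:

-- List identity columns the target has flagged NOT FOR REPLICATION,
-- i.e. the tables whose project DDL would need the keyword added.
SELECT
    OBJECT_SCHEMA_NAME(ic.object_id) AS SchemaName,
    OBJECT_NAME(ic.object_id)        AS TableName,
    ic.name                          AS ColumnName
FROM sys.identity_columns AS ic
WHERE ic.is_not_for_replication = 1
ORDER BY SchemaName, TableName;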

To fix it, either add NOT FOR REPLICATION to the DDL source code in VS for all the tables that require it, or go to the publish settings -> Advanced -> Ignore tab, scroll down a bit, and tick the box as below:

[screenshot: SSDT Publish Advanced, "Ignore not for replication"]

And then everything was disco.