SQL Server – Preventing SSDT Publishing from Dropping Columns

deployment, schema, sql-server, sql-server-2008-r2, ssdt

I want to create a publishing profile that performs a complete schema comparison and publish, but does not drop any tables, or any columns, that have been removed between the old and the new version.

I am aware of the "block possible data loss" option in the settings. As far as I know, this stops the publishing process if data would be lost. So the question is: will the schema upgrade be interrupted partway through, or will it complete with the affected tables excluded?

If a table has both added and removed columns, will the new columns still be added to the schema, even though the "block data loss" option stops the schema update because of the potential data loss?

EDIT 09.08.2016:

I wanted to add these links to complement the question with additional background information:

Best Answer

There are three parts to this Question, so let's take them in order: the "Block on Data Loss" option, dropped Tables, and then dropped Columns.

"Block on Potential Data Loss" Option

If you enable the "Block incremental deployment if data loss might occur" option:

  • on the "Debug" tab of "Project Properties"
  • using the /p:BlockOnPossibleDataLoss=True command-line option
  • specifying <BlockOnPossibleDataLoss>True</BlockOnPossibleDataLoss> in a publish config file
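
For illustration, here is a minimal sketch of what that setting looks like in a publish profile (.publish.xml); the database and connection names are placeholders:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetDatabaseName>MyDatabase</TargetDatabaseName>
    <TargetConnectionString>Data Source=MyServer;Integrated Security=True</TargetConnectionString>
    <!-- Abort the generated deployment script if any change could lose data -->
    <BlockOnPossibleDataLoss>True</BlockOnPossibleDataLoss>
  </PropertyGroup>
</Project>
```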

then SSDT will script out statements for each affected table, such as the following:

/*
The column [dbo].[Table1].[New2] is being dropped, data loss could occur.
*/
IF EXISTS (select top 1 1 from [dbo].[Table1])
    RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127) WITH NOWAIT
GO

/*
Table [dbo].[DontDropMe] is being dropped.  Deployment will halt if the table contains data.
*/
IF EXISTS (select top 1 1 from [dbo].[DontDropMe])
    RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127) WITH NOWAIT
GO

These statements are placed into the incremental** deployment / publish script ahead of all other changes. This causes the publish process to stop completely if any rows are found in any of those Tables. Hence, there is no such thing as a partial deployment when using the "Block on Data Loss" option.

** "Create" scripts always drop and recreate the Database, so there won't be any data to lose ;-).

Tables Dropped from the Model but still in the Target

By default, Objects (and Columns are not Objects) are not dropped from the Target. Objects that exist in the Target but are missing from the Model are only dropped if you:

  • enable the "DROP objects in target but not in project" option on the "Debug" tab of "Project Properties"
  • use the /p:DropObjectsNotInSource=True command-line option
  • specify <DropObjectsNotInSource>True</DropObjectsNotInSource> in a publish config file
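
As a sketch, the relevant fragment of a publish profile with this option turned on (all other properties omitted) would be:

```xml
<PropertyGroup>
  <!-- Drop objects that exist in the target database but not in the project -->
  <DropObjectsNotInSource>True</DropObjectsNotInSource>
</PropertyGroup>
```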

If you are using the "Drop Objects not in Source" option but want to prevent the dropping of just Tables, you can exclude them by:

  • going to "Advanced Deployment Settings" (the "Advanced..." button on the "Debug" tab of "Project Properties"), then going to the "Drop" tab, and enabling the "Do not drop tables" option (in the scrollable list).
  • using the /p:DoNotDropObjectTypes=Tables command-line option
  • specifying <DoNotDropObjectTypes>Tables</DoNotDropObjectTypes> in a publish config file
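
Putting the two options together, a sqlpackage command line might look like the following sketch; the file, server, and database names are placeholders, and the property names assume a reasonably recent SqlPackage:

```shell
# Hypothetical example: drop objects missing from the source,
# but never drop Tables (and so never lose their data).
sqlpackage /Action:Publish \
    /SourceFile:"MyDatabase.dacpac" \
    /TargetServerName:"MyServer" \
    /TargetDatabaseName:"MyDatabase" \
    /p:DropObjectsNotInSource=True \
    /p:DoNotDropObjectTypes=Tables
```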

Columns Dropped from the Model but still in the Target

This one is a bit trickier since Columns are not objects, and there is no specific deployment option to ignore this particular situation. Here are your options as best as I can tell:

  1. Ignore tables entirely. You can do this by:

    • going to "Advanced Deployment Settings" (the "Advanced..." button on the "Debug" tab of "Project Properties"), then going to the "Ignore" tab, and enabling the "Exclude tables" option (in the "Excluded Object Types" scrollable list).
    • using the /p:ExcludeObjectTypes=Tables command-line option
    • specifying <ExcludeObjectTypes>Tables</ExcludeObjectTypes> in a publish config file

    The downside here is that no new Columns or new Tables will be published. You can still manage those yourself by adding deployment scripts to the Project and setting their "Build Action" (a property of the SQL script) to "PreDeploy".
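
    If you take this route, a "PreDeploy" script should be written so that it is safe to run on every deployment. A sketch, with a made-up table name:

    ```sql
    -- Hypothetical PreDeploy script: create the new table only if it does
    -- not exist yet, so the script can safely re-run on every publish.
    IF OBJECT_ID(N'dbo.NewTable', N'U') IS NULL
    BEGIN
        CREATE TABLE dbo.NewTable
        (
            NewTableID INT NOT NULL CONSTRAINT PK_NewTable PRIMARY KEY,
            SomeValue  NVARCHAR(50) NULL
        );
    END;
    ```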

  2. Create a "Deployment Contributor", which is a means of writing code against the SSDT API to control how the DDL is generated.

    This is a very flexible and powerful option, especially if your religion practices self-flagellation ;-).

    UPDATE: Even better, Ed Elliot published a blog post in which he details how to use a Deployment Contributor: Inside an SSDT Deployment Contributor. Even betterer, the example he used is to do just this: prevent the dropping of columns that are not in the Model. Even bestest, the example code has actually been incorporated into his AgileSqlClub SqlPackage Deployment Filter project, and the feature is described in the following blog post: Deployment Contributor KeepTableColumns Filter. And, if the feature doesn't do exactly what you need, then the source code is available and you can make any changes you want and recompile.


  3. Don't use SSDT for Schema changes. Just use it to push code (Stored Procedures, Functions, Triggers, SQLCLR Objects, etc). You can write your own schema deployment scripts and include them in the Project. You can have them included in the deployment scripts by setting their "Build Action" to "PreDeploy".
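
    For new Columns, the same idempotent pattern applies. A sketch, with made-up table and column names:

    ```sql
    -- Hypothetical script: add the column only if it is missing, so
    -- re-running the deployment does not fail.
    IF COL_LENGTH(N'dbo.SomeTable', N'NewColumn') IS NULL
    BEGIN
        ALTER TABLE dbo.SomeTable ADD NewColumn INT NULL;
    END;
    ```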

  4. File a Connect Suggestion to have an option added for "DoNotDropColumns" or something like that :-)