SQL Server Default Constraint – Is a DEFAULT CONSTRAINT Worth It?

database-design, default-value, performance, query-performance, sql-server, t-sql

I usually design my databases following these rules:

  • Nobody other than db_owner and sysadmin has access to the database tables.
  • User roles are controlled at the application layer. I usually use one DB role to grant access to the views, stored procedures, and functions, but in some cases I add a second role to protect some stored procedures.
  • I use TRIGGERS to initially validate critical information:
CREATE TRIGGER <TriggerName>
ON <MyTable>
AFTER INSERT -- SQL Server DML triggers are AFTER or INSTEAD OF; there is no BEFORE
AS
    IF EXISTS (SELECT 1 
               FROM   inserted
               WHERE  Field1 <> <some_initial_value>
               OR     Field2 <> <other_initial_value>)
    BEGIN
        UPDATE mt
        SET    Field1 = <some_initial_value>,  
               Field2 = <other_initial_value>  
        FROM   <MyTable> mt
        INNER JOIN inserted ins -- limit the correction to the rows just inserted
                ON ins.<Key1> = mt.<Key1>
        ...  
    END
  • DML is executed using stored procedures:
sp_MyTable_Insert(@Field1, @Field2, @Field3, ...);
sp_MyTable_Delete(@Key1, @Key2, ...);
sp_MyTable_Update(@Key1, @Key2, @Field3, ...);
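
For reference, here is roughly what one of these wrapper procedures looks like (simplified; the table, column, and role names are illustrative), including the EXECUTE grant for the application role mentioned above:

-- Hypothetical insert wrapper for the signatures above.
CREATE PROCEDURE dbo.sp_MyTable_Insert
(
  @Field1 INT,
  @Field2 INT,
  @Field3 VARCHAR(50)
)
AS
BEGIN
  SET NOCOUNT ON;

  INSERT INTO dbo.MyTable ([Field1], [Field2], [Field3])
  VALUES (@Field1, @Field2, @Field3);
END;
GO

-- The application role gets EXECUTE only; no direct table access.
GRANT EXECUTE ON dbo.sp_MyTable_Insert TO [AppRole];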

Do you think that, in this scenario, it is worth it to use DEFAULT constraints, or am I adding extra, unnecessary work for the DB server?

Update

I understand that by using DEFAULT constraints I'm giving more information to anyone else who has to administer the database. But I'm mostly interested in performance.

I assume that the database always checks the default values, even if I supply the correct value, hence I'm doing the same job twice.

For example, is there a way to avoid the DEFAULT constraint within a trigger execution?

Best Answer

I assume that the database always checks the default values, even if I supply the correct value, hence I'm doing the same job twice.

Um, why would you assume that? ;-). Given that Defaults exist to provide a value when the column that they are attached to is not present in the INSERT statement, I would assume the exact opposite: that they are completely ignored if the associated column is present in the INSERT statement.

Fortunately, neither of us needs to assume anything due to this statement in the question:

I'm mostly interested in performance.

Questions about performance are nearly always testable. So we just need to come up with a test to allow SQL Server (the true authority here) to answer this question.

SETUP

Run the following once:

SET NOCOUNT ON;

-- DROP TABLE #HasDefault;
CREATE TABLE #HasDefault
(
  [HasDefaultID] INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
  [SomeInt] INT NULL,
  [SomeDate] DATETIME NOT NULL DEFAULT (GETDATE())
);

-- DROP TABLE #NoDefault;
CREATE TABLE #NoDefault
(
  [NoDefaultID] INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
  [SomeInt] INT NULL,
  [SomeDate] DATETIME NOT NULL
);

-- make sure that data file and Tran Log file are grown, if need be, ahead of time:
INSERT INTO #HasDefault ([SomeInt])
  SELECT TOP (2000000) NULL
  FROM   [master].sys.[all_columns] ac1
  CROSS JOIN [master].sys.[all_columns] ac2;

Execute tests 1A and 1B individually, not together, as running them together skews the timing. Run each one several times to get a sense of the average timing for each one.

Test 1A

TRUNCATE TABLE #HasDefault;
GO

PRINT '#HasDefault:';
SET STATISTICS TIME ON;
INSERT INTO #HasDefault ([SomeDate])
  SELECT TOP (1000000) '2017-05-15 10:11:12.000'
  FROM   [master].sys.[all_columns] ac1
  CROSS JOIN [master].sys.[all_columns] ac2;
SET STATISTICS TIME OFF;
GO

Test 1B

TRUNCATE TABLE #NoDefault;
GO

PRINT '#NoDefault:';
SET STATISTICS TIME ON;
INSERT INTO #NoDefault ([SomeDate])
  SELECT TOP (1000000) '2017-05-15 10:11:12.000'
  FROM   [master].sys.[all_columns] ac1
  CROSS JOIN [master].sys.[all_columns] ac2;
SET STATISTICS TIME OFF;
GO

Execute tests 2A and 2B individually, not together, as running them together skews the timing. Run each one several times to get a sense of the average timing for each one.

Test 2A

TRUNCATE TABLE #HasDefault;
GO

DECLARE @Counter INT = 0,
        @StartTime DATETIME,
        @EndTime DATETIME;

BEGIN TRAN;
--SET STATISTICS TIME ON;
SET @StartTime = GETDATE();
WHILE (@Counter < 100000)
BEGIN
  INSERT INTO #HasDefault ([SomeDate]) VALUES ('2017-05-15 10:11:12.000');
  SET @Counter = @Counter + 1;
END;
SET @EndTime = GETDATE();
--SET STATISTICS TIME OFF;
COMMIT TRAN;
PRINT DATEDIFF(MILLISECOND, @StartTime, @EndTime);

Test 2B

TRUNCATE TABLE #NoDefault;
GO

DECLARE @Counter INT = 0,
        @StartTime DATETIME,
        @EndTime DATETIME;

BEGIN TRAN;
--SET STATISTICS TIME ON;
SET @StartTime = GETDATE();
WHILE (@Counter < 100000)
BEGIN
  INSERT INTO #NoDefault ([SomeDate]) VALUES ('2017-05-15 10:11:12.000');
  SET @Counter = @Counter + 1;
END;
SET @EndTime = GETDATE();
--SET STATISTICS TIME OFF;
COMMIT TRAN;
PRINT DATEDIFF(MILLISECOND, @StartTime, @EndTime);

You should see that there is no real difference in timing between tests 1A and 1B, or between tests 2A and 2B. So, no, there is no performance penalty for having a DEFAULT defined but not used.

Also, besides merely documenting intended behavior, you need to keep in mind that it is mostly you who cares about the DML statements being completely contained within your stored procedures. Support folks do not care. Future developers might not be aware of your desire to have all DML encapsulated within those stored procedures, or might not care even if they do know. And whoever maintains this DB after you are gone (on either another project or another job) might not care, or might not be able to prevent the use of an ORM no matter how much they protest. So Defaults can help in that they give folks an "out" when doing an INSERT, especially an ad hoc INSERT done by a support rep, as that is one column they do not need to include (which is why I always use Defaults on audit date columns).
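
For example, with a hypothetical audit-style table (all names here are made up for illustration), the Default lets an ad hoc INSERT leave the audit column out entirely:

CREATE TABLE dbo.SupportNote
(
  [SupportNoteID] INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
  [Note] VARCHAR(500) NOT NULL,
  [CreatedOn] DATETIME NOT NULL DEFAULT (GETDATE()) -- audit date column
);

-- An ad hoc INSERT by a support rep: [CreatedOn] is omitted and the
-- Default fills it in correctly.
INSERT INTO dbo.SupportNote ([Note])
VALUES ('Customer called; issue resolved.');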


AND, it just occurred to me that it can be shown rather objectively whether or not a DEFAULT is checked when the associated column is present in the INSERT statement: simply provide an invalid value. The following test does just that:

-- DROP TABLE #BadDefault;
CREATE TABLE #BadDefault
(
  [BadDefaultID] INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
  [SomeInt] INT NOT NULL DEFAULT (1 / 0)
);


INSERT INTO #BadDefault ([SomeInt]) VALUES (1234); -- Success!!!
SELECT * FROM #BadDefault; -- just to be sure ;-)



INSERT INTO #BadDefault ([SomeInt]) VALUES (DEFAULT); -- Error:
/*
Msg 8134, Level 16, State 1, Line xxxxx
Divide by zero error encountered.
The statement has been terminated.
*/
SELECT * FROM #BadDefault; -- just to be sure ;-)
GO

As you can see, when a column (and a value, not the keyword DEFAULT) is provided, the Default is 100% ignored. We know this because the INSERT succeeds. But when the Default is used, there is an error because its expression is finally being evaluated.
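
Relatedly, you can see that the Default is stored as nothing more than an expression definition, sitting inert until it is actually needed, by querying the sys.default_constraints catalog view (going through tempdb here since #BadDefault is a temp table; run this in the same session that created it):

SELECT dc.[name], dc.[definition]
FROM   tempdb.sys.default_constraints dc
WHERE  dc.[parent_object_id] = OBJECT_ID(N'tempdb..#BadDefault');
-- the [definition] column shows the stored expression text; it is not
-- evaluated by this query, nor by INSERTs that supply the column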


Is there a way to avoid the DEFAULT constraint within a trigger execution?

While avoiding Default Constraints (at least in this context) is completely unnecessary, for the sake of completeness it can be noted that it would only be possible to "avoid" a Default Constraint within an INSTEAD OF Trigger, not within an AFTER Trigger. According to the documentation for CREATE TRIGGER:

If constraints exist on the trigger table, they are checked after the INSTEAD OF trigger execution and before the AFTER trigger execution. If the constraints are violated, the INSTEAD OF trigger actions are rolled back and the AFTER trigger is not fired.
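
That ordering can be seen with a small sketch (hypothetical names; a CHECK constraint is used purely because its violation is easy to observe): the INSTEAD OF trigger rewrites the row before the constraint is checked, so an otherwise-invalid INSERT succeeds:

CREATE TABLE dbo.OrderingDemo
(
  [OrderingDemoID] INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
  [SomeInt] INT NOT NULL CHECK ([SomeInt] > 0)
);
GO

CREATE TRIGGER dbo.trOrderingDemo_InsteadOfInsert
ON dbo.OrderingDemo
INSTEAD OF INSERT
AS
BEGIN
  SET NOCOUNT ON;
  -- The inner INSERT does not fire this trigger again (INSTEAD OF triggers
  -- are not called recursively on the same table).
  INSERT INTO dbo.OrderingDemo ([SomeInt])
    SELECT CASE WHEN ins.[SomeInt] > 0 THEN ins.[SomeInt] ELSE 1 END
    FROM   inserted ins;
END;
GO

INSERT INTO dbo.OrderingDemo ([SomeInt]) VALUES (-5); -- succeeds: the trigger stored 1
SELECT * FROM dbo.OrderingDemo;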

Of course, using an INSTEAD OF Trigger would require:

  1. Dropping the Default Constraint (Default Constraints cannot be disabled; ALTER TABLE ... NOCHECK CONSTRAINT applies only to FOREIGN KEY and CHECK Constraints)
  2. Creating an AFTER Trigger that re-creates the Constraint

However, I wouldn't exactly recommend doing this.