DEFAULT CONSTRAINT, worth it?
Problem
I usually design my databases following these rules:
- Nobody other than db_owner and sysadmin has access to the database tables.
- User roles are controlled at the application layer. I usually use one DB role to grant access to the views, stored procedures, and functions, but in some cases I add a second role to protect certain stored procedures.
- I use TRIGGERS to initially validate critical information.
CREATE TRIGGER [tr_MyTable_ValidateInsert] -- trigger name is illustrative
ON [MyTable]
AFTER INSERT -- SQL Server supports AFTER and INSTEAD OF triggers, not BEFORE
AS
IF EXISTS (SELECT 1
           FROM inserted
           WHERE Field1 <> <expected value>
              OR Field2 <> <expected value>)
BEGIN
    UPDATE MyTable
    SET Field1 = <corrected value>,
        Field2 = <corrected value>
    ...
END;
- DML is executed using stored procedures:
sp_MyTable_Insert(@Field1, @Field2, @Field3, ...);
sp_MyTable_Delete(@Key1, @Key2, ...);
sp_MyTable_Update(@Key1, @Key2, @Field3, ...);
Do you think that, in this scenario, it is worth using DEFAULT CONSTRAINTs, or am I just adding extra and unnecessary work for the DB server?
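As an illustration of this pattern, the insert procedure might look something like the sketch below (the procedure and column names are assumptions based on the signatures above, not the actual code); note that an audit-style column with a DEFAULT can simply be omitted from the column list:

```sql
-- Sketch only: names follow the sp_MyTable_* signatures above and are hypothetical.
CREATE PROCEDURE dbo.sp_MyTable_Insert
    @Field1 INT,
    @Field2 VARCHAR(50),
    @Field3 VARCHAR(50)
AS
BEGIN
    SET NOCOUNT ON;
    -- A hypothetical [CreatedDate] column is deliberately left out of the
    -- column list: if it carries a DEFAULT (GETDATE()), the engine fills
    -- it in, and the procedure never needs to know about it.
    INSERT INTO dbo.MyTable ([Field1], [Field2], [Field3])
    VALUES (@Field1, @Field2, @Field3);
END;
```

Callers would then invoke it as, e.g., `EXEC dbo.sp_MyTable_Insert @Field1 = 1, @Field2 = 'a', @Field3 = 'b';`.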
Update
I understand that by using DEFAULT constraints I'm giving more information to anyone else who must administer the database. But I'm mostly interested in performance.
I assume that the database is always checking the default values, even if I supply the correct value, hence I'm doing the same job twice.
For example, is there a way to avoid DEFAULT constraints within a trigger execution?
Solution
I assume that the database is always checking the default values, even if I supply the correct value, hence I'm doing the same job twice.
Um, why would you assume that? ;-). Given that Defaults exist to provide a value when the column that they are attached to is not present in the INSERT statement, I would assume the exact opposite: that they are completely ignored if the associated column is present in the INSERT statement.
Fortunately, neither of us needs to assume anything, due to this statement in the question:
I'm mostly interested in performance.
Questions about performance are nearly always testable. So we just need to come up with a test to allow SQL Server (the true authority here) to answer this question.
SETUP
Run the following once:
SET NOCOUNT ON;
-- DROP TABLE #HasDefault;
CREATE TABLE #HasDefault
(
[HasDefaultID] INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
[SomeInt] INT NULL,
[SomeDate] DATETIME NOT NULL DEFAULT (GETDATE())
);
-- DROP TABLE #NoDefault;
CREATE TABLE #NoDefault
(
[NoDefaultID] INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
[SomeInt] INT NULL,
[SomeDate] DATETIME NOT NULL
);
-- make sure that data file and Tran Log file are grown, if need be, ahead of time:
INSERT INTO #HasDefault ([SomeInt])
SELECT TOP (2000000) NULL
FROM [master].sys.[all_columns] ac1
CROSS JOIN [master].sys.[all_columns] ac2;
Execute tests 1A and 1B individually, not together, as that skews the timing. Run each one several times to get a sense of the average timing for each one.
Test 1A
TRUNCATE TABLE #HasDefault;
GO
PRINT '#HasDefault:';
SET STATISTICS TIME ON;
INSERT INTO #HasDefault ([SomeDate])
SELECT TOP (1000000) '2017-05-15 10:11:12.000'
FROM [master].sys.[all_columns] ac1
CROSS JOIN [master].sys.[all_columns] ac2;
SET STATISTICS TIME OFF;
GO
Test 1B
TRUNCATE TABLE #NoDefault;
GO
PRINT '#NoDefault:';
SET STATISTICS TIME ON;
INSERT INTO #NoDefault ([SomeDate])
SELECT TOP (1000000) '2017-05-15 10:11:12.000'
FROM [master].sys.[all_columns] ac1
CROSS JOIN [master].sys.[all_columns] ac2;
SET STATISTICS TIME OFF;
GO
Execute tests 2A and 2B individually, not together, as that skews the timing. Run each one several times to get a sense of the average timing for each one.
Test 2A
TRUNCATE TABLE #HasDefault;
GO
DECLARE @Counter INT = 0,
@StartTime DATETIME,
@EndTime DATETIME;
BEGIN TRAN;
--SET STATISTICS TIME ON;
SET @StartTime = GETDATE();
WHILE (@Counter < 100000)
BEGIN
INSERT INTO #HasDefault ([SomeDate]) VALUES ('2017-05-15 10:11:12.000');
SET @Counter = @Counter + 1;
END;
SET @EndTime = GETDATE();
--SET STATISTICS TIME OFF;
COMMIT TRAN;
PRINT DATEDIFF(MILLISECOND, @StartTime, @EndTime);
Test 2B
TRUNCATE TABLE #NoDefault;
GO
DECLARE @Counter INT = 0,
@StartTime DATETIME,
@EndTime DATETIME;
BEGIN TRAN;
--SET STATISTICS TIME ON;
SET @StartTime = GETDATE();
WHILE (@Counter < 100000)
BEGIN
INSERT INTO #NoDefault ([SomeDate]) VALUES ('2017-05-15 10:11:12.000');
SET @Counter = @Counter + 1;
END;
SET @EndTime = GETDATE();
--SET STATISTICS TIME OFF;
COMMIT TRAN;
PRINT DATEDIFF(MILLISECOND, @StartTime, @EndTime);
You should see that there is no real difference in timing between tests 1A and 1B, or between tests 2A and 2B. So, no, there is no performance penalty to having a DEFAULT defined but not used.
Also, besides merely documenting intended behavior, you need to keep in mind that it is mostly you who cares about the DML statements being completely contained within your stored procedures. Support folks do not care. Future developers might not be aware of your desire to have all DML encapsulated within those stored procedures, or might not care even if they do know. And whoever maintains this DB after you are gone (moving on to either another project or another job) might not care, or might not be able to prevent the use of an ORM no matter how much they protest. So Defaults can help in that they give folks an "out" when doing an INSERT, especially an ad-hoc INSERT done by a support rep, as that is one column they do not need to include (which is why I always use Defaults on audit date columns).
AND, it just occurred to me that it can be shown rather objectively whether or not a DEFAULT is checked when the associated column is present in the INSERT statement: simply provide an invalid value. The following test does just that:
-- DROP TABLE #BadDefault;
CREATE TABLE #BadDefault
(
[BadDefaultID] INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
[SomeInt] INT NOT NULL DEFAULT (1 / 0)
);
INSERT INTO #BadDefault ([SomeInt]) VALUES (1234); -- Success!!!
SELECT * FROM #BadDefault; -- just to be sure ;-)
INSERT INTO #BadDefault ([SomeInt]) VALUES (DEFAULT); -- Error:
/*
Msg 8134, Level 16, State 1, Line xxxxx
Divide by zero error encountered.
The statement has been terminated.
*/
SELECT * FROM #BadDefault; -- just to be sure ;-)
GO
As you can see, when a column (and a value, not the keyword DEFAULT) is present in the INSERT statement, the DEFAULT expression is not evaluated at all.
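To round this out, a short sketch (reusing the #HasDefault table from the setup above) of the three cases in which the DEFAULT on [SomeDate] actually does fire; in every other case the supplied value wins and the DEFAULT is ignored:

```sql
-- All three statements let the DEFAULT (GETDATE()) supply [SomeDate]:
INSERT INTO #HasDefault ([SomeInt]) VALUES (42);   -- column omitted entirely
INSERT INTO #HasDefault ([SomeInt], [SomeDate])
    VALUES (42, DEFAULT);                          -- explicit DEFAULT keyword
INSERT INTO #HasDefault DEFAULT VALUES;            -- no column list, no values
```

The last form works here only because every column of #HasDefault is an IDENTITY, is nullable, or has a DEFAULT.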
Context
StackExchange Database Administrators Q#173435, answer score: 25