SQL deadlock on nonclustered key caused by two INSERTs and a CHECK CONSTRAINT on the same table
Problem
Been struggling with deadlocking on a table during INSERTs. It's a multi-tenant database and Read Committed Snapshot Isolation (RCSI) is enabled.
There is a CHECK CONSTRAINT upon INSERT to ensure there can be no overlapping bookings (by smalldatetime regardless of event) which executes a Scalar Valued Function and checks for a result of 1. This constraint looks up the same table with a READCOMMITTEDLOCK hint to check for violations of the logic where the ID (PK/clustered index) doesn't equal the ID of the newly inserted row.
The READCOMMITTEDLOCK hint was used due to RCSI being enabled and wanting to ensure we don't skip rows, which could lead to overlapping bookings.
The constraint does an INDEX SEEK on the index causing the deadlock: idx_report_foobar.
Any assistance would be greatly appreciated.
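For reference, the constraint wiring described above looks roughly like the following. This is a hypothetical reconstruction (the real UDF body and constraint definition were not posted); the parameter list and column names are assumptions taken from the deadlock graph below, and the function name comes from the answer.

```
-- Hypothetical sketch; the actual definitions were not posted.
CREATE FUNCTION dbo.CheckForDoubleBookings
(
    @id            INT,
    @userID        INT,
    @eventID       INT,
    @bookingStart  SMALLDATETIME,
    @bookingEnd    SMALLDATETIME
)
RETURNS BIT
AS
BEGIN
    IF EXISTS (SELECT *
               FROM  dbo.bookings a WITH (READCOMMITTEDLOCK)
               WHERE a.id <> @id
               AND   a.userID = @userID
               AND   @bookingStart < a.bookingEnd
               AND   a.bookingStart < @bookingEnd
               AND   a.eventID = @eventID)
        RETURN 1;  -- overlap found
    RETURN 0;
END;
GO

-- The constraint fails the INSERT whenever the UDF reports an overlap (returns 1):
ALTER TABLE dbo.bookings WITH CHECK
    ADD CONSTRAINT chk_bookings_NoOverlap
    CHECK (dbo.CheckForDoubleBookings(id, userID, eventID, bookingStart, bookingEnd) = 0);
```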
Here is the deadlock XML (adjusted to remove some of the logic and the names of table fields in the database):
```
IF EXISTS (SELECT *
           FROM  dbo.bookings a WITH (READCOMMITTEDLOCK)
           WHERE a.id <> @id
           AND   a.userID = @userID
           AND   @bookingStart < a.bookingEnd
           AND   a.bookingStart < @bookingEnd
           AND   a.eventID = @eventID
unknown
unknown
(@0 datetime2(7),@1 datetime2(7),@2 int,@3 int,@4 int,@5 int,@6 int,@7 nvarchar(4000),@8 datetime2(7),@9 nvarchar(50),@10 int,@11 nvarchar(255))INSERT [dbo].bookings
VALUES (@0, @1, @2, @3, @4, @5, @6, @7, @8, NULL, @9, @10, @11, NULL, NULL)

SELECT [Id]
FROM   [dbo].[bookings]
WHERE  @@ROWCOUNT > 0 AND [Id] = scope_identity()

IF EXISTS (SELECT *
           FROM  dbo.bookings a WITH (READCOMMITTEDLOCK)
           WHERE a.id <> @id
           AND   a.userID = @userID
           AND   @bookingStart <
```
Solution
It would be easier to narrow this down if the parameter values being passed into the dbo.CheckForDoubleBookings Scalar UDF were available, but I think there is enough info here to offer an educated guess.

You have two different rows being fought over. The process itself does the following:

1. Insert a row.
2. The CHECK CONSTRAINT fires: within the explicit / implicit / auto-commit transaction that is the INSERT statement, verify the condition via the scalar UDF.
3. If the UDF returns a 1, implicitly issue a ROLLBACK and raise an exception.
4. COMMIT.

So, in two different sessions, at the same time, and for the same userID, you have Step 1 (above) occur, which inserts a row and maintains an eXclusive lock on that row. Then, in each session, prior to completing the INSERT, Step 2 occurs, which processes the CHECK CONSTRAINT that checks for potentially matching rows (same four properties but with a different id). You are only reading "committed" data, but the new rows have not committed yet, since each one is waiting upon verification of its respective CHECK CONSTRAINT.

The problem here is that the verification process is happening at the wrong point in time / sequence in the workflow. Because it is being handled via a CHECK CONSTRAINT, the row has already been added (just not committed), which is how it gets an id assigned to pass into the scalar UDF. And then the process doesn't want to commit until it is certain that no matching entry exists, yet it can't see the other entry, since that one also has not yet committed, for the same reason.

Here are some options:
- The simplest, yet certainly non-ideal, option is to switch to reading uncommitted / dirty data. For this you would switch to using the NOLOCK hint instead of the current READCOMMITTEDLOCK. The problem here is that while it would work most of the time, you could have a scenario where either:

  - both sessions see the other and decide to abort, or
  - one session sees the other and decides to abort, but then the first session has its INSERT aborted for a completely different reason.

  In both cases, neither entry gets saved.
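Under this option the only change is the table hint inside the UDF's existence check; a sketch (swapping hints only, with everything else as in the deadlock graph above):

```
-- NOLOCK variant of the same existence check: it can see rows that other
-- sessions have inserted but not yet committed (dirty reads).
IF EXISTS (SELECT *
           FROM  dbo.bookings a WITH (NOLOCK)
           WHERE a.id <> @id
           AND   a.userID = @userID
           AND   @bookingStart < a.bookingEnd
           AND   a.bookingStart < @bookingEnd
           AND   a.eventID = @eventID)
    RETURN 1;  -- overlap (possibly uncommitted) found
```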
- Try removing the CHECK CONSTRAINT and adding the IGNORE_DUP_KEY = ON option to that nonclustered index. With that in place, if both INSERTs happen at the same time, one would commit and the other would silently fail (well, you would get a warning) and not insert anything. That should be fine since you check @@ROWCOUNT afterwards.

  The requirement is that the index needs to be UNIQUE. If eventID by itself is unique, then just add the UNIQUE keyword to the index definition. If eventID by itself is not unique, then move what are currently the INCLUDE columns into being key columns until you can establish uniqueness.
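Expressed as DDL, option #2 might look like the following sketch. The key-column list here is an assumption (the real idx_report_foobar definition was not posted) and only works if those columns can genuinely form a unique key:

```
-- Hypothetical: assumes (userID, eventID, bookingStart, bookingEnd) is unique per row.
CREATE UNIQUE NONCLUSTERED INDEX idx_report_foobar
    ON dbo.bookings (userID, eventID, bookingStart, bookingEnd)
    WITH (IGNORE_DUP_KEY = ON, DROP_EXISTING = ON);
```

Keep in mind that IGNORE_DUP_KEY only rejects exact duplicates of the key, not overlapping date ranges, which is why option #4 below exists for the case where the dates can vary.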
- Handle this logic outside of the INSERT statement. You can test ahead of time for the existence of a matching row, and if nothing is found, then do the INSERT. Since that "check" can still miss another INSERT of those same values executing at the exact same moment, one of the two will fail. That is handled by wrapping the INSERT in a TRY...CATCH construct and ignoring the error if it occurs:

  ```
  DECLARE @NewID INT;

  BEGIN TRY
      IF (NOT EXISTS(SELECT * FROM dbo.Table WHERE columns = @parameters))
      BEGIN
          INSERT INTO dbo.Table (columns) VALUES (@parameters);
          SET @NewID = SCOPE_IDENTITY();
      END;
  END TRY
  BEGIN CATCH
      -- Either return a custom error, or handle it in a different way, such as
      -- selecting and returning the `id` matching the same criteria that
      -- was the basis of the failed `INSERT`:
      SELECT @NewID = [id]
      FROM   dbo.Table
      WHERE  columns = @parameters;
  END CATCH;
  ```
- If you need to allow for variations in the "start" and "end" dates for the @userID, @eventID combination, and the @userID, @eventID combination is not unique (hence option #2 above won't work), then you might need to consider using an app lock, where the "resource" is a custom string comprised of either the @userID, @eventID combination or maybe just userID. This would prevent more than one lock at a time for that combination. But that combination already existing in the table for past dates, prior to the currently requested dates, would not be an issue, because an app lock only locks the "resource", which is a specific string, and has nothing to do with the data in the table (in fact, the app lock wouldn't even know of the table).

  The idea here is to first create the "app lock" to force single-threading for just this userID, so that the process can a) check to see if overlapping dates exist, and b) insert if no overlapping dates are found. Other sessions for the same userID will be blocked until the "app lock" is released, at which point they (one at a time per userID) will check to see if overlapping dates exist.

  For more info on app locks, please see the following two answers of mine, also here on DBA.SE (both of which have links to t
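The "other options" placeholder in the snippet below maps onto sp_getapplock's actual parameters. A typical transaction-scoped, exclusive call looks like this sketch (the timeout value is an arbitrary example):

```
-- sp_getapplock must run inside a transaction when @LockOwner = 'Transaction';
-- the lock is then released automatically at COMMIT / ROLLBACK.
EXEC sys.sp_getapplock
     @Resource    = @Resource,
     @LockMode    = N'Exclusive',
     @LockOwner   = N'Transaction',
     @LockTimeout = 5000;  -- wait up to 5 seconds for the lock
```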
Code Snippets
```
-- Either prevent multi-row inserts OR remove this "IF" block and wrap the rest
-- of the logic in a cursor that will process 1 row at a time from "inserted".
IF ((SELECT COUNT(*) FROM inserted) > 1)
BEGIN
    ROLLBACK;
    RAISERROR(N'Slow your roll, yo! One event at a time, ya dig?', 16, 1);
END;

DECLARE @Resource NVARCHAR(150);
SET @Resource = N'New Booking for: ' + CONVERT(NVARCHAR(20), @userID);

EXEC sys.sp_getapplock @Resource, other options;

IF (NOT EXISTS (SELECT *
                FROM  dbo.bookings a WITH (READCOMMITTEDLOCK)
                WHERE a.id <> @id
                AND   a.userID = @userID
                AND   @bookingStart < a.bookingEnd
                AND   a.bookingStart < @bookingEnd
                AND   a.eventID = @eventID))
BEGIN
    INSERT INTO dbo.bookings (columns) VALUES (@values);
END;

EXEC sys.sp_releaseapplock @Resource;
```
Context
StackExchange Database Administrators Q#199824, answer score: 5