
How is SQL Server returning both a new value and old value during an UPDATE?

Submitted by: @import:stackexchange-dba

Problem

We've had issues, during high concurrency, of queries returning nonsensical results - results that violate the logic of the queries being issued. It took a while to reproduce the issue, but I've managed to distil the reproducible problem down to a few handfuls of T-SQL.


Note: The part of the live system having the issue is composed of 5 tables, 4 triggers, 2 stored procedures, and 2 views. I've simplified down the real system into something much more manageable for a posted question. Things have been pared down, columns removed, stored procedures made inline, views turned into common table expressions, values of columns changed. This is all a long way of saying that while what follows reproduces an error, it may be more difficult to understand. You'll have to refrain from wondering why something is structured the way it is. I'm here trying to figure out why the error condition reproducibly happens in this toy model.

```
/*
The idea in this system is that people are able to take days off.
We create a table to hold these "allocations",
and declare sample data that only 1 production operator
is allowed to take time off:
*/
IF OBJECT_ID('Allocations') IS NOT NULL DROP TABLE Allocations;
CREATE TABLE [dbo].Allocations (
    JobName varchar(50) NOT NULL PRIMARY KEY,
    Available int NOT NULL
);
--Sample allocation; there is 1 available slot for this job
INSERT INTO Allocations (JobName, Available)
VALUES ('Production Operator', 1);

/*
Then we open up the system to the world, and everyone puts in for time.
We store these requests for time off as "transactions".
Two production operators requested time off.
We create sample data, and note that one of the users
created their transaction first (by earlier CreatedDate):
*/
IF OBJECT_ID('Transactions') IS NOT NULL DROP TABLE Transactions;
CREATE TABLE [dbo].Transactions (
    JobName varchar(50) NOT NULL,
    ApprovalStatus varchar(50) NOT NULL,
    CreatedDate datetime NOT NULL
);
--Two requests for the same job; one user's transaction
--has the earlier CreatedDate
INSERT INTO Transactions (JobName, ApprovalStatus, CreatedDate)
VALUES
    ('Production Operator', 'WaitingList', '20140101 08:00'),
    ('Production Operator', 'WaitingList', '20140101 09:00');
```

Solution

The default READ COMMITTED transaction isolation level guarantees that your transaction will not read uncommitted data. It does not guarantee that any data you read will remain the same if you read it again (repeatable reads) or that new data will not appear (phantoms).
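For instance, under the default level the same row read twice within one transaction can legitimately return two different values. A sketch, using the Allocations table from the question (the concurrent UPDATE in the comment is illustrative):

```
--Session 1, default READ COMMITTED isolation:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION;

    SELECT Available
    FROM Allocations
    WHERE JobName = 'Production Operator'; --e.g. returns 1

    --Meanwhile another session commits:
    --  UPDATE Allocations SET Available = 0
    --  WHERE JobName = 'Production Operator';

    SELECT Available
    FROM Allocations
    WHERE JobName = 'Production Operator'; --may now return 0

COMMIT TRANSACTION;
```

Neither read saw uncommitted data, so the READ COMMITTED guarantee was honoured; the two reads simply happened at different points in time.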

These same considerations apply to multiple data accesses within the same statement.

Your UPDATE statement produces a plan that accesses the Transactions table more than once, so it is susceptible to effects caused by non-repeatable reads and phantoms.

There are multiple ways for this plan to produce results you do not expect under READ COMMITTED isolation.

An example

The first Transactions table access finds rows that have a status of WaitingList. The second access counts the number of entries (for the same job) that have a status of Booked. The first access may return only the later transaction (the earlier one is Booked at this point). When the second (counting) access occurs, the earlier transaction has been changed to WaitingList. The later row therefore qualifies for the update to Booked status.
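The question's full UPDATE is not reproduced in this entry, but based on that description it has roughly this shape (a sketch only; the join and correlated subquery are assumptions built from the setup tables above, not the exact query from the question):

```
--Sketch: move a waiting-list request to Booked only while the job
--still has fewer Booked transactions than its allocation allows.
UPDATE T
SET ApprovalStatus = 'Booked'
FROM Transactions AS T
JOIN Allocations AS A
    ON A.JobName = T.JobName
WHERE T.ApprovalStatus = 'WaitingList'   --first access: find candidates
AND A.Available >
    (
        SELECT COUNT(*)                  --second access: count bookings
        FROM Transactions AS T2
        WHERE T2.JobName = T.JobName
        AND T2.ApprovalStatus = 'Booked'
    );
```

The two separate accesses to Transactions are the crux: a concurrent status change committed between them lets a row qualify that should not.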

Solutions

There are several ways to set the isolation semantics to get the results you are after, primarily by using the right isolation level.
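One option (the answer does not prescribe a specific level here, so treat this as an illustration) is SERIALIZABLE, which prevents both non-repeatable reads and phantoms for the duration of the transaction:

```
--SERIALIZABLE makes the statement's reads repeatable and blocks
--phantoms, at the cost of extra blocking under concurrency.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

    --run the UPDATE here; both accesses to Transactions
    --now see a stable set of rows

COMMIT TRANSACTION;
```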

Using READ_COMMITTED_SNAPSHOT would not be a solution. This implementation of read committed isolation does provide statement-level read consistency (where non-repeatable reads and phantoms are not possible), but SQL Server takes update locks when locating rows to update. This means the access path used to locate rows to update always reads the most current committed data, while a second read of the same data structure in the same statement would read versioned rows. Mixing current and versioned data within a single statement can still produce the kind of inconsistency described above.

For more details, see my article Data Modifications under Read Committed Snapshot Isolation.

The SNAPSHOT isolation level would provide a consistent snapshot as of the start of the transaction for all reads, but you may encounter write conflict errors.
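A sketch of running under SNAPSHOT isolation (the option must first be enabled at the database level):

```
--SNAPSHOT must be allowed on the database before sessions can use it:
ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;

    --all reads see the database as of the transaction start;
    --if another session modifies a row this transaction updates,
    --SQL Server raises error 3960 (snapshot update conflict)
    --and the transaction must be retried

COMMIT TRANSACTION;
```

The write-conflict error is the trade-off: the application needs retry logic, but it never silently books more slots than are available.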

Other remarks

I have to say, though, that I would not have designed the schema or query this way. There is rather more work involved than should be necessary to meet the stated business requirement. Perhaps this is partly the result of the simplifications in the question; in any case, that is a separate question.

The behaviour you are seeing does not represent a bug of any kind. The scripts produce correct results given the requested isolation semantics. Concurrency effects like this are also not limited to plans which access data multiple times.

The read committed isolation level provides many fewer guarantees than are commonly assumed. For example, skipping rows and/or reading the same row more than once is perfectly possible.

Context

StackExchange Database Administrators Q#58042, answer score: 12
