Asked to Not Use Transactions and to Use A Workaround to Simulate One
Problem
I've been developing T-SQL for several years, always digging in further and continuing to learn all I can about the language. I recently started at a new company and have received what I think is an odd suggestion regarding transactions: don't ever use them. Instead, use a workaround that simulates a transaction. This is coming from our DBA, who works in one database with a lot of transactions and, consequently, a lot of blocking. The database I primarily work in does not suffer from this issue, and I can see that transactions have been used in it in the past.
I understand that blocking is expected with transactions, as it's in their nature, and if you can get away without using one, by all means do it. But I have many occasions where every statement MUST execute successfully; if one fails, they must all fail to commit.
I've always kept the scope of my transactions as narrow as possible, always used them in conjunction with SET XACT_ABORT ON, and always within a TRY/CATCH.
Example:
CREATE SCHEMA someschema;
GO

CREATE TABLE someschema.tableA
(
    id   INT         NOT NULL IDENTITY(1, 1) PRIMARY KEY,
    ColA VARCHAR(10) NOT NULL
);
GO

CREATE TABLE someschema.tableB
(
    id   INT         NOT NULL IDENTITY(1, 1) PRIMARY KEY,
    ColB VARCHAR(10) NOT NULL
);
GO

CREATE PROCEDURE someschema.ProcedureName @ColA VARCHAR(10),
                                          @ColB VARCHAR(10)
AS
SET NOCOUNT, XACT_ABORT ON;
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO someschema.tableA(ColA)
        VALUES(@ColA);

        INSERT INTO someschema.tableB(ColB)
        VALUES(@ColB);

        --Implement error
        SELECT 1 / 0;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
        BEGIN
            ROLLBACK TRANSACTION;
        END;
        THROW;
        RETURN;
    END CATCH;
END;
GO
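Calling it with the deliberate divide-by-zero left in place shows the all-or-nothing behavior I need (a quick check, assuming the tables above start out empty):

EXEC someschema.ProcedureName @ColA = 'abc', @ColB = 'def';  -- raises error 8134 (divide by zero)
GO
-- Both inserts were rolled back together:
SELECT COUNT(*) AS ARows FROM someschema.tableA;  -- 0
SELECT COUNT(*) AS BRows FROM someschema.tableB;  -- 0
GO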
Here is what they suggested that I do. (Only the opening of this procedure survived here; the body below is a reconstruction of the hand-rolled cleanup pattern they proposed, so treat the details as approximate.)
GO
CREATE PROCEDURE someschema.ProcedureNameNoTransaction @ColA VARCHAR(10),
                                                       @ColB VARCHAR(10)
AS
SET NOCOUNT ON;
BEGIN
    BEGIN TRY
        INSERT INTO someschema.tableA(ColA)
        VALUES(@ColA);
        DECLARE @tableAid INT = SCOPE_IDENTITY();

        INSERT INTO someschema.tableB(ColB)
        VALUES(@ColB);
        DECLARE @tableBid INT = SCOPE_IDENTITY();

        --Implement error
        SELECT 1 / 0;
    END TRY
    BEGIN CATCH
        -- Hand-rolled "rollback" (reconstructed): delete whatever the TRY inserted.
        DELETE FROM someschema.tableA WHERE id = @tableAid;
        DELETE FROM someschema.tableB WHERE id = @tableBid;
        THROW;
    END CATCH;
END;
GO
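To be fair, for a purely "soft" error like the divide by zero above, the workaround does clean up after itself, which I suspect is why it looks adequate to them (a quick check, again assuming empty tables):

EXEC someschema.ProcedureNameNoTransaction @ColA = 'abc', @ColB = 'def';  -- raises error 8134
GO
-- The CATCH block deleted the rows again:
SELECT COUNT(*) AS ARows FROM someschema.tableA;  -- 0
SELECT COUNT(*) AS BRows FROM someschema.tableB;  -- 0
GO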
Solution
You cannot not use transactions in SQL Server (and probably any other proper RDBMS). In the absence of explicit transaction boundaries (begin transaction ... commit), each SQL statement starts a new transaction, which is implicitly committed (or rolled back) after the statement completes (or fails).
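You can watch this with @@TRANCOUNT (a quick illustration of mine, using the tables from the question):

SELECT @@TRANCOUNT;                  -- 0: nothing explicit is open...
INSERT INTO someschema.tableA(ColA)
VALUES('auto');                      -- ...but this still ran in a transaction of its
                                     -- own, committed the moment the statement finished
BEGIN TRANSACTION;
SELECT @@TRANCOUNT;                  -- 1: subsequent statements now share one transaction
ROLLBACK TRANSACTION;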
The transaction simulation suggested by the person who presents him- or herself as your "DBA" fails to ensure three of the four requisite properties of transaction processing, because it addresses only "soft" errors and is incapable of dealing with "hard" errors such as network disconnects, power outages, disk failures, and so on.
- Atomicity: fail. If a "hard" error occurs somewhere in the middle of your pseudo-transaction, the change will be non-atomic.
- Consistency: fail. It follows from the above that your data will be left in an inconsistent state following a "hard" error.
- Isolation: fail. It is possible that a concurrent pseudo-transaction changes some of the data modified by your pseudo-transaction before yours completes (see the sketch after this list).
- Durability: success. Changes you make will be durable; the database server will ensure that. This is the only thing your colleague's approach cannot screw up.
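To make the Atomicity and Isolation failures concrete, here is a sketch (mine, reusing the question's tables) of the window a pseudo-transaction leaves open:

-- Session 1: the workaround's two INSERTs, with the gap stretched out:
INSERT INTO someschema.tableA(ColA) VALUES('half');
WAITFOR DELAY '00:00:30';            -- a "hard" failure here (KILL <spid>, power loss)
                                     -- means the CATCH block never runs
INSERT INTO someschema.tableB(ColB) VALUES('half');

-- Session 2, during the delay:
SELECT COUNT(*) FROM someschema.tableA WHERE ColA = 'half';
-- 1: the half-done change is visible (no isolation), because the first INSERT
-- autocommitted on its own; kill session 1 now and the row stays (no atomicity).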
Locks are a widely used and empirically successful method of ensuring the ACIDity of transactions in all sorts of RDBMSes (this site being an example). I find it very unlikely that a random DBA can come up with a better solution to the concurrency problem than hundreds, possibly thousands, of computer scientists and engineers who have been building some interesting database systems over the last, what, 50? 60 years? (I realise this is somewhat fallacious as an "appeal to authority" argument, but I'll stick to it regardless.)
In conclusion, ignore your "DBA"'s advice if you can, fight it if you have the spirit, and come back here with specific concurrency problems if they arise.
Context
StackExchange Database Administrators Q#248677, answer score: 63