transaction across multiple databases - what is the overhead?
Problem
I run the following update, which is one single transaction across 2 databases.
Please note I have not committed the transaction.
--================================================================
--RUN THE UPDATE
--================================================================
BEGIN TRANSACTION T1_radhe
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ

SELECT @@TRANCOUNT

BEGIN TRY

    UPDATE T
       SET QtyOnOrder = 0,
           DueDate    = NULL
      FROM DEStock.DBO.ItemStock T
     INNER JOIN TABLEBACKUPS.DBO.__RADHE R
        ON T.ITEMNO = R.ITEMNO

    PRINT CAST(@@ROWCOUNT AS VARCHAR) + ' updating DEStock.DBO.ItemStock '

    UPDATE T
       SET QtyOnOrder = 0,
           DueDate    = NULL
      FROM USStock.DBO.ItemStock T
     INNER JOIN TABLEBACKUPS.DBO.__RADHE R
        ON T.ITEMNO = R.ITEMNO

    PRINT CAST(@@ROWCOUNT AS VARCHAR) + ' updating USStock.DBO.ItemStock '

    --COMMIT TRANSACTION T1_radhe

END TRY
BEGIN CATCH
    DECLARE @ErrorMessage  NVARCHAR(4000);
    DECLARE @ErrorSeverity INT;
    DECLARE @ErrorState    INT;

    WHILE @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    SELECT
        @ErrorMessage  = ERROR_MESSAGE(),
        @ErrorSeverity = ERROR_SEVERITY(),
        @ErrorState    = ERROR_STATE();

    RAISERROR (@ErrorMessage,  -- Message text.
               @ErrorSeverity, -- Severity.
               @ErrorState     -- State.
              );
END CATCH

I get the following result from the update above:

(196 row(s) affected) 196 updating DEStock.DBO.ItemStock
(196 row(s) affected) 196 updating USStock.DBO.ItemStock

But when I look at the transaction logs of the databases in question, I get the same view in their logs: the SPID in each database is the same, 6434; however, the transaction_id seems to be different in each database.

How is it so?

Also, when running the following script AFTER I did the ROLLBACK on the updates above, I was still getting the same results as before the ROLLBACK. I had to add the option RECOMPILE so that
Solution
The SPID in each database is the same, 6434; however, the transaction_id seems to be different in each database. How is it so?
What you have there is wrong (maybe you are looking at an older transaction that has the same name).
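One way to verify this yourself (a sketch of mine, not part of the original answer) is to query the transaction DMVs while the transaction from the question is still open. Run it from the same session that owns the transaction; a single server-level transaction_id maps to one row per participating database:

```sql
-- Sketch: map this session's single transaction_id to its
-- per-database work via the transaction DMVs.
SELECT st.session_id,
       st.transaction_id,
       DB_NAME(dt.database_id)                  AS database_name,
       dt.database_transaction_log_record_count AS log_records,
       dt.database_transaction_log_bytes_used   AS log_bytes
FROM sys.dm_tran_session_transactions AS st
INNER JOIN sys.dm_tran_database_transactions AS dt
        ON dt.transaction_id = st.transaction_id
WHERE st.session_id = @@SPID;
```

Note that fn_dblog shows each database's own log-level transaction identifiers, which differ per log; the server-level transaction_id above is the one that spans both databases.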
From my test, I can see that the transaction IDs match (see below).

Using sp_WhoIsActive with @get_transaction_info = 1, I get the following transaction detail:

test_HV: 15 (1 kB), tempdb: 0 (0 kB), AdventureWorks_2005: 0 (0 kB)

You can easily check whether a database is part of a cross-database transaction using:
-- Check whether this database is involved in cross-database transactions
IF EXISTS (
    SELECT *
    FROM fn_dblog(NULL, NULL)
    WHERE Operation = 'LOP_PREP_XACT'
      AND [Master DBID] <> 0
)
    PRINT 'Based on the active part of the transaction log read, there is evidence that this database has participated in cross-database transactions.'
ELSE
    PRINT 'Based on the active part of the transaction log read, there is no evidence of this database having participated in cross-database transactions.'

-- Check for Distributed Transaction Coordinator (DTC) involvement
IF EXISTS (
    SELECT *
    FROM fn_dblog(NULL, NULL)
    WHERE Operation = 'LOP_PREP_XACT'
      AND [Master DBID] = 0
)
    PRINT 'Based on the active part of the transaction log read, there is evidence that this database has participated in distributed transactions.'
ELSE
    PRINT 'Based on the active part of the transaction log read, there is no evidence of this database having participated in distributed transactions.'

what is the overhead of having transactions across multiple databases?
A transaction itself does not have any cost associated with it (or, to be more specific, the cost is very negligible). To quote from my answer:
Transactions are needed to take the database from one consistent state into another consistent state. Transactions have no cost as there is no alternative to transactions.
Any modification to the database will generate log records and use memory and CPU, along with tempdb space (for example, if you do a big sort before updating or deleting records you may see spills to tempdb, or if you are using snapshot isolation the tempdb version store is used, etc.).
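To see how much log the modifications actually generate, a quick sketch (mine, not from the original answer; assumes SQL Server 2012 or later) is to snapshot log usage in each participating database before and after the update:

```sql
-- Sketch: run in each database (e.g. USE DEStock; then USE USStock;)
-- before and after the update, and compare used_log_space_in_bytes.
SELECT DB_NAME(database_id)               AS database_name,
       total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_bytes / 1048576.0 AS used_log_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```

The DMV reports on the current database only, which is why it has to be run in each database involved in the transaction.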
is there a way to make these transactions run quicker?
Yes, do your updates in batches or chunks: first select only the rows that need changing, update one batch at a time, and make sure you have proper indexes on the tables involved, so that SQL Server can just seek to the rows and update them.
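A minimal sketch of the batching idea (assuming the ItemStock and __RADHE tables from the question; the batch size and the WHERE predicate that keeps already-updated rows out of later batches are illustrative):

```sql
-- Batched update sketch: repeat until no rows qualify.
-- TOP in the UPDATE limits each batch; the WHERE clause
-- excludes rows a previous batch has already fixed.
DECLARE @BatchSize INT = 5000;

WHILE 1 = 1
BEGIN
    UPDATE TOP (@BatchSize) T
       SET QtyOnOrder = 0,
           DueDate    = NULL
      FROM DEStock.DBO.ItemStock T
     INNER JOIN TABLEBACKUPS.DBO.__RADHE R
        ON T.ITEMNO = R.ITEMNO
     WHERE T.QtyOnOrder <> 0
        OR T.DueDate IS NOT NULL;

    IF @@ROWCOUNT = 0
        BREAK;
END
```

When this loop is not wrapped in an outer transaction, each batch commits on its own, which keeps individual transactions, lock footprints, and log growth small.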
I was thinking about DISTRIBUTED TRANSACTIONS.
I would avoid distributed transactions, as they are not supported with database mirroring or AlwaysOn Availability Groups (SQL Server 2016 added limited support), and if you are unlucky you will end up with an orphaned DTC transaction SPID.
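For context on that last point, orphaned DTC transactions typically show up in the lock DMVs with a session_id of -2, and are killed by their unit-of-work (UOW) GUID rather than by session ID. A sketch (not from the original answer) for spotting them:

```sql
-- Sketch: orphaned DTC transactions appear as session_id = -2;
-- request_owner_guid is the UOW identifier that KILL accepts.
SELECT DISTINCT request_owner_guid AS uow
FROM sys.dm_tran_locks
WHERE request_session_id = -2;

-- To clear one, after verifying it really is orphaned:
-- KILL '<uow guid from the query above>';
```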
As a side note:

- If you are using sys.fn_dblog for educational purposes or in non-production environments, you should be fine. I would avoid using it in production environments!
- You are using SET TRANSACTION ISOLATION LEVEL REPEATABLE READ. This means that you can still get phantom rows (which can lead to unexpected results), so just make sure you are aware of what you are using:
Unlike a read committed scan, a repeatable read scan retains locks on every row it touches until the end of the transaction. Even rows that do not qualify for the query result remain locked. These locks ensure that the rows touched by the query cannot be updated or deleted by a concurrent session until the current transaction completes (whether it is committed or rolled back). These locks do not protect rows that have not yet been scanned from updates or deletes and do not prevent the insertion of new rows amid the rows that are already locked.
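The lock retention described above can be observed directly. A sketch (mine, not part of the original answer; it reuses a table name from the question, but any readable table works):

```sql
-- Sketch: under REPEATABLE READ, the shared (S) locks taken by a read
-- are held until the transaction ends and are visible in sys.dm_tran_locks.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;

SELECT COUNT(*) FROM DEStock.DBO.ItemStock;

-- The locks taken by the scan above are still held at this point
-- (possibly escalated to page or table level):
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID
GROUP BY resource_type, request_mode;

ROLLBACK TRANSACTION;
```

Running the same sequence under READ COMMITTED shows the difference: the scan's shared locks are released as it proceeds and do not linger until the transaction ends.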
Context
StackExchange Database Administrators Q#116295, answer score: 3