How does the log reader process bulk updates?
Problem
Key Problem
The latency we are experiencing isn't in applying the changes from the Distributor to the Subscribers (if we watch Replication Monitor, its latency is usually …).
Current update process
The majority of applications that update this table do so by looping through their collection of objects, applying the change to a single object, and then committing that change to the database before moving on to the next object.
From a DB trace perspective, this means we get up to 120 update statements when the change occurs.
Example
update [myTable] set Suspended = @Suspended
where myID = @pID

Proposed update process
As the object collections are actually based around the group IDs, one potential improvement is to do a bulk update (rather than individual updates) and then refresh the object collection, reducing the number of update statements to 1 or 2 (depending on the business scenario).
Example
update [myTable] set Suspended = @Suspended
where myGroupID1 = @groupID1
and myGroupID2 = @groupID2

Impact on LogReader?
From an app-processing perspective, doing a single update makes sense to me (fewer round trips between the app and the database = quicker). However, I'm not sure how the LogReader will treat the two scenarios, since it needs to process each record updated by the transaction.
Will the LogReader process those records faster or slower on a bulk update?
Solution
In the transaction log, those updates are written as separate log records, one per modified row, even when they come from a single bulk statement. The log reader then batches those records up and runs them as a single transaction against the database.
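One way to observe the per-row logging described above is a rough sketch like the following. It assumes the question's example table and columns, and it relies on sys.fn_dblog, an undocumented diagnostic function that should only be used for exploration on a test database:

```sql
-- Sketch only: [myTable], Suspended, and myGroupID1 follow the
-- question's example; sys.fn_dblog is undocumented and for
-- diagnostic use on a test system, not production code.
BEGIN TRANSACTION;

UPDATE [myTable]
SET Suspended = 1
WHERE myGroupID1 = 42;   -- one statement touching many rows

-- Even though it was a single statement, the active log contains
-- one LOP_MODIFY_ROW record per updated row, and the LogReader
-- must harvest each of those records.
SELECT COUNT(*) AS row_modification_log_records
FROM sys.fn_dblog(NULL, NULL)
WHERE Operation = N'LOP_MODIFY_ROW';

ROLLBACK TRANSACTION;
```

So a bulk update does not reduce the number of log records the LogReader must read; it mainly reduces statement overhead and round trips on the application side.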
If your database uses stored procedures, you can replicate the stored procedures, which would then allow the stored procedure call to be replicated instead of the actual data updates.
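Stored procedure execution replication is configured when the procedure is added to the publication as an article. A hedged sketch follows; the publication and procedure names are hypothetical, and the exact parameters depend on your topology:

```sql
-- Hypothetical names throughout: a sketch of publishing a stored
-- procedure's *execution* rather than the row changes it makes.
-- With @type = N'proc exec', the single EXEC call is forwarded to
-- Subscribers instead of, say, 120 individual row updates.
EXEC sp_addarticle
    @publication   = N'MyPublication',    -- hypothetical publication
    @article       = N'usp_SuspendGroup', -- hypothetical procedure
    @source_object = N'usp_SuspendGroup',
    @type          = N'proc exec';        -- replicate the execution
```

Note that @type = N'serializable proc exec' is the safer variant when the procedure must run inside a serializable transaction at the Subscriber to guarantee the same results.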
Context
StackExchange Database Administrators Q#3386, answer score: 3