HiveBrain v1.2.0
Pattern: SQL · Minor

Should I lock the table during delete

Submitted by: @import:stackexchange-dba

Problem

I have a table with 4 million records. It has a clustered index on a date column (the record creation date). Five other tables reference it via foreign keys, and all of the FK columns are indexed.

The machine has no downtime window. I have a program that cleans up records older than 31 days: it opens a connection, deletes the TOP 1000 matching rows, closes the connection, and repeats until all old records are removed.

The delete has been very slow, removing about 1,000 rows every 10 seconds. Ideally I want to delete 1,000 rows per second.

I notice that during the delete, SQL Server takes a lot of page locks on the index.
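For reference, the locks a delete batch holds can be inspected from a second session with a query along these lines (the session id 52 is a placeholder for the deleting connection's spid):

    -- Count the locks held by the deleting session, grouped by type
    -- (PAGE, KEY, OBJECT, ...). Replace 52 with the real session_id.
    SELECT resource_type, request_mode, COUNT(*) AS lock_count
    FROM sys.dm_tran_locks
    WHERE request_session_id = 52
    GROUP BY resource_type, request_mode
    ORDER BY lock_count DESC;

A large count of PAGE or KEY locks here is what lock escalation to a table lock would collapse into a single OBJECT lock.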

I am wondering if there is a faster way to delete the data without causing timeouts.

My idea: would it be better to take a table lock, perform the delete, wait a second so that other transactions don't time out, then perform the delete again?
My guess is that a table lock would reduce the number of row and page locks, which might speed up the delete.
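Sketched in T-SQL, the proposed approach would look something like this (the table and column names are placeholders, not from the actual schema):

    -- WITH (TABLOCK) requests a table-level lock up front instead of
    -- accumulating row/page locks; WAITFOR leaves a one-second gap so
    -- other transactions are not starved between batches.
    DELETE TOP (1000) FROM dbo.VehicleLocation WITH (TABLOCK)
    WHERE RecordCreated < DATEADD(DAY, -31, GETDATE());

    WAITFOR DELAY '00:00:01'; -- pause, then run the next batch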

Any suggestions on this issue would help.

Please note that the hard drive and database are not fragmented, and the server uses RAID 10.

[Update]
Thanks for asking for the execution plan. It turns out the live environment differs from my development environment: it is doing an index scan rather than an index seek. I need to investigate why it would do an index scan.

[Update 2] Here are the indexes we have on some of those tables. Our index naming convention is [TableName]_[ColumnName]; sorry, we didn't use the MSSQL naming standard. It also turns out the client has a 96%-fragmented index (VehicleLocationTP_VehicleLocationKey), which is definitely one of the problems, and may be why SQL Server 2005 uses an index scan rather than an index seek.
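Fragmentation like this can be measured and fixed along these lines (the table name dbo.VehicleLocationTP is inferred from the [TableName]_[ColumnName] naming convention above, so treat it as an assumption):

    -- Measure fragmentation for every index on the table.
    SELECT i.name, s.avg_fragmentation_in_percent, s.page_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID(N'dbo.VehicleLocationTP'),
             NULL, NULL, 'LIMITED') AS s
    JOIN sys.indexes AS i
      ON i.object_id = s.object_id AND i.index_id = s.index_id;

    -- Rebuild the badly fragmented index.
    ALTER INDEX VehicleLocationTP_VehicleLocationKey
        ON dbo.VehicleLocationTP REBUILD;

Note that an offline rebuild blocks access to the table while it runs; the ONLINE = ON option avoids that but requires Enterprise Edition, which matters here since the client runs SQL Server 2005 Standard.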

[Update 3] I was finally able to test the delete query on their testing server instead of my own computer. They run SQL Server 2005 Standard, versus SQL Server 2008 R2 Express on my machine. The indexes were about 95% fragmented, and rebuilding the i

Solution

Changing to table locks will just make the deletes run even slower: the delete cannot start until the table lock is granted, which means all other threads must finish or be blocked first. If you have foreign keys with ON DELETE CASCADE enabled, the cascading deletes probably account for a lot of the time.

You might want to change it to a SQL Agent job: instead of running your app, which connects and disconnects for every batch, you run a single loop that deletes data until it is done.

SELECT NULL; -- Seeds @@ROWCOUNT with 1 so the WHILE loop starts.
WHILE @@ROWCOUNT <> 0
BEGIN
    DELETE TOP (1000) FROM YourTable
    WHERE Column < GETDATE() - 31; -- Column = the clustered date column
END


If this doesn't work, you could look into table partitioning, which would let you switch the old data to another table almost instantly and then truncate that table. This does, however, require Enterprise Edition.
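The switch-and-truncate idea looks roughly like this (a sketch only: it assumes YourTable is already partitioned on the date column so that expired rows occupy their own partition, and that a staging table with an identical schema exists on the same filegroup):

    -- SWITCH is a metadata-only operation, so it completes almost
    -- instantly regardless of how many rows the partition holds.
    ALTER TABLE dbo.YourTable
        SWITCH PARTITION 1 TO dbo.YourTable_Staging;

    -- TRUNCATE deallocates whole pages and is minimally logged,
    -- unlike a row-by-row DELETE.
    TRUNCATE TABLE dbo.YourTable_Staging;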


Context

StackExchange Database Administrators Q#8973, answer score: 5
