Every statement in SQL Server runs, by default, as a read-committed transaction unless you have specified a different transaction isolation level. So if you have to delete, for example, hundreds of thousands of records from a table, the statement might hold locks on that table for a long time. The way to handle this is to break the long-running query into smaller units of work. Consider the following script.
CREATE TABLE #temp ( Id INT )
GO
-- this will insert dummy records for test
INSERT #temp ( Id ) VALUES ( 1 )
GO 10000
Essentially, the above script inserts 10,000 records into the #temp table. Let's say you need to delete all of those rows. At first glance it seems trivial to write just one statement:

DELETE FROM #temp

However, this might be a long-running query depending on your server resources. A much better approach in situations like these is to use the following method, which can be applied to any DML statement. If you are using SQL Server 2000, you could use the following script.
-- if you are using SQL Server 2000
WHILE EXISTS ( SELECT * FROM #temp )
BEGIN
    SET ROWCOUNT 4
    DELETE FROM #temp
    SET ROWCOUNT 0
END
If you are using SQL Server 2005 or later, you can use the following, more elegant methods, which rely on the TOP keyword in DML statements (not allowed in SQL Server 2000).
-- if you are using SQL Server 2005+
-- Method 1
WHILE EXISTS ( SELECT * FROM #temp )
BEGIN
    DELETE TOP ( 4 ) FROM #temp
END

-- Method 2
DECLARE @limit INT
SET @limit = 4
again:
DELETE TOP ( @limit ) FROM #temp
IF @@ROWCOUNT != 0 GOTO again
So, as you can see, the methods above partition the DML statement into chunks, so each batch commits quickly and locks are released between batches, minimizing blocking for other users. The batch size of 4 here is only for demonstration; in a real workload you would typically use a much larger batch size (a few thousand rows, say) so the loop does not require too many iterations.
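For illustration, here is a sketch of the same chunked pattern applied to a hypothetical production table. The table name (Orders), the archival filter, and the batch size are assumptions for the example; the pause between batches is an optional variation that gives other sessions a chance to acquire locks.

-- a sketch of the chunked-delete pattern; table name, filter,
-- and batch size are assumed for illustration
DECLARE @batchSize INT
SET @batchSize = 5000

WHILE 1 = 1
BEGIN
    DELETE TOP ( @batchSize ) FROM Orders
    WHERE OrderDate < '2000-01-01'   -- assumed archival condition

    IF @@ROWCOUNT = 0 BREAK          -- nothing left to delete

    WAITFOR DELAY '00:00:01'         -- brief pause between batches
END

Because each DELETE commits as its own implicit transaction, locks are held only for the duration of a single batch rather than for the whole purge.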