Sunday, March 18, 2007

Reducing SQL Server Deadlocks

Deadlocking occurs when two user processes have locks on separate objects and each process is trying to acquire a lock on the object that the other process holds. When this happens, SQL Server ends the deadlock by automatically choosing one of the processes and aborting it, allowing the other process to continue. The aborted transaction is rolled back, and an error message is sent to the user of the aborted process. Generally, the transaction that requires the least amount of overhead to roll back is the one that is aborted.

As you might imagine, deadlocks can consume SQL Server's resources, especially CPU power, and waste them unnecessarily.

Most well-designed applications, after receiving a deadlock error, will resubmit the aborted transaction, which most likely can now run successfully. Even so, if this happens often on your server it can drag down performance. And if the application has not been written to trap deadlock errors and automatically resubmit the aborted transaction, users may well be confused about what is happening when deadlock error messages appear on their screens.

Here are some tips on how to avoid deadlocking on your SQL Server:

* Ensure the database design is properly normalized.
* Have the application access server objects in the same order each time (see the sketch after this list).
* During transactions, don't allow any user input. Collect it before the transaction begins.
* Avoid cursors.
* Keep transactions as short as possible. One way to help accomplish this is to reduce the number of round trips between your application and SQL Server by using stored procedures or keeping transactions within a single batch. Another way of reducing the time a transaction takes to complete is to make sure you are not performing the same reads over and over again. If you do need to read the same data more than once, cache it by storing it in a variable or an array, and then re-read it from there.
* Reduce lock time. Try to develop your application so that it grabs locks at the latest possible time, and then releases them at the very earliest time.
* If appropriate, reduce lock escalation by using the ROWLOCK or PAGLOCK hints.
* Consider using the NOLOCK hint to prevent locking if the data being locked is not modified often.
* If appropriate, use as low of an isolation level as possible for the user connection running the transaction.
* Consider using bound connections.
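
To make a few of these tips concrete, here is a minimal sketch of a short transaction that collects its input before the transaction begins, touches its tables in a fixed order, and fits in a single batch. The Orders and OrderDetails tables and their columns are hypothetical, used only for illustration:

-- A minimal sketch using hypothetical Orders and OrderDetails tables.
-- Input is gathered into variables BEFORE the transaction begins, the tables
-- are always touched in the same order (Orders first, then OrderDetails),
-- and the whole transaction fits in one short batch.
DECLARE @OrderID int, @ProductID int, @Quantity int
SET @ProductID = 1001   -- collected from the user ahead of time
SET @Quantity  = 2

BEGIN TRANSACTION

INSERT INTO Orders (OrderDate) VALUES (GETDATE())
SET @OrderID = SCOPE_IDENTITY()

INSERT INTO OrderDetails (OrderID, ProductID, Quantity)
VALUES (@OrderID, @ProductID, @Quantity)

COMMIT TRANSACTION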

*****

When a deadlock occurs, SQL Server by default chooses a deadlock "victim" by identifying which of the two processes will use the least amount of resources to roll back, and then returns error message 1205 to that process.

But what if you don't like the default behavior? Can you change it? Yes, you can, by using the following command:

SET DEADLOCK_PRIORITY { LOW | NORMAL | @deadlock_var }

Where:

LOW tells SQL Server that the current session should be the preferred deadlock victim, not the session that incurs the least amount of rollback resources. The standard deadlock error message 1205 is returned.

NORMAL tells SQL Server to use the default deadlock-handling method.

@deadlock_var is a character variable specifying which deadlock method you want to use. Specify "3" for low, or "6" for normal.

This command is set at runtime for a specific user connection.
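
For example, a long-running reporting session that can safely be resubmitted might volunteer itself as the deadlock victim. A minimal sketch (the reporting work itself is only a placeholder):

-- Make this session the preferred deadlock victim.
SET DEADLOCK_PRIORITY LOW

-- ... run the low-priority reporting work here ...

-- Restore the default deadlock behavior.
SET DEADLOCK_PRIORITY NORMAL

-- The same setting can be made through a character variable:
DECLARE @deadlock_var char(1)
SET @deadlock_var = '3'   -- '3' = low, '6' = normal
SET DEADLOCK_PRIORITY @deadlock_var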

*****

To help identify deadlock problems, use the SQL Server Profiler's Create Trace Wizard to run the "Identify The Cause of a Deadlock" trace. This will provide you with the raw data you need to help isolate the causes of deadlocks in your databases. [7.0]

*****

When you are having deadlock-related issues, consider turning on SQL Server's deadlock tracing to track them. The overhead of doing this is minimal.

DBCC TRACEON (3605,1204,-1)

Once this is run, all deadlock activity will be written to the SQL Server error log. [7.0, 2000]
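
If you want to confirm that the trace flags are active for all connections, you can check them with DBCC TRACESTATUS:

-- Report the status of all trace flags enabled for all connections.
DBCC TRACESTATUS (-1)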

*****

To help identify which tables or stored procedures are causing deadlock problems, turn on trace flag 1204 (outputs basic trace data) or trace flag 1205 (outputs more detailed trace data). Be sure to turn off this trace flag when you are done, as this trace can eat up SQL Server's resources unnecessarily, hurting performance. [6.5, 7.0, 2000]
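
For example, to capture the more detailed output for all connections while you investigate, and then turn the flags back off when you are finished (a minimal sketch; 3605 routes the output to the error log, as in the previous tip):

-- Turn on detailed deadlock tracing for all connections.
DBCC TRACEON (3605, 1205, -1)

-- ... reproduce the deadlock and review the SQL Server error log ...

-- Turn the trace flags back off when you are done.
DBCC TRACEOFF (3605, 1205, -1)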

*****

Ideally, deadlocks should be eliminated from your applications. But if you are unable to eliminate all deadlocks in your application, be sure to include program logic in your application to deal with killed transactions in a user-friendly way.

For example, let's say that two transactions are deadlocked and that SQL Server kills one of them. In this case, SQL Server will raise an error message that your application needs to respond to. In most cases, you will want your application to wait a random amount of time after the deadlock before resubmitting the killed transaction to SQL Server.

The waiting period should be random because the other contending transaction may also be waiting to retry; if both wait the same amount of time, they will resubmit at the same moment and may simply deadlock again.
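
A minimal sketch of that retry logic in T-SQL might look like the following. It assumes SQL Server 2005 or later for TRY/CATCH; on earlier versions the same loop would live in the client application instead:

DECLARE @attempt int
SET @attempt = 1

WHILE @attempt <= 3
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION

        -- ... the statements that occasionally deadlock go here ...

        COMMIT TRANSACTION
        BREAK   -- success, leave the retry loop
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION

        -- Only retry deadlock victims (error 1205), and only a limited number of times.
        IF ERROR_NUMBER() <> 1205 OR @attempt = 3
        BEGIN
            RAISERROR ('Transaction failed and will not be retried.', 16, 1)
            BREAK
        END

        -- Wait briefly before retrying; in real code this delay should be
        -- randomized so both contending transactions do not retry in lock step.
        WAITFOR DELAY '00:00:00.500'
        SET @attempt = @attempt + 1
    END CATCH
END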

If your error-trapping code is good, the end user should never know that the deadlock even occurred. [6.5, 7.0, 2000]

*****

One way to help prevent deadlocks is to use the UPDLOCK optimizer hint. This hint forces SQL Server to use an update lock instead of a shared lock when reading data.

A shared lock occurs when your query wants to read a row, but does not want to change it. Shared locks can co-exist with other shared locks, which means that multiple shared locks can exist on the same row, page, or table. A shared lock prevents the locked resource from receiving an exclusive lock, so if another user runs a query that needs to UPDATE a row, they will not be able to until all of the shared locks on the row are gone. Once a shared lock is no longer needed by the query that is reading it, it is immediately released.

An update lock is a hybrid between a shared and an exclusive lock. Update locks are used when a query needs to update one or more records, but until the WHERE clause has been evaluated, the query doesn't know which records it needs to update. So instead of putting an exclusive lock on every record it examines (which reduces concurrency and your application's performance), it places an update lock, which behaves like a shared lock while the row is being read, on the candidate records; only once the WHERE clause has been evaluated are exclusive locks placed on the records that actually need to be updated. These locks are held until the WHERE clause has finished running, not released immediately after each row is read, as is normal with a shared lock.

So how does using the UPDLOCK hint help reduce deadlocks? If you force the use of an update lock instead of a shared lock when reading tables, the locks are held until the end of your statement or transaction rather than being released as each row is read. This allows you to read data without blocking other readers, and to update it later with the knowledge that it has not changed since you last read it. This helps to prevent deadlocks because other users can't sneak in and lock a record that you will need before you need it. [6.5, 7.0, 2000] Updated 9-1-2005
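
A minimal sketch of a read-then-update transaction using the hint (the Inventory table and its columns are hypothetical):

DECLARE @ProductID int, @QtyOnHand int
SET @ProductID = 42

BEGIN TRANSACTION

-- Read the row with an update lock instead of a shared lock, so no other
-- session can take an update or exclusive lock on it before we finish.
SELECT @QtyOnHand = QtyOnHand
FROM Inventory WITH (UPDLOCK)
WHERE ProductID = @ProductID

-- ... business logic that decides the new value, based on @QtyOnHand ...

UPDATE Inventory
SET QtyOnHand = @QtyOnHand - 1
WHERE ProductID = @ProductID

COMMIT TRANSACTION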
