Wednesday, March 21, 2007

Error Handling in SQL Server - I


This is one of two articles about error handling in SQL Server. This article
focuses on how SQL Server – and to some extent ADO – behave when an error
occurs. The other article, Implementing Error Handling with Stored Procedures, gives advice for how you should
check for errors when you write stored procedures. Logically, this article is
part one, and Implementing... is part two. However, you can read the articles in any order, and if you are relatively new to SQL Server, I recommend that you start with Implementing....
The article here gives a deeper background and may answer more advanced users'
questions about error handling in SQL Server.




Note: this article was written for SQL 2000 and most of the material
applies to SQL7 and SQL 6.5 as well. On the other hand, the article is less
applicable to SQL 2005, as this version offers radically improved methods for
error handling which, apart from a short section, I do not cover here. I hope to cover SQL 2005 in a future article.



Table of Contents:



Introduction


The Basics

The Anatomy of an Error Message

How to Detect an Error in T-SQL – @@error

Return Values from Stored Procedures

@@rowcount

@@trancount

More on Severity Levels


What Happens when an Error Occurs?

The Possible Actions

When Does SQL Server Take which Action?

Connection-termination

Scope-abortion

Statement-termination and Batch-abortion

Trigger Context


Errors in User-Defined Functions

Control Over Error Handling

SET XACT_ABORT

ARITHABORT, ARITHIGNORE and ANSI_WARNINGS

RAISERROR WITH NOWAIT

Duplicates

Using Linked Servers


Retrieving the Text of an Error Message


TRY-CATCH in SQL 2005

Client-side Error Handling

DB-Library

ODBC

ADO


ADO .Net

Acknowledgements and Feedback

Revision History


Introduction


In many aspects SQL Server is a very good DBMS that permits you to implement
powerful solutions with good performance. However, when it comes to error
handling... To be blunt: error handling in SQL Server is poor. It is a patchwork of
not-always-so-consistent behaviour.
It's also weak in that you have fairly little control over error handling,
and for advanced error handling like suppressing or logging errors,
you must get help from the client side. Unfortunately, depending on which client library you use, you may find that
the client library has its own quirks, sometimes painting you into a corner where
there is no really good solution.



In this article, I will first look at what parts an error message consists
of, and how you can detect that an error has occurred in T-SQL code. Next, I
describe the possible actions SQL Server can take in case of an error. I
then proceed to describe the few possibilities you have to control SQL
Server's error handling. Finally, there is a section on how the different
client libraries from Microsoft behave, with most of the focus on ADO and ADO
.Net.




General disclaimer: whereas some information in this text is drawn from
Books Online and other documentation from Microsoft, a lot of what I say is
based on observations that I have made from working with SQL Server, and far
from all of it is documented in Books Online. Therefore, you should be wary of relying on a specific behaviour like "this
error has this-and-that effect", as it could be different in another version of
SQL Server, or even differ between service packs.


The Basics


The Anatomy of an Error Message




Here is a typical error message you can get from SQL Server when working from
Query Analyzer.


Server: Msg 547, Level 16, State 1, Procedure error_demo_sp, Line 2
UPDATE statement conflicted with COLUMN FOREIGN KEY constraint 'fk7_acc_cur'.
The conflict occurred in database 'bos_sommar', table 'currencies', column 'curcode'.
The statement has been terminated.

Note: Under Tools->Options->Connections, I have checked
Parse ODBC Message Prefixes.

The error information that SQL Server passes to the client consists of several components, and the client is
responsible for the final interpretation of the message. These are the
components that SQL Server passes to the client.


Message number – each error message has a number. You can find most of the
message numbers in the table sysmessages in the master database. (There are some
special numbers like 0 and 50000 that do not appear there.) In this example,
the message number is 547. Since most interesting messages are errors, I will
also use the term error number. Message numbers from 50001 and up are
user-defined. Lower numbers are system-defined.



Severity level – a number from 0 to 25. The short story is that if the
severity level is in the range 0-10, the message is informational or a
warning, and not an error. Errors resulting from programming errors in your
SQL code have a severity level in the range 11-16. Severity levels 17-25
indicate resource problems, hardware problems or internal problems in SQL
Server, and if the severity is 20 or higher, the connection is terminated.
For the long story, see the section More on Severity Levels
for some interesting tidbits. For system messages you can find
the severity level in master..sysmessages, but for some messages SQL
Server employs a different severity level than what's in sysmessages.

State – a value between 0 and 127. The meaning of this item is specific to the error message, but
Microsoft has not documented these values, so this value is rarely of
interest to you.


Procedure – in which stored procedure, trigger or user-defined
function the error occurred. Blank if the error occurred in a plain batch of
SQL statements (including dynamic SQL).



Line – the line number within the procedure/function/trigger/batch where the error
occurred. A line number of 0 indicates that the problem occurred when the
procedure was invoked.



Message text – the actual text of the message that tells you what went
wrong. You can find this text in master..sysmessages, or rather a template for
it, with placeholders for names of databases, tables etc.



As I mentioned, the client is responsible for the formatting of the error message,
and for messages with a severity level of 10 or lower, most client programs print
only the message text, not the severity level, procedure etc. In fact, we see an example of this above: the text The
statement has been terminated
is a message of its own, message 3621.




When you write your own client program, you can choose your own way to display error messages. You may be somewhat constrained by what your client library
supplies to you. The full information is available with low-level interfaces such as
DB-Library, ODBC or the OLE DB provider for SQL Server. On the other hand, in
ADO you only have access to the error number and the text of the message.



There are two ways an error message can appear: 1) an SQL statement can result
in an error (or a warning), or 2) you emit it yourself with RAISERROR (or PRINT).
Let's take a brief look at RAISERROR here. Here is a sample statement:



RAISERROR('This is a test', 16, 1)


Here you supply the message text, the severity level and the state. The output
is:


Server: Msg 50000, Level 16, State 1, Line 1
This is a test


Thus, SQL Server supplies the message number 50000, which is the error number
you get when you supply a text string to RAISERROR. (There is no procedure name
here, since I ran the statement directly from Query Analyzer.) Rather than a string, you could have
supplied a number of 50001 or greater, and SQL Server would have looked up that
number in sysmessages to find the message text. You would have stored that
message with the system procedure sp_addmessage. (If you just supply a
random number, you will get an error message saying that the message is
missing.) Whichever method you use, the message can include placeholders, and
you can provide values for these placeholders as parameters to RAISERROR,
something I do not cover in depth here. Please refer to Books Online for details.
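
To make this a little more concrete, here is a small sketch of the second method. The message number 50001 and the message text are my own examples for the illustration, not anything that ships with SQL Server:

EXEC sp_addmessage @msgnum = 50001, @severity = 16,
     @msgtext = 'Order %d does not exist in database %s.'
RAISERROR(50001, 16, 1, 4711, 'Northwind')

The output is the stored message text with the placeholders replaced by 4711 and Northwind.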



As I mentioned, State is rarely of interest. With RAISERROR, you can use
it as you wish. If you raise the same message in several places, you can provide
different values for State so that you can conclude which RAISERROR
statement fired. The command-line tools OSQL and ISQL have a special handling of state: if you use a state of
127, the two tools abort and set the DOS variable ERRORLEVEL to the
message number. This can
be handy in installation scripts if you want to abort the script when you detect
some serious condition. (For instance, that the database is not on the level that the
installation script is written for.) This behaviour is entirely
client-dependent; for instance, Query Analyzer does
not react to state 127.
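
As a hedged sketch of what this can look like in an installation script run from OSQL, where the table versions and the version string are invented for the example:

IF NOT EXISTS (SELECT * FROM versions WHERE version = '1.23')
   -- State 127 makes OSQL and ISQL stop reading the script.
   RAISERROR('The database is not on the version this script expects.', 16, 127)

When OSQL sees state 127, it aborts, so the statements that follow in the script are never submitted.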



How to Detect an Error in T-SQL – @@error



After each statement in T-SQL, with one single exception that I cover in the
next section, SQL Server sets the global variable @@error to 0,
unless an error occurs, in which case @@error is set to the number of that
error. (Note: these days, the SQL Server documentation refers to @@error
as a "function". Being an old-timer, I prefer "global variables" for the
entities whose names start with @@.)




More precisely, if SQL Server emits a message with a severity of 11 or higher, @@error will hold the number of that message. And if SQL
Server emits a message with a severity level of 10 or lower, SQL Server does not
set @@error, and thus you cannot tell from T-SQL that the message was produced.



But the message number is also the only part of the error message that you
can easily access from T-SQL. A common question on the newsgroups
is how to retrieve the text of an error message, and for a long time the answer
was "you can't". But recently Mark Williams pointed out to me a way to do it.
As it requires that the user has sysadmin privileges, however, you cannot
easily put it in an application. I will return to this topic in the section
Retrieving the Text of an Error Message.

There is no way to prevent SQL Server from raising error messages.
There is a small set of conditions for which you can use SET commands to control
whether these conditions are errors or not. We will look closer at these possibilities
later, but I repeat that this is a small set, and there is no general way in
T-SQL to suppress error messages. You will need to take care of that in your
client code. (Another common question on the newsgroups.)



As I mentioned, @@error is set after each statement. Therefore,
you should always save the value of @@error into a local variable
before you do anything with it. Here is an example of what happens if you don't:


CREATE TABLE notnull(a int NOT NULL)
DECLARE @value int
INSERT notnull VALUES (@value)
IF @@error <> 0
   PRINT '@@error is ' + ltrim(str(@@error)) + '.'



The output is:


Server: Msg 515, Level 16, State 2, Line 3
Cannot insert the value NULL into column 'a', table
'tempdb.dbo.notnull'; column does not allow nulls. INSERT fails.
The statement has been terminated.
@@error is 0.

Here is the correct way.


CREATE TABLE notnull(a int NOT NULL)
DECLARE @err int,
        @value int
INSERT notnull VALUES (@value)
SELECT @err = @@error
IF @err <> 0
   PRINT '@err is ' + ltrim(str(@err)) + '.'

The output is:
Server: Msg 515, Level 16, State 2, Line 3
Cannot insert the value NULL into column 'a', table
'tempdb.dbo.notnull'; column does not allow nulls. INSERT fails.
The statement has been terminated.
@err is 515.

Return Values from Stored Procedures

All stored procedures have a return value, determined by the RETURN statement.

The RETURN statement takes one optional argument, which should be a numeric
value. If you say RETURN without providing a
value, the return value is 0 if there is no error during execution. If an
error occurs during execution of the procedure, the return value may be 0, or it may be a negative number. The
same is true if there is no RETURN statement at all in the procedure: the return
value may be a negative number or it may be 0.


Whether these negative numbers have any meaning is a bit difficult to tell. It used to be the
case that the return values -1 to -99 were reserved for system-generated
return values, and Books Online for earlier
versions of SQL Server specified meanings for the values -1 to -14. However,
Books Online for SQL 2000 is silent on any such reservations, and does
not explain what -1 to -14 would mean.

With the occasional exception, the system stored procedures that Microsoft
ships with SQL Server return 0 to indicate success, and any non-zero value
indicates failure.

While there is no law that requires you to follow the same convention for your
stored procedures, my strong recommendation is that you use return values
solely to indicate success or failure. If you want to return data such as the id for an
inserted row, the number of affected rows or whatever, use an OUTPUT parameter
instead. It follows from the fact that a blank RETURN may return 0, even if
there has been an error during execution, that you should be careful to return
an explicit value yourself if an error occurs in the procedure.
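
Here is a minimal sketch of this convention. The table customers and the procedure name are invented for the example, and I assume customers has an IDENTITY column; the error check follows the pattern from the previous section:

CREATE PROCEDURE insert_customer @name nvarchar(40), @custid int OUTPUT AS
   DECLARE @err int
   INSERT customers (name) VALUES (@name)
   -- Save @@error at once, and pick up the new id in the OUTPUT parameter.
   SELECT @err = @@error, @custid = scope_identity()
   IF @err <> 0 RETURN @err
   RETURN 0

The id of the new row travels in the OUTPUT parameter, and the return value is reserved for indicating success or failure.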


There is one situation where a
stored procedure does not return any value at all, leaving the variable
receiving the return value unaffected. This is when the procedure is aborted because of a
scope-aborting error. We will look more into this later.
There is also one situation when the return value is NULL: this happens with remote procedures, and occurs when the batch is aborted on
the remote server. (Batch-abortion is also
something we will look into more later on.)


There is one curious exception to the rule that @@error is set after each
statement: a RETURN without parameters leaves the value of @@error unchanged.
In my opinion, this is not of much practical use.
(I owe this information to a correspondent who gave me this tip by
e-mail. Alas, I lost his mail due to problems at my ISP, so I cannot credit him
by name.)


@@rowcount



@@rowcount is a global variable that reports the number of affected rows
in the most recently executed statement. Just like @@error, you need to save it
in a local variable if you want to use the value later, since @@rowcount is set
after each statement. Since SET can only assign one variable
at a time, you must use SELECT if you need to save both @@error and @@rowcount
into local variables:



SELECT @err = @@error, @rowc = @@rowcount

(For this reason, I prefer to always use SELECT for variable assignment, despite
Microsoft's recommendations to use SET.)


In T-SQL it is not an
error if, for instance, an UPDATE statement did not affect any rows. But it
can of course indicate an error in your application, just as it could be an error
if a SELECT returns more than one row. For these situations, you can check
@@rowcount, and raise an error and set a return value if @@rowcount is not
the expected value.
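
A sketch of such a check might look like this; the table customers and the message text are invented for the example:

CREATE PROCEDURE rename_customer @custid int, @name nvarchar(40) AS
   DECLARE @err int, @rowc int
   UPDATE customers
   SET    name = @name
   WHERE  custid = @custid
   -- Save both values in one statement, as discussed above.
   SELECT @err = @@error, @rowc = @@rowcount
   IF @err <> 0 RETURN @err
   IF @rowc = 0
   BEGIN
      RAISERROR('No customer with id %d.', 16, 1, @custid)
      RETURN 1
   END
   RETURN 0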


@@trancount



@@trancount is a global variable which reflects the level of nested
transactions. Each BEGIN TRANSACTION increases @@trancount by 1, and each
COMMIT TRANSACTION decreases @@trancount by 1. Nothing is actually committed
until @@trancount reaches 0. ROLLBACK TRANSACTION rolls back everything to the
outermost BEGIN TRANSACTION (unless you have used the fairly exotic SAVE
TRANSACTION), and forces @@trancount to 0, regardless of the previous value.


When you exit a stored procedure, if @@trancount does not have the same value
as it had when the procedure commenced execution, SQL Server raises error
266. This error is not raised, though, if the procedure is called from a
trigger, directly or indirectly. Neither is it raised if you are running with
SET IMPLICIT_TRANSACTIONS ON.
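
Here is a small demonstration of how @@trancount moves; you can run it in any database where you are permitted to start a transaction:

BEGIN TRANSACTION
BEGIN TRANSACTION
SELECT @@trancount      -- 2
COMMIT TRANSACTION
SELECT @@trancount      -- 1; nothing is committed yet.
ROLLBACK TRANSACTION
SELECT @@trancount      -- 0; the rollback undoes both levels.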



More on Severity Levels

In this section we will look a little closer at the various severity levels.

0 – Messages with level 0 are purely informational. A PRINT statement produces a message on
severity level 0. These messages do not set @@error. Most query tools print only
the text part of a level 0 message.

1-9 – These levels, too, are for informational messages/warnings. I cannot recall that
I have encountered these from SQL Server, but I've used them myself in RAISERROR at
times. Query Analyzer and SQL Management Studio print the message number, the
level and the state, but not the procedure and line number for these messages.

10 – This level does not really exist. It appears that SQL Server internally converts
level 10 to level 0, both for its own messages and when you use level 10 in
RAISERROR.

11-16 – These levels indicate a regular programming error of some sort. But it is not
the case that level 16 is more serious than level 11. Rather, it appears to be a
somewhat random categorisation. Books Online gives no details on what the levels
might mean, but SQL Server MVP Jacco Schalkwijk pointed out to me that there is a
drop-down box in the dialog for defining alerts in Enterprise Manager and SQL
Management Studio which has descriptions of these levels. Here is what the
drop-down box has to say:

11 – Specified Database Object Not Found

12 – Unused

13 – User Transaction Syntax Error

14 – Insufficient Permission

15 – Syntax Error in SQL Statements

16 – Miscellaneous User Error

My experience is that it may not always be this way, but there certainly are
matches. Deadlock, for instance, is level 13. (So now you know what a User
Transaction Syntax Error is!)

17-25 – Messages with any of these severity levels indicate some sort of resource
problem (for instance running out of disk space), an internal error in SQL
Server, or a problem with the operating system or hardware. The higher the
severity, the more serious the problem. These levels are documented in the
section Troubleshooting->Error Messages->Error Message Formats->Error Message Severity Levels
in Books Online.

19-25 – To use level 19 or higher in RAISERROR you must use the WITH LOG option, and you
must have sysadmin rights.

20-25 – Errors with these severity levels are so fatal that they always terminate the
connection.

What Happens when an Error Occurs?


Many programming languages have a fairly consistent behaviour when there is a
run-time error. Commonly, execution simply terminates in case of an error,
unless you have set up an exception handler that takes care of the error. In
other languages, some error variable is set and you have to check this
variable. T-SQL is confusing, because depending on which error occurs and
in which context it occurs, SQL Server can take no less than four different actions.
I first give an overview of these alternatives, followed by a more detailed discussion of which
errors cause which actions. I then discuss two special cases: trigger
context and user-defined functions.


The Possible Actions


These are the four main possible actions SQL Server can take:

Statement-termination. The current statement is aborted and rolled back. Execution continues on the next
statement. Any open transaction is not rolled back. @@error is set to the
number of the error. Since the statement is rolled back, this means that if you run an UPDATE statement
that affects 1000 rows, and for one row a
CHECK constraint is violated, none of the rows will be updated. But if the UPDATE
statement was part of a longer transaction, the effects of the preceding
INSERT, UPDATE or DELETE statements are not affected. You need to issue a
ROLLBACK TRANSACTION yourself to undo them.


Scope-abortion. The current scope (stored procedure, user-defined
function, or block of loose SQL statements, including dynamic SQL) is aborted,
and execution continues on the next statement in the calling scope. That is,
if stored procedure A calls B and B runs into a scope-aborting error,
execution continues in A, just after the call to B. @@error
is set, but the aborted procedure does not produce a return value; the
variable that was to receive the return value is left unaffected. As for
statement-termination, any outstanding transaction is not affected, not even
if it was started by the aborted procedure.


Batch-abortion. The execution of the entire batch – that is, the block
of SQL statements that the client submitted to SQL Server – is aborted.
Any open transaction is rolled back. @@error is still set, so if you
retrieve @@error first in the next batch, you will see a non-zero value. There
is no way you can intercept batch-abortion in T-SQL code. (Almost. We
will look at a possibility using linked servers later on.)


Connection-termination. The client is disconnected and any open
transaction is rolled back. In this case there is no @@error to access.


One can note from this that there are two things that cannot happen:


  • The transaction is rolled back, but execution of the current batch
    continues.

  • The batch is aborted, but the transaction is not rolled back.

But I would like to stress that this is based on my own observations. I have found
no documentation that actually states that these two cases cannot occur under
any circumstances.


The above covers most of the error situations in SQL Server, but since a hallmark of
the error handling in SQL Server is inconsistency, every now and then I
discover some new odd situation. I am overlooking these cases here, so as not to
burden the reader with too many nitty-gritty details.

There is, however, one more situation you should be aware of, and that is batch-cancellation.
The client may at any time tell SQL Server to stop executing the batch,
and SQL Server will comply more or less immediately. In this situation SQL Server will not
roll back any open transaction. (In the general case, that is. It seems
that if the T-SQL execution is in a trigger when the cancellation request
comes, then there is a rollback.)
However, if the current statement when
the cancellation request comes in is an UPDATE, INSERT or DELETE statement, then
SQL Server will roll back the updates from that particular statement. Batch-cancellation may occur because of an explicit call to a
cancellation method in the client code, but the most common reason is that a
query timeout in the client library expires. ODBC, OLE DB, ADO and ADO .Net all
have a default timeout of 30 seconds. (Which, judging from the questions on the
newsgroups, many programmers believe to come from SQL Server, but this is
not so.)


When Does SQL Server Take which Action?


As you may guess, which action SQL Server takes depends on the error, but not
only on the error; context also matters. One thing that matters is the setting of the command SET XACT_ABORT,
which we shall
look at in a later section. A special case is
trigger context, in which almost all errors
abort the batch; this is the topic of a later section. Right now we
will discuss the default context, that is, outside triggers and with the
setting XACT_ABORT OFF.


You may
guess that the more severe the error is, the more drastic action SQL
Server takes, but this is only really true for connection-termination. When it
comes to scope-abortion, this occurs for a fairly well-defined family of errors, but I
am not sure that I agree that these errors are less severe than the errors that
abort the batch. And there is not really any clear distinction between
the errors that abort the batch on the one hand, and those that merely
terminate the statement on the other. For this reason, I will first cover
connection-termination, then scope-abortion, and then the other two together.



Connection-termination



When SQL Server terminates the connection, it is because something really bad
happened. The most common
reason is an execution error in the SQL Server process itself,
e.g. an access violation (that is, an attempt to access an illegal memory address), a
stack overflow, or an assertion error (a programmer-added check for a certain
condition that must be true for his code to work). It could also be a
protocol error in the communication between the client library and SQL Server.
These errors are normally
due to bugs in SQL Server or in the client library, but they can also appear due to hardware problems, network
problems, database corruption or severe resource problems.



SQL Server terminates the connection, because it would not be safe to
continue execution, as internal process structures may be damaged. In some
cases, not only is your connection terminated, but SQL Server as such
crashes.


Connection-termination can sometimes be due to errors in your application, in
so far as you may have written some bad SQL that SQL Server could not cope
with. But in such a case it is still an SQL Server bug if the connection
terminates, because you should get a proper error message. (The error messages in
conjunction with connection-termination are often very opaque.)



There is one
case, though, where a bug in application code can cause connection-termination on
its own, and that is if you have written your own extended stored procedures or your own OLE objects that you call through the sp_OAxxxxx
procedures. An unhandled execution error in such code will terminate your
connection – and may crash SQL Server as well.


There is one way to terminate the connection from T-SQL: if you issue a RAISERROR statement with a severity
level >= 20. To do this you must provide WITH LOG, and you must be sysadmin.
Since errors with severities >= 19 may trigger an operator alert, and
eventually may alert someone's pager, don't do this just for fun.



Scope-abortion


This appears to be confined to compilation errors.
At least I have not seen it happen with any other sort of error. Due to the feature known as deferred name resolution (in my opinion
a misfeature), compilation errors can happen during run-time
too. Consider this example (you can run it in the Northwind database):


CREATE PROCEDURE inner_sp @productid int AS

CREATE TABLE #temp (orderid   int      NOT NULL,
                    orderdate datetime NOT NULL)

PRINT 'This prints.'
BEGIN TRANSACTION

INSERT #temp (orderid, orderdate)
   SELECT o.OrderID, o.OrderDate
   FROM   Orders
   WHERE  EXISTS (SELECT *
                  FROM   [Order Details] od
                  WHERE  od.OrderID = o.OrderID
                    AND  od.ProductID = @productid)

COMMIT TRANSACTION
PRINT 'This does not print.'
go
CREATE PROCEDURE outer_sp AS

DECLARE @ret int
SET @ret = 4711
EXEC @ret = inner_sp 76

PRINT '@@error is ' + ltrim(str(@@error)) + '.'
PRINT '@@trancount is ' + ltrim(str(@@trancount)) + '.'
PRINT '@ret is ' + coalesce(ltrim(str(@ret)), 'NULL') + '.'
IF @@trancount > 0 ROLLBACK TRANSACTION
go
EXEC outer_sp
go


Because
the table #temp does not exist when you create inner_sp, SQL Server
defers examination of the entire INSERT-SELECT statement until run-time. Again, when you
invoke inner_sp, SQL Server cannot find #temp and defers building a query
plan for the INSERT-SELECT statement until it actually comes to execute the statement.
It is only at this point that SQL Server discovers that the SELECT statement is incorrect (the alias for
Orders is missing). And at that precise point, the execution of inner_sp
is aborted.
Here is the output:


This prints.
Server: Msg 266, Level 16, State 2, Procedure inner_sp, Line 18
Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION
statement is missing. Previous count = 0, current count = 1.
Server: Msg 107, Level 16, State 1, Procedure inner_sp, Line 9
The column prefix 'o' does not match with a table name or alias name used in the query.
Server: Msg 107, Level 16, State 1, Procedure inner_sp, Line 9

The column prefix 'o' does not match with a table name or alias name used in the query.
Server: Msg 107, Level 16, State 1, Procedure inner_sp, Line 9
The column prefix 'o' does not match with a table name or alias name used in the query.
@@error is 266.
@@trancount is 1.
@ret is 4711.


Note the next-to-last line in the output: inner_sp started a transaction.
But just because inner_sp was aborted does not mean that the transaction
was rolled back. When you implement your error handling, this is something you
need to consider, and I look closer at this in the
accompanying article on error
handling.

Also observe that @ret never was set, but retained the value it had prior to the
call.



Not all compilation errors pass unnoticed when SQL Server loads a procedure. A
pure syntax error like a missing parenthesis will be reported when you try to create
the procedure. But the list of errors not detected because of deferred name
resolution is longer than you might expect. After
all, one would expect SQL Server to be able to detect the missing alias even if #temp is
missing. With some effort, it could even detect the missing alias with the
Orders table missing, couldn't it?


Actually, I can offer a way to avoid this problem altogether. On
http://www.abaris.se/abaperls/
I have made available a toolset as
freeware that includes a load tool, ABASQL. Before creating a procedure,
ABASQL extracts all temp tables in the procedure and creates them, so that SQL Server
will flag errors such as missing aliases or columns. ABASQL also checks the
SQL code for references to non-existing tables.


Statement-termination and Batch-abortion



These two groups comprise regular run-time errors, such as duplicates in unique
indexes, running out of disk space etc. As I have already discussed, which
error causes which action is not always easy to predict beforehand. This
table lists some common errors, and whether they abort the current
statement or the entire batch.

Duplicate primary key – Statement

NOT NULL violation – Statement

Violation of CHECK or FOREIGN KEY constraint – Statement

Most conversion errors, for instance conversion of a non-numeric string to a numeric value – Batch

Attempt to execute a non-existing stored procedure – Statement

Missing or superfluous parameter to a stored procedure that has parameters – Statement

Superfluous parameter to a parameterless stored procedure – Batch

Exceeding the maximum nesting level of stored procedures, triggers and functions – Batch

Being selected as a deadlock victim – Batch

Permission denied to table or stored procedure – Statement

ROLLBACK or COMMIT without any active transaction – Statement

Mismatch in number of columns in INSERT-EXEC – Batch

Declaration of an existing cursor – Statement

Column mismatch between cursor declaration and FETCH statement – Statement

Running out of space for data file or transaction log – Batch
I am only able to make out a semi-consistency. Some real fatal errors, after
which I would not really be interested in continuing execution, do abort the
batch. The examples here are being a deadlock victim and running out of disk space. But why would
it be more
severe to pass a superfluous parameter to a parameterless procedure than to one that has
parameters?
And conversion
errors? Are they more severe than a constraint violation? And why not all conversion errors?
(We will return to conversion errors, as well as
arithmetic errors that I purposely excluded from this table, when we discuss the SET
commands ANSI_WARNINGS and ARITHABORT. They belong to the small
set of errors where you have some sort of a choice.)

And don't look to severity levels for help. As noted above, severity
levels 11-16 are another classification that doesn't
reflect any difference in severity. Most
of the errors above have severity level 16, but being a deadlock victim has severity
level 13.
(Running out of disk space, which is a resource problem, is level 17.)


Trigger Context



You have trigger context when you are in a trigger, or in a stored procedure,
user-defined function or block of dynamic SQL that has been called directly
or indirectly from a trigger. That is, somewhere on the call stack there is
a trigger. If you are in trigger context, all errors terminate the batch and
roll back the transaction on the spot. (Connection-terminating errors still
terminate the connection, of course.)


Well, almost. When it comes to error
handling in SQL Server, no rule is valid without an exception. Errors you
raise yourself with RAISERROR do not abort the batch, not even
in trigger context. Neither
does error 266, Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. This
error is simply not raised at all when the condition occurs in trigger context.
No, this is not a bug; it is documented in Books Online, and according
to Books Online, error 266 is informational only. (Now, taste that
concept: an informational error.)



There is one more way that a trigger can terminate the batch. This happens if
@@trancount is 0 when the trigger exits. A trigger always
executes in the context of a transaction, since even if there is no
multi-statement transaction in progress, each INSERT, UPDATE
and DELETE statement is its own transaction in SQL Server, and the trigger is part of
that transaction. Thus, @@trancount is at least 1 when you enter a trigger,
and if it is 0 on exit this means that somewhere there has been a ROLLBACK
statement. (Or sufficiently many COMMIT statements to bring @@trancount to 0.)
Why would this have to abort the batch? Because the sky is blue.
Seriously, I don't know, but it has always been that way, and there is no way
you can change it.



The normal use for this is that if you have an integrity
check in a trigger you raise a message and roll back the transaction, as in
this example.


IF EXISTS (SELECT *
           FROM   inserted i
           JOIN   abainstallhistory inh ON i.inhid = inh.inhid
           WHERE  inh.ss_label <> i.ss_label
              OR  inh.ss_label IS NULL     AND i.ss_label IS NOT NULL
              OR  inh.ss_label IS NOT NULL AND i.ss_label IS NULL)
BEGIN
   ROLLBACK TRANSACTION
   RAISERROR('Values on ss_label does not match abainstallhistory.', 16, 1)
   RETURN
END



Thus, this trigger aborts the batch, not because of the RAISERROR, but
because of the ROLLBACK TRANSACTION, and since the trigger is permitted
to execute to the end, the RAISERROR statement is executed.



Errors in User-Defined Functions



User-defined functions are usually invoked as part of a SET, SELECT, INSERT,
UPDATE or DELETE statement. What I have found is that if an error appears in a
multi-statement table-valued function or in a scalar function, the execution of
the function is aborted immediately, and so is the statement the
function is part of. Execution continues on the next statement, unless the error
aborted the batch. In either case, @@error is 0. Thus, there is no
way to detect from T-SQL that an error occurred in a function.


The problem does not appear with inline table-functions, since an inline
table-valued function is basically a macro that the query processor pastes into
the query.

You can also execute scalar functions with the EXEC statement. In this case,
execution continues if an error occurs (unless it is a batch-aborting error). @@error is set,
and you can check the value of @@error within the function. It can be
problematic to communicate the error to the caller, though.


Control Over Error Handling



No, SQL Server does not offer much in this area, but we will look at the few
possibilities, of which the most important is SET XACT_ABORT ON.



SET XACT_ABORT



What I have said this far applies to when XACT_ABORT is OFF, which is the
default. When you issue SET XACT_ABORT ON, most of the
statement-terminating errors instead become
batch-aborting errors. Thus, if you
don't want to litter your T-SQL code with checks on @@error, and if you are not
interested in trying to recover from the error or invoke some error-logging
routine in T-SQL, but are content with
execution being aborted on the first error, then XACT_ABORT is for you.



Beware, though, that even when XACT_ABORT is ON, not all errors terminate the
batch. Here are the exceptions I know of:



  • Errors you raise yourself with RAISERROR.

  • Compilation errors (which normally
    terminate the scope) do not terminate the batch.


  • Error 266, Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK
    TRANSACTION statement is missing.


So at a minimum you still need to check @@error after the execution of a
stored procedure or a block of dynamic SQL even if you use XACT_ABORT ON.
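
Here is a small sketch of the difference XACT_ABORT makes; the table is created only for the example. The CHECK violation, which normally only terminates the statement, now aborts the batch and rolls back the transaction:

CREATE TABLE xtest (a int NOT NULL CHECK (a > 0))
go
SET XACT_ABORT ON
BEGIN TRANSACTION
INSERT xtest (a) VALUES (-1)
PRINT 'This does not print; the batch was aborted.'
go
SELECT @@trancount    -- 0, the transaction was rolled back.
SET XACT_ABORT OFF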



ARITHABORT, ARITHIGNORE and ANSI_WARNINGS


These three SET commands give you very fine-grained control for a very small set of
errors. When a division by zero or an overflow occurs, there are no fewer than four
choices.



  • No action at all, result is NULL – when ARITHIGNORE is ON.


  • Warning message, result is NULL – when all are OFF.

  • Statement-termination – when ANSI_WARNINGS is ON.


  • Batch-abortion – when ARITHABORT is ON and ANSI_WARNINGS is OFF.

ARITHABORT and ARITHIGNORE also control domain errors, such as an attempt to take
the square root of a negative number. But this error is not covered by
ANSI_WARNINGS, so here you only have three choices.
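
If you want to see the alternatives for yourself, here is a sketch that runs a division by zero under three of the combinations above; run one batch at a time and watch the output:

SET ARITHABORT OFF
SET ANSI_WARNINGS OFF
SET ARITHIGNORE ON
SELECT 1/0        -- NULL, and no message at all.
go
SET ARITHIGNORE OFF
SELECT 1/0        -- NULL, plus a warning message.
go
SET ANSI_WARNINGS ON
SELECT 1/0        -- Error; the statement is terminated.

For the fourth alternative, turn ANSI_WARNINGS OFF and ARITHABORT ON, and the batch is aborted.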


As for what counts as an overflow, SQL Server has extended the domain of this error
to datetime values in a way which is not really intuitive. Consider these two statements:


select convert(datetime, '2003123')  -- This causes a conversion error
select @@error
go
select convert(datetime, '20031234') -- This causes an overflow
select @@error

Thus, if you have a string which conforms syntactically to some date format,
but where some element is out of range, this particular form of conversion error
only aborts the batch under certain settings – and under other settings it may not
cause an error at all.



ANSI_WARNINGS controls a few more errors and warnings. With ANSI_WARNINGS ON,
it is an error to assign a character or binary column a value that exceeds
the maximum length of the column, and this terminates the statement. When
ANSI_WARNINGS is OFF, this condition is not an error, but the value is
silently truncated. The error is never raised for variable assignment.
Also, with ANSI_WARNINGS ON, if an aggregate function such as SUM() or
MIN() sees a NULL value, you get a warning message. (Thus it does not set
@@error, nor terminate the statement.)
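
Here is a small sketch of the string-truncation case; the table is created just for the example:

CREATE TABLE shortcol (c varchar(5) NOT NULL)
go
SET ANSI_WARNINGS ON
INSERT shortcol (c) VALUES ('Far too long')   -- Error; nothing is inserted.
go
SET ANSI_WARNINGS OFF
INSERT shortcol (c) VALUES ('Far too long')   -- Silently truncated to 'Far t'.
SELECT c FROM shortcol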



When you use ODBC, OLE DB or Query Analyzer (SQL 2000), ANSI_WARNINGS is ON
by default. Since some features in SQL Server (indexed views, indexes on computed columns and
distributed queries) require ANSI_WARNINGS to
be ON, I strongly recommend that you stick to this. Indexed views and indexes
on computed columns also require ARITHABORT to be ON, but I don't think you can rely on it being
ON by default.



Finally, I should mention that there is one more SET command in this area:
NUMERIC_ROUNDABORT. When ON, the batch is aborted if an operation with a
decimal data type results in loss of precision. The option is OFF by default,
and it must be OFF for indexed views and indexes on computed columns to work.


RAISERROR WITH NOWAIT



SQL Server buffers the output, so an error message or a result set may not
appear directly at the client. In many cases, this is not an issue, but if you
are running a long-running procedure, you may want to produce diagnostic
messages. To have them displayed immediately in the client, you can use the
WITH NOWAIT clause to the RAISERROR statement, as in this example:


PRINT 'This message does not display immediately'
WAITFOR DELAY '00:00:05'
RAISERROR ('But this one does', 0, 1) WITH NOWAIT
WAITFOR DELAY '00:00:05'
PRINT 'It''s over now'

Once there is a message with NOWAIT, all that is ahead of the message in the buffer is
also passed to the client.


Unfortunately, there is a bug in SQL Server with NOWAIT which affects you
only if you are calling a procedure through RPC (remote procedure call), so
that in this case, SQL Server buffers the messages nevertheless. RPC is the
normal way to call a procedure from an application (at least it should be),
but if you are running a script from OSQL or Query Analyzer, this bug does
not affect you.



Duplicates


Normally, when you try to insert a value that would be a duplicate in a unique
index, this is an error and the statement is rolled back. However, the syntax
for the CREATE INDEX statement includes the option IGNORE_DUP_KEY. When this
option is in effect,
duplicates are merely discarded. The statement is not rolled back, and if the
INSERT statement encompassed several rows, the rows that do not violate the
uniqueness of the index are inserted.


According to Books Online, SQL Server issues a warning when ignoring a duplicate
row. However, in real life the message has severity level 16,
and thus comes across to the client as an error. Nevertheless, SQL
Server does not set @@error, and as I noted the statement is not rolled back, so this
message falls into none of the four categories I have presented. Microsoft has
acknowledged the incorrect severity level as a bug, so hopefully this will be
fixed in some future version of SQL Server.


This option applies to unique indexes only. It is not available for PRIMARY KEY
or UNIQUE constraints.
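
A small sketch of what this looks like; the table and index are created just for the example:

CREATE TABLE dupdemo (a int NOT NULL)
CREATE UNIQUE INDEX dupdemo_ix ON dupdemo (a) WITH IGNORE_DUP_KEY
go
INSERT dupdemo (a) VALUES (1)
INSERT dupdemo (a)
   SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3
SELECT a FROM dupdemo      -- Returns 1, 2 and 3.

The second INSERT draws the Duplicate key was ignored message, but the rows with 2 and 3 are still inserted.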


Using Linked Servers


There is no way to switch off batch-abortion on a general level. But there is
actually one way to handle the case in T-SQL, and that is through linked
servers. If you call a remote stored procedure, and the procedure runs into a
batch-aborting error, the batch in the calling server is not aborted. On
return to the local server, @@error holds the value of the error that aborted
the batch on the remote server, and the return value of the stored procedure is set to NULL.
(At least my tests indicate this. Thus, it is not the same case as when a
local procedure dies with scope-abortion, where the return value is not set at
all.) It goes
without saying that this is a fairly lame workaround that is only applicable in
special situations.



Some notes:



  • It must be a truly remote server. If you call a procedure in the local
    server with four-part notation, SQL Server is too smart for you.

  • Set up the remote server with SQLOLEDB. When I set up the remote server
    with the OLE DB-over-ODBC provider (MSDASQL), the diagnostics about the
    error were poorer on the calling server.
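
For completeness, here is a hedged sketch of the workaround; SRV2, somedb and some_remote_sp are invented names for a linked server that is already set up with SQLOLEDB:

DECLARE @ret int, @err int
SET @ret = 4711
EXEC @ret = SRV2.somedb.dbo.some_remote_sp
SELECT @err = @@error
PRINT '@@error is ' + ltrim(str(@err)) + '.'
PRINT '@ret is ' + coalesce(ltrim(str(@ret)), 'NULL') + '.'

If some_remote_sp hits a batch-aborting error on SRV2, execution still continues here, with @err holding the error number and @ret being NULL.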


Retrieving the Text of an Error Message


There is no supported way to retrieve the full text of an error message in
SQL 2000. You can get a text from master.dbo.sysmessages, but then
you only get placeholders for interesting things like which constraint
was violated. To get the
full text of the error message in a proper way, you need a client to pick it up and log it.


However, Mark Williams pointed out that you can retrieve the full message text from
within T-SQL with the help of DBCC OUTPUTBUFFER. To wit, after an error has been
raised, the message text is in the output buffer for the process.


The output from DBCC OUTPUTBUFFER is a single column, where each row has a
byte number, a list of hex values, and a textual representation of the hex
values. Mark made the effort to extract the message from the last part, and was kind enough to send me a stored procedure
he had written. As I looked at the output from DBCC OUTPUTBUFFER, I found a
byte that appeared to hold the length of the message, which helped me to
improve Mark's procedure.



There is a very significant restriction with this trick: to run DBCC OUTPUTBUFFER you need sysadmin rights
even to look at your own spid, so you cannot put this in an application that
is to be run by plain users.


Here is the stored procedure that Mark and I produced:


CREATE PROCEDURE showErrorMessage @errmsg nvarchar(500) OUTPUT AS
   DECLARE @dbccrow nchar(77),
           @msglen  int,
           @lenstr  nchar(2),
           @sql     nvarchar(2000),
           @s       tinyint

   -- Catch the output buffer.
   CREATE TABLE #DBCCOUT (col1 nchar(77) NOT NULL)
   INSERT INTO #DBCCOUT
      EXEC ('DBCC OUTPUTBUFFER(@@spid)')

   -- Set up a cursor over the table. We skip the first
   -- row, because there is nothing of interest.
   DECLARE error_cursor CURSOR STATIC FORWARD_ONLY FOR
      SELECT col1
      FROM   #DBCCOUT
      WHERE  left(col1, 8) <> replicate('0', 8)
      ORDER  BY col1

   -- Init variable, and open cursor.
   SELECT @errmsg = ''
   OPEN error_cursor
   FETCH NEXT FROM error_cursor INTO @dbccrow

   -- On this first row we find the length.
   SELECT @lenstr = substring(@dbccrow, 15, 2)

   -- Convert hexstring to int.
   SELECT @sql = 'SELECT @int = convert(int, 0x00' + @lenstr + ')'
   EXEC sp_executesql @sql, N'@int int OUTPUT', @msglen OUTPUT

   -- @s is where the text part of the buffer starts.
   SELECT @s = 62

   -- Now assemble the rest of the string.
   WHILE @@FETCH_STATUS = 0 AND datalength(@errmsg) - 1 < 2 * @msglen
   BEGIN
      SELECT @errmsg = @errmsg + substring(@dbccrow, @s + 1,  1) +
                                 substring(@dbccrow, @s + 3,  1) +
                                 substring(@dbccrow, @s + 5,  1) +
                                 substring(@dbccrow, @s + 7,  1) +
                                 substring(@dbccrow, @s + 9,  1) +
                                 substring(@dbccrow, @s + 11, 1) +
                                 substring(@dbccrow, @s + 13, 1) +
                                 substring(@dbccrow, @s + 15, 1)
      FETCH NEXT FROM error_cursor INTO @dbccrow
   END

   CLOSE error_cursor
   DEALLOCATE error_cursor

   -- Now chop the first character, which is the length, and cut after the end.
   SELECT @errmsg = substring(@errmsg, 2, @msglen)


All sorts of disclaimers apply: we have tested this on a couple of
cases, and it seems to work. But it depends on undocumented behaviour. Use at
your own risk.


TRY-CATCH in SQL 2005


The next version of SQL Server,
SQL 2005, code-named Yukon, introduces significant improvements to error
handling in SQL Server. Here is a simple example:


BEGIN TRY
   SELECT convert(smallint, '2003121')
END TRY
BEGIN CATCH
   PRINT 'errno: ' + ltrim(str(error_number()))
   PRINT 'errmsg: ' + error_message()
END CATCH

The output is:



errno: 244
errmsg: The conversion of the varchar value '2003121' overflowed
an INT2 column. Use a larger integer column.


The construct is similar to error-handling concepts in languages like C++. If an error occurs in the TRY block,
or in a stored procedure called by the TRY block,
execution is transferred to the CATCH block. In the CATCH block, you have access
to six new functions: error_number(), error_severity(), error_state(),
error_message(), error_procedure() and error_line(), which give you all parts
of the message associated with the error. And, yes, error_message() is the
expanded message with the parameters filled in.



If you are in a transaction, and the error that occurred is a
batch-aborting error, your transaction will be doomed. This means
that you cannot commit or perform any more updates within the transaction –
you must roll back.
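
Here is a hedged sketch (assuming SQL 2005) of how you might deal with this in the CATCH block; the table sometable is invented for the example, and xact_state() is the function that tells you whether the transaction is doomed:

BEGIN TRANSACTION
BEGIN TRY
   INSERT sometable (a) VALUES (1)
   INSERT sometable (a) VALUES (convert(int, 'ABC'))   -- Conversion error.
   COMMIT TRANSACTION
END TRY
BEGIN CATCH
   -- xact_state() is -1 when the transaction is doomed; rolling back is
   -- the only option.
   IF xact_state() <> 0 ROLLBACK TRANSACTION
   PRINT 'Error ' + ltrim(str(error_number())) + ': ' + error_message()
END CATCH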


One caveat is that if you catch an error in this way, the
client will never see the error, unless you call RAISERROR in the error
handler. Unfortunately, you cannot reraise the exact error message, since
RAISERROR does not permit you to use error numbers less than 50000.


Client-side Error Handling


The various client libraries from which you can access SQL Server have
their quirks too. Some libraries are low-level libraries like DB-Library, ODBC
and the SQLOLEDB provider. Others are higher-level libraries that sit on top
of one of the low-level libraries; one example is ADO. If the low-level library
has some quirk or limitation, the high-level library is likely to inherit
that. The high-level library might also add its own quirks and limitations.


I am covering four libraries here: DB-Library, ODBC, ADO and ADO .Net,
although I discuss the first two only briefly, since most developers today
use ADO or ADO .Net.



DB-Library


When it comes to error handling, DB-Library is probably the best in the game.
When SQL Server produces a message – be that an error, a warning or just an
informational message such as a PRINT statement – DB-Library invokes a
callback routine, and in that callback routine you have full access to all
parts of the message: error number, severity level, state, procedure, line
number and of course the message itself. You can then set some global
variable to determine what should happen when you come back from the
DB-Library call that caused the error.


Unfortunately, Microsoft stopped developing DB-Library with SQL 6.5, and you
have poor or no support for new features in SQL Server with DB-Library. Thus,
I cannot but discourage you from using DB-Library.



ODBC


With ODBC, you have to rely on return-status values, and then retrieve the
error message yourself. Exactly how, I have to admit that I am a bit foggy on
at this point. However, you do have access to all parts of the error message,
and you get all messages. This is evidenced by the fact that you get all this
information in Query Analyzer, which connects through ODBC.


ADO


ADO is not that good when it comes to error handling. First, you don't have
full access to the error message. You only get the error number and the error
text. You do not get the severity level (so you don't know whether it really
is an error at all), nor do you get the state, procedure or line number. You do
get something called SQLState, which is a five-character code, not related to SQL
Server but inherited from ODBC. Another problem is that you
far from always get all error messages, as I will detail below.



The basic operation with ADO appears simple: you submit a command to SQL Server, and if there is
an error in the T-SQL execution, ADO raises an error, and if you have set up
an error handler with On Error Goto, this is where you will wind
up. (On Error is for Basic-derived languages. In C++ I suppose you can use
try-catch, but I have not verified this.) You can retrieve all messages from SQL Server in the
Errors collection on the Connection object.



But there are quite a few surprises hiding here. One thing that
makes ADO complicated is that there are so many ways that you can submit a
command and retrieve the results. Partly, this is because ADO permits you
to access other data sources than SQL Server, including non-relational ones.
Also, as your "command" you can simply provide a table name. Since this text is about
error handling with stored procedures in SQL Server, I disregard other possibilities. But
even if you want to invoke a stored procedure, there are a whole lot of
choices:




  • Which provider. You can use SQLOLEDB or MSDASQL (OLE DB over
    ODBC).
  • Cursor location. Server-side cursor or client-side cursor?
    (The concept of a cursor in this context confused me for a long time.
    Being an SQL programmer, I think cursors are bad and should be avoided.
    Eventually, I have understood that a client-side cursor is not really a
    cursor at all. You get the entire data to the client in one go. A
    Server-side cursor gets the data from the server in pieces, which may or
    may not involve an SQL cursor, depending on the cursor type.)


  • From which object to invoke the stored procedure. You can use the
    .Execute method of the Connection and Command objects or the
    .Open method
    of the Recordset object.


  • Command type. You can construct an EXEC command as a string and
    use adCmdText. You can also use adCmdText with ODBC syntax and supply
    parameters through the .Parameters collection. And you can use
    adCmdStoredProc to supply the name of a stored procedure and use the
    .Parameters collection.


  • Cursor type. Cursors can be forward-only, static, dynamic or
    keyset.

  • Lock type. You can choose between read-only, optimistic, batch
    optimistic and pessimistic.


And that's not really all.



What errors you see in your client code
depends on which combination of all these parameters you use. I
developed a form from which I could choose between these parameters, and then
I played with a fairly stupid stored procedure which, depending on input, could cause some errors, generate some
PRINT messages and produce some result sets. And there was a great
difference in what I got back. When I used SQLOLEDB and client-side cursors,
I did not get either of my two PRINT messages in my .Errors collection
if there were no errors, whereas
with SQLOLEDB and server-side cursors I got both messages. With MSDASQL, I
got the first PRINT message, but not the second, no matter the cursor
location.



If there were error messages, I did not always get all of them, but at least
one error was communicated and an error was raised in the VB code. However, there is a
gotcha here, or two depending on how you see it. The first gotcha is that if
the stored procedure produces one or more recordsets before the error
occurs, ADO will not raise an error until you have walked past those
preceding recordsets with .NextRecordset. This is not
peculiar to ADO, but as far as I know applies to all client libraries, and is
how SQL Server passes the information to the client. The only odd thing with ADO is that many programmers do not use
.NextRecordset, or even know about it. I have also found that in some
situations ADO may raise an error and say that .NextRecordset is not
supported for your provider or cursor type.



The second gotcha is that your procedure may have more
recordsets than you can imagine. To wit, INSERT, UPDATE and DELETE statements generate
recordsets to report the rowcount, unless the setting NOCOUNT is ON.



Another irritating feature with ADO that I found was that as soon as there had been
an error in the stored procedure, all subsequent result
sets from the stored procedure were discarded. I could still tell from the
return value of the stored procedure that execution had continued. I have
found no combination where you can get the result sets that were produced after an error.

ADO also takes the freedom to make its own considerations about what is an error.
I found that ADO always considers division by zero to be an error, even if
both ARITHABORT and ANSI_WARNINGS are OFF. In this case, SQL Server merely
produces a warning, but ADO opts to handle this warning as an error. A good thing in my opinion. From what I have found, this
only happens
with division by zero; not with arithmetic errors such as overflow.



Above I said that even if I did not get all errors from SQL Server, ADO would
raise an error. This is true as long as we are talking about commands you
submit yourself. But ADO can submit commands behind your back, and if they
result in errors, ADO may not alert you – even if they abort the batch and
thereby roll back any outstanding transaction. This ugly situation is described further
in KB article 810100.



Finally, a note on the return value and the values of output parameters from a
stored procedure. They are accessible from ADO, even if there is an error
during execution of the stored procedure (as long as the error does not cause the
procedure to terminate execution). If you
use a client-side cursor, you can normally access them directly after
executing the procedure, whereas with a server-side cursor you must first
retrieve all rows in all result sets. (Which means that if .NextRecordset
is not supported for your cursor, you may not be able to retrieve the return
value.) Beware that if you try to retrieve these values too soon, you will
not be able to retrieve them even when you have retrieved all rows.



It is not really the topic for this text, but the reader might want to know
my recommendation of what to choose from all these possibilities. And I say that
you should use the SQLOLEDB provider (note that MSDASQL is the default),
client-side cursors (note that server-side cursors are the default), and invoke
your stored procedures from the Command object, using adCmdStoredProc.
Not because this is the best for error handling, but because this
appears to be the best from an overall programming perspective. (If you make
these choices you will get a static read-only cursor.)



ADO .Net


Note: this applies to ADO .Net 1.1. Since some behaviour I describe may be
due to bugs or design flaws, earlier or later versions of ADO .Net may
differ on some points.


To some extent, ADO .Net is much better suited than ADO to handle errors and
informational messages from SQL Server, but unfortunately ADO .Net is not
without shortcomings either.



The ADO .Net classes can be divided into two groups: the disconnected classes
that are common for all data sources, and the connected classes that are
data-source specific but derived from a common interface. Such a group of
connected classes makes up a .Net Data Provider, and each provider has its own
namespace. Three providers can connect to SQL Server: there
is SqlClient, which is specific to SQL Server, and there are the OLE DB and
ODBC .Net Data Providers that connect to anything for which there is an OLE DB
provider or an ODBC driver. I will refer to them here as OleDb and Odbc,
as this is how their namespaces are spelled in the .Net Framework.



If the only data source you target is SQL Server, SqlClient is of course the
natural choice. As we shall see, however, there are situations where OleDb
may be preferable. There is even the odd case where Odbc is the best choice,
but as I will detail later, you do best to avoid Odbc when connecting to SQL
Server.


The three data providers have some common characteristics when it comes to handling
of errors and messages from SQL Server, but there are also significant
differences. I will first cover the common features.



To invoke a stored procedure from ADO .Net, you need a Command object
(SqlCommand, OleDbCommand or OdbcCommand). Normally you specify the
CommandType as StoredProcedure and provide the procedure name as the command
text, but you can also use CommandType Text and specify an EXEC statement.
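
To make this concrete, here is a minimal sketch with SqlClient. The connection
string, the procedure name some_sp and the parameter @par1 are placeholders of
my own, not anything defined elsewhere in this article. (The code assumes
Imports System.Data and Imports System.Data.SqlClient.)

Dim cnn As New SqlConnection("Server=(local);Database=mydb;Integrated Security=SSPI")
Dim cmd As New SqlCommand("some_sp", cnn)
cmd.CommandType = CommandType.StoredProcedure
cmd.Parameters.Add("@par1", SqlDbType.Int).Value = 4711

' The alternative: CommandType Text with an EXEC statement.
' cmd = New SqlCommand("EXEC some_sp @par1 = 4711", cnn)
' cmd.CommandType = CommandType.Text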


There are four methods that you can use to invoke a stored procedure from ADO
.Net, and I list them here in the order you are most likely to use them:


  • DataAdapter.Fill – Fills a DataTable or a DataSet with the data from
    the stored procedure. There are several overloaded Fill methods, some of
    which permit you to pass a CommandBehavior to specify that you want key or
    schema information, or that you want only a single row or a single result
    set.

  • ExecuteNonQuery – Performs a command that does not return any result set
    (or if it does, you are not interested in it). One example is a stored
    procedure that updates data.

  • ExecuteReader – Returns a DataReader object, through which you can access
    the rows as they come from SQL Server. If there are several result sets,
    you use .NextResult to traverse them. This is the most general method to
    access data. Also here you can specify a CommandBehavior.

  • ExecuteScalar – Use this method to run a command that produces a result
    set of a single value.
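
As an illustration of the first of these methods, here is a sketch that fills
a DataTable from the command set up in the earlier sketch; again, the names are
my own placeholders.

Dim da As New SqlDataAdapter(cmd)
Dim dt As New DataTable()
' Fill opens and closes the connection itself if it is not already open.
da.Fill(dt)
Console.WriteLine("{0} rows returned.", dt.Rows.Count)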


To test the possible variations, I wrote a simple application in VB .Net,
from which I could pass an SQL command or a stored procedure, and select which data provider and which call method to use.
For most of the tests,
I used a procedure that, depending on input parameters, would
produce result sets, informational or error messages, possibly interleaved.
What follows is based on my observations when playing with this application.


If an error occurs during execution of a stored procedure, the method you
used to invoke the procedure will raise an exception. Thus, you should
always call these methods within a Try-Catch block, so that you can
handle the error message in some way. In the exception handler you have
access to a provider-specific Exception object with an ErrorCollection that
contains information about the error. What information is available is
specific to each provider.
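
Here is a minimal sketch of such a Try-Catch block with SqlClient; cmd and cnn
are assumed to be set up as in the earlier sketches. With OleDb or Odbc you
would catch OleDbException or OdbcException instead.

Try
   cnn.Open()
   cmd.ExecuteNonQuery()
Catch ex As SqlException
   ' The Errors collection holds one SqlError per message from SQL Server.
   Console.WriteLine("The procedure failed with {0} error(s): {1}", _
                     ex.Errors.Count, ex.Message)
Finally
   cnn.Close()
End Try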



If you are interested in informational messages, that is messages with a
severity ≤ 10, you can set up an InfoMessage event handler, which
you register with the Connection object. It seems, though, that if there
are both errors and informational messages, the informational messages
come with the exception. In the event handler, too, you have access to the
ErrorCollection from where you can retrieve the individual messages.
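
This is a sketch of how such a handler could look with SqlClient; the handler
name OnInfoMessage is my own.

' Register the handler before executing any command on the connection.
AddHandler cnn.InfoMessage, AddressOf OnInfoMessage

' ... and elsewhere in the same class or module:
Private Sub OnInfoMessage(ByVal sender As Object, _
                          ByVal e As SqlInfoMessageEventArgs)
   Dim sqlErr As SqlError
   For Each sqlErr In e.Errors
      Console.WriteLine("Msg {0}: {1}", sqlErr.Number, sqlErr.Message)
   Next
End Sub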


As long as you stick to Fill, ExecuteNonQuery and ExecuteScalar,
your life is very simple, as all data has been retrieved
once you come back, and if there is an error you wind up in your exception
handler. Thus, in contrast to ADO, you don't have to bother about unexpected result sets
and all that. If you want the return value of a stored procedure or the
value of output parameters, these are available in the Parameters
collection. However, the OleDb and Odbc providers normally do not fill in
these values, if an error occurs during execution of a stored
procedure.
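
For instance, with SqlClient you could set up the return value and an output
parameter like this; the parameter name @outpar is a placeholder of my own.

Dim retval As SqlParameter = cmd.Parameters.Add("@RETURN_VALUE", SqlDbType.Int)
retval.Direction = ParameterDirection.ReturnValue
Dim outpar As SqlParameter = cmd.Parameters.Add("@outpar", SqlDbType.Int)
outpar.Direction = ParameterDirection.Output

cmd.ExecuteNonQuery()

' After execution the values are available in the Parameters collection.
Console.WriteLine("Return value: {0}, output parameter: {1}", _
                  retval.Value, outpar.Value)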


If you use ExecuteReader, there are a few extra
precautions. If the stored procedure first produces a result set, and then a
message, you must first call .NextResult before you get an exception,
or, for an informational message, before any InfoMessage event handler is
invoked. In contrast to ADO, ADO .Net does not produce extra result sets
for the rowcount of INSERT, UPDATE and DELETE statements. However, under
some circumstances, errors and messages may give rise to extraneous result
sets.



Beware that if
.NextResult throws an exception, it does not return a value, so if you have
something like:


Do
   ....
   Try
      more_results = reader.NextResult()
   Catch e As Exception
      MsgBox(e.Message)
   End Try
Loop Until Not more_results

more_results retains the value it had before you called .NextResult.
(Caveat: I'm not an experienced .Net programmer, but this is my observation.)
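
If the intention is to leave the loop when this happens, one workaround (my own
suggestion, nothing prescribed by ADO .Net) is to reset the flag in the Catch
block:

Do
   ' Process the current result set here.
   Try
      more_results = reader.NextResult()
   Catch e As Exception
      MsgBox(e.Message)
      more_results = False   ' Make sure the loop terminates after the error.
   End Try
Loop Until Not more_results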

To
get the return value from a stored procedure and the value of output
parameters when you use ExecuteReader, you first have to retrieve all
rows and all result sets for these values to be available.
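
That is, with ExecuteReader you need something like this sketch before you look
at the return value (assuming the return-value parameter was added as in the
previous sketch, and that the connection is open):

Dim reader As SqlDataReader = cmd.ExecuteReader()
Do
   While reader.Read()
      ' Process the current row here.
   End While
Loop While reader.NextResult()
reader.Close()

' Only at this point is the return value available.
Console.WriteLine("Return value: {0}", cmd.Parameters("@RETURN_VALUE").Value)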


Just like ADO, ADO .Net can sometimes generate commands behind your back;
this appears mainly to happen when you use the CommandBehaviors
KeyInfo and SchemaOnly. But in contrast to ADO, ADO .Net
communicates any SQL errors from these extra commands, and throws an
exception in this case too.


So far, it may seem that ADO .Net is a lot more well-behaved than ADO. To some
extent it is, but I will now proceed to the specifics for each data
provider, and this mainly deals with their respective shortcomings.



SqlClient


One very nice thing with SqlClient is that the SqlError class includes
all components of an SQL Server message: server, error
number, message text, severity level, state, procedure and line number.


Another good thing with SqlClient is that, in contrast to the other two
providers, you almost always get the return value and the value of output
parameters from a stored procedure, even if there is an error during execution
(provided that the error does not terminate the execution of the procedure,
of course).
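
As a sketch, this is how you could print all these components from the
exception handler; cmd is assumed to be a SqlCommand set up as earlier, with an
open connection.

Try
   cmd.ExecuteNonQuery()
Catch ex As SqlException
   Dim sqlErr As SqlError
   For Each sqlErr In ex.Errors
      Console.WriteLine("Msg {0}, Level {1}, State {2}, Procedure {3}, Line {4}, Server {5}", _
                        sqlErr.Number, sqlErr.Class, sqlErr.State, _
                        sqlErr.Procedure, sqlErr.LineNumber, sqlErr.Server)
      Console.WriteLine(sqlErr.Message)
   Next
End Try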


But there are a couple of bad things too:




  • If the procedure produces more than one error, you only get one error
    message, unless you are using ExecuteNonQuery. This may be
    addressed by the fix described in
    KB 823679.

  • If the procedure produces an error before the first result set, you
    cannot access any data with any of the methods. (ExecuteReader does
    not even return a SqlDataReader object.) If you need to access data
    in this case, Odbc is your sole possibility.


  • If the stored procedure produces a result set, then an error, then
    another result set, there is only one way to retrieve the second result
    set: use ExecuteReader and be sure to have SET NOCOUNT ON. If you
    run with NOCOUNT OFF, things can go really bad, and data may linger on the
    connection and come back when the connection is reused from the pool.
    Eventually SqlClient may get stuck in an infinite loop or throw some
    nonsensical exception.

  • RAISERROR WITH NOWAIT does not work with ExecuteNonQuery,
    but the messages are buffered as if there was no NOWAIT. Use
    any of the other methods if you need RAISERROR WITH NOWAIT. (Note that to
    use NOWAIT, you must use CommandType Text and a single
    unparameterized SQL string, due to a bug in SQL Server.)



OleDb


In an OleDbErrorCollection, you don't have access to all information
about the error from SQL Server, but only the message text and the message
number.


Notes on OleDb:



  • If there is an error message during execution, in most situations OleDb
    does not provide the return value of the stored procedure or the
    value of any output parameters.

  • If the procedure produces more than one error, you only get one error
    message if NOCOUNT is OFF. If NOCOUNT is ON, you may get all messages,
    unless there are result sets interleaved with the messages. For some
    reason the error messages come in reverse order.


  • If the procedure produces an error before the first result set, you
    cannot access any data with any of the methods. (ExecuteReader does
    not even return an OleDbDataReader object.) If you need to access
    data in this case, Odbc is your sole possibility.

  • If the stored procedure produces a result set, then an error, then
    another result set, there is only one way to retrieve the second and
    successive result
    sets: use ExecuteReader and be sure to have SET NOCOUNT OFF.
    If you have NOCOUNT ON, you will still get a lot of result sets, but most
    of them will be empty.


  • RAISERROR WITH NOWAIT does not always work with
    OleDb, but the messages are sometimes buffered. I have not been able to
    find a pattern for this. For NOWAIT to work at all, you must use
    CommandType Text, because of a bug in SQL 2000.


Odbc


In an OdbcErrorCollection, you don't have access to all information
about the error from SQL Server, but only the message text and the message
number.


Odbc has all sorts of problems with errors and informational messages. If
there are several informational messages, Odbc may lose control and fail to
return data, including providing the return value and the values of output
parameters of stored procedures. It does not matter whether you have declared
an InfoMessage event handler. If there are error messages, and you try
to retrieve data, you may get exceptions from the ODBC SQL Server
driver saying Function sequence error or Associated statement not
prepared.



If there are error messages before any result sets are produced, Odbc may not
throw an exception for the first error message, but only invoke your
InfoMessage
event handler. And if you don't have one, you
will not even notice that there was an error. Under some circumstances more
than one error message may be dropped this way.


Some of these problems may go away if you run with SET NOCOUNT ON, but not
all. In general, therefore, I advise against using the Odbc .Net Data
Provider to access SQL Server.


Still, there is one situation where Odbc is your sole choice, and that is if
you call a stored procedure that first produces an error message and then a
result set. The other two providers never return any data in this
situation. With Odbc you can do it – but it is a narrow path to follow.
You must have SET NOCOUNT ON. If you only have one result set, you can
probably use OdbcDataAdapter.Fill. If there is more than one result set,
you must use ExecuteReader, and you must specify the CommandBehavior
SingleResult (!). You may get an exception about Function Sequence Error
at the end, but by then you
have retrieved all your data.
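
Here is a sketch of this narrow path. The connection string and the procedure
name error_then_data_sp are placeholders of my own, the procedure is assumed to
issue SET NOCOUNT ON, and the code assumes Imports System.Data and Imports
System.Data.Odbc.

Dim cnn As New OdbcConnection("Driver={SQL Server};Server=(local);" & _
                              "Database=mydb;Trusted_Connection=yes")
' The Odbc provider wants ODBC call syntax for stored procedures.
Dim cmd As New OdbcCommand("{call error_then_data_sp}", cnn)
cmd.CommandType = CommandType.StoredProcedure

cnn.Open()
Try
   ' SingleResult is required to get at the result set after the error.
   Dim reader As OdbcDataReader = cmd.ExecuteReader(CommandBehavior.SingleResult)
   While reader.Read()
      ' Process the rows that follow the error message.
   End While
   reader.Close()
Catch ex As OdbcException
   ' A Function Sequence Error may still be raised at the end,
   ' but by then the data has been retrieved.
   Console.WriteLine(ex.Message)
Finally
   cnn.Close()
End Try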



Acknowledgements and Feedback

Thanks to Trevor Morris who pointed out the tidbit on IMPLICIT_TRANSACTIONS and error 266, Mark Williams who investigated DBCC OUTPUTBUFFER, and SQL Server MVP Jacco Schalkwijk who found the definition of severity levels 11-16.

If you have technical questions that any knowledgeable person could answer, I encourage you to post to either of the newsgroups microsoft.public.sqlserver.programming or comp.databases.ms-sqlserver. For more articles on error handling in .Net, check out ErrorBank.com.

An SQL text by Erland Sommarskog, SQL Server MVP.
