Ignoring errors and continuing

  • roger.plowman

    SSChampion

    Points: 10243

    TRY/CATCH doesn't catch everything. In fact it won't catch most of the things you really want it to so I simply avoid it completely. Nothing worse than an unreliable language element (of which T/SQL has scads).

    In terms of defensive programming you will get a LOT more bang for the buck anticipating the error and testing environmental conditions rather than catching and handling errors. Performs better too!

    My work is typically a .NET front end and SQL back end, so I favor having stored procedures fail and letting the front end handle the error. T/SQL really isn't very good at complex code so where possible I put things in a transaction and use SET XACT_ABORT ON so rollback is automatic when the SP fails. But that's the safety net, I try to make sure of my assumptions and the environment before getting to the meat of the code.

    If, on the other hand, you are in a pure T/SQL environment it's even more important to establish environmental assumptions before trying to execute DML statements. For example, making sure a table exists before trying CRUD operations on it.
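    The existence check Roger describes can be sketched like this; a minimal illustration, where dbo.Employee, the column names, and the 90-day cutoff are all invented for the example, not taken from the OP's code:

```sql
-- Guard pattern sketch: only run the DML if the table exists.
-- OBJECT_ID returns NULL for a missing object, so the test itself never errors.
-- The UPDATE is wrapped in dynamic SQL so this batch never has to resolve
-- dbo.Employee when the table is absent.
IF OBJECT_ID(N'dbo.Employee', N'U') IS NOT NULL
BEGIN
    EXEC (N'UPDATE dbo.Employee
            SET IsActive = 0
            WHERE LoginDate < DATEADD(DAY, -90, GETDATE());');
END
ELSE
BEGIN
    PRINT N'dbo.Employee not found; skipping.';
END
```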

  • schleep

    SSChampion

    Points: 12551

    Declare @msg varchar(100)

    DECLARE db_cursor CURSOR FOR

    SELECT Name FROM Alldatabases

    WHERE OBJECT_ID(Name + 'dbo.<tablename>') IS NOT NULL

    ORDER BY Name

    OPEN db_cursor

     

    In your post, you had 2 tables, Settings and Employee, so maybe you want to check both exist...

  • Steve Collins

    Ten Centuries

    Points: 1105

    roger.plowman wrote:

    TRY/CATCH doesn't catch everything. In fact it won't catch most of the things you really want it to so I simply avoid it completely. Nothing worse than an unreliable language element (of which T/SQL has scads).

    Could you post code which reproduces the inconsistency?

    roger.plowman wrote:

    In terms of defensive programming you will get a LOT more bang for the buck anticipating the error and testing environmental conditions rather than catching and handling errors. Performs better too!

    Unless the SQL fails more often than it succeeds, how could this be true?

    roger.plowman wrote:

    My work is typically a .NET front end and SQL back end, so I favor having stored procedures fail and letting the front end handle the error.

    No SQL Server error logging at all, zero?  What about when the front end is run by a different group of people?  Those folk would like to know why the database is crushing the UX?  It's OK to have no answer?

    roger.plowman wrote:

    T/SQL really isn't very good at complex code so where possible I put things in a transaction and use SET XACT_ABORT ON so rollback is automatic when the SP fails. But that's the safety net, I try to make sure of my assumptions and the environment before getting to the meat of the code.

    It's highly dependent on the competence of the SQL developer.  XACT_ABORT only performs rollback up to the first COMMIT.  The OP was not asking for a complete rollback but only of the proc with the error (and then to continue execution).  I would have to test it, but in hindsight it does seem like XACT_ABORT belongs in test_proc_b and not test_proc_a.  To get a complete rollback of test_proc_a would require nested transactions or another approach.
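    One such "another approach" for undoing only the inner proc's work is a savepoint; a hedged sketch, assuming test_proc_b runs inside a transaction opened by test_proc_a, and with an invented DML statement:

```sql
-- Sketch: roll back only test_proc_b's work via a savepoint, leaving the
-- outer transaction opened by test_proc_a alive so execution can continue.
-- Assumes XACT_ABORT is OFF here; XACT_ABORT ON would doom the whole transaction.
CREATE OR ALTER PROCEDURE dbo.test_proc_b
AS
BEGIN
    SAVE TRANSACTION proc_b_start;   -- requires an active transaction
    BEGIN TRY
        INSERT INTO dbo.table_t DEFAULT VALUES;   -- DML that might fail
    END TRY
    BEGIN CATCH
        -- Undo only the work done since the savepoint.
        ROLLBACK TRANSACTION proc_b_start;
    END CATCH
END
```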

    Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können

  • Steve Collins

    Ten Centuries

    Points: 1105

    schleep wrote:

    Declare @msg varchar(100)

    DECLARE db_cursor CURSOR FOR

    SELECT Name FROM Alldatabases

    WHERE OBJECT_ID(Name + 'dbo.<tablename>') IS NOT NULL

    ORDER BY Name

    OPEN db_cursor

    In your post, you had 2 tables, Settings and Employee, so maybe you want to check both exist...

    Per Steve's post, the WHERE clause is too late to test for the existence of a table.  If you wanted to do something like this it would have to be in the FROM clause.  Have mercy on the next guy tho!  Something like this could be done, but it's not even remotely readable imo:

    select *
    from
    (select getdate() dt) s
    cross join
    (select isnull(object_id(N'dbo.foo', N'U'), 0) oid) x
    where x.oid <> 0;

    Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können

  • roger.plowman

    SSChampion

    Points: 10243

    Steve Collins wrote:

    roger.plowman wrote:

    TRY/CATCH doesn't catch everything. In fact it won't catch most of the things you really want it to so I simply avoid it completely. Nothing worse than an unreliable language element (of which T/SQL has scads).

    Could you post code which reproduces the inconsistency?

    Not code, no, but here's what MS Docs has to say about it, scroll down to "Errors Unaffected by a Try/Catch Construct"

    https://docs.microsoft.com/en-us/sql/t-sql/language-elements/try-catch-transact-sql?view=sql-server-ver15

    Steve Collins wrote:

    roger.plowman wrote:

    In terms of defensive programming you will get a LOT more bang for the buck anticipating the error and testing environmental conditions rather than catching and handling errors. Performs better too!

    Unless the SQL fails more often than it succeeds, how could this be true?

    I was speaking about the execution time of testing for environment (like the presence of a table) vs. letting the error occur. There is significant overhead with a try/catch that doesn't occur with the test. Depending on how many databases the OP has that generate errors the difference might be significant.

    Steve Collins wrote:

    roger.plowman wrote:

    My work is typically a .NET front end and SQL back end, so I favor having stored procedures fail and letting the front end handle the error.

    No SQL Server error logging at all, zero?  What about when the front end is run by a different group of people?  Those folk would like to know why the database is crushing the UX?  It's OK to have no answer?

    No, the front end will handle the logging (into a dedicated table in the database). This also allows "user friendly" annotations to be added. If developers want the raw SQL error they can always use SSMS and call the SP directly. When an SP fails it always returns an error to the front end, at least with .NET.

    Steve Collins wrote:

    roger.plowman wrote:

    T/SQL really isn't very good at complex code so where possible I put things in a transaction and use SET XACT_ABORT ON so rollback is automatic when the SP fails. But that's the safety net, I try to make sure of my assumptions and the environment before getting to the meat of the code.

    It's highly dependent on the competence of the SQL developer.  XACT_ABORT only performs rollback up to the first COMMIT.   The OP was not asking for a complete rollback but only of the proc with the error (and then continue execution).  I would have to test it but in hindsight it does seem like XACT_ABORT belongs in test_proc_b and not test_proc_a.  To get a complete rollback of 'test_proc_a' would require nested transactions or another approach.

    SQL Server doesn't support nested commits, unfortunately. I wish it did. In the case of wanting the code to continue after an "error" my suggestion would be encase that section of code inside the environmental test and only execute it if the test was successful. Avoids errors completely, allows you to ignore missing tables and the like, etc.

    In cases where you want to ignore errors instead of abort the SP completely the environmental test (IMO) is the easiest, cleanest, and least complex approach.

     

  • Steve Collins

    Ten Centuries

    Points: 1105

    Thanks Roger.  Regarding "Errors Unaffected by a TRY/CATCH Construct": the issues are mitigable (imo, all of what follows) if handled properly (knock on wood).  I must be biased tho, because I don't see an alternative to TRY/CATCH functionality and my projects rely on it extensively, full disclosure.  My disconnect must be because the only clients directly connecting to the database(s) I deal with are a bunch of .NET APIs.  No web sites or anything else (besides a few internal tools) connect directly to the database(s).  What you say about always returning an error in .NET is true.  Maybe the client connects directly to the database and we're just describing the same endpoint differently.  Or it's the same!  It's hard to articulate because everything depends on so many things.  From the Docs [which I switched from bullet points to #s], the non-catchable errors listed are:

    TRY...CATCH constructs do not trap the following conditions:

    1. Warnings or informational messages that have a severity of 10 or lower.
    2. Errors that have a severity of 20 or higher that stop the SQL Server Database Engine task processing for the session. If an error occurs that has severity of 20 or higher and the database connection is not disrupted, TRY...CATCH will handle the error.
    3. Attentions, such as client-interrupt requests or broken client connections.
    4. When the session is ended by a system administrator by using the KILL statement.

    #1 is just ignored, which is OK because the db likely delivered the correct result.

    #2 the database engine is FUBAR.  In this case the APIs have health checks which would email me (knock on wood).

    #3 this happens all the time between the APIs and the clients.  Things happen, especially with phone apps.  Between .NET and SQL Server (which is an isolated connection) this is hopefully very infrequent.  Things happen tho.  Hopefully health checks catch this too.  The APIs continually ping the db engine.

    #4 it would be me who was doing the killing 🙂

    Then there's the second set of things TRY/CATCH doesn't catch:

    The following types of errors are not handled by a CATCH block when they occur at the same level of execution as the TRY...CATCH construct:

    1. Compile errors, such as syntax errors, that prevent a batch from running.
    2. Errors that occur during statement-level recompilation, such as object name resolution errors that occur after compilation because of deferred name resolution.
    3. Object name resolution errors

    The code I posted checks for the existence of the table in test_proc_b and interrupts execution by throwing an error prior to the required resolution of the object (table_t or whatever).  My working example tho didn't work, because the code that checked @sys_table_name was commented out.  If that code were not commented out then it would've worked, I think.
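    The "same level of execution" caveat is the load-bearing phrase here. A quick sketch of the difference, assuming dbo.missing_table does not exist:

```sql
-- Not caught: the missing table is resolved at the same level as the
-- TRY...CATCH, so the batch dies with error 208 and CATCH never fires.
BEGIN TRY
    SELECT * FROM dbo.missing_table;
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER(), ERROR_MESSAGE();  -- never reached
END CATCH
GO

-- Caught: EXEC pushes the statement one level down, so the resolution
-- error surfaces back to this level, where TRY...CATCH can trap it.
BEGIN TRY
    EXEC (N'SELECT * FROM dbo.missing_table;');
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER(), ERROR_MESSAGE();  -- error 208, Invalid object name
END CATCH
```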

    Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können

  • schleep

    SSChampion

    Points: 12551

    scdecade,

    Trust me, this works to eliminate from the cursor that supplies @db_name any DBs where the table to be updated doesn't exist.

    SELECT Name FROM sys.databases

    WHERE OBJECT_ID(Name + '.dbo.<tablename>') IS NOT NULL

    Unless you have a table called '<tablename>' in some database, this will return no rows, nor error.

    If you have a table called '<tablename>', you're doing it wrong 🙂

    P

    • This reply was modified 3 months, 1 week ago by  schleep. Reason: Edited to add missing '.' before 'dbo.'
  • Steve Collins

    Ten Centuries

    Points: 1105

    schleep, that sounds interesting.  It's a 3-part naming convention?  Is there a '.' missing before 'dbo.'?  Maybe it's simpler to query the sys table, idk.  It could be done without the inner proc and inside a loop, and you're saying you tested it that way and it works.  Basically don't throw the error, but throwing the error lets you write the error message.  So idk, it all depends.  Got code for the OP?

    Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können

  • schleep

    SSChampion

    Points: 12551

    Since all operations inside the loop are in a single DB, USE @db_name makes the code a lot easier on the eyes.

    SET NOCOUNT ON, only to print out a custom rows affected message? Why?

    DECLARE @db_name VARCHAR(100), 
    @qry VARCHAR(500),
    @Cursor CURSOR

    SET @cursor = CURSOR FOR

    SELECT Name FROM alldatabases

    WHERE OBJECT_ID(Name + '.dbo.employee') IS NOT NULL
    AND OBJECT_ID(Name + '.dbo.Settings') IS NOT NULL

    OPEN @Cursor

    FETCH NEXT FROM @Cursor INTO @db_name

    WHILE @@FETCH_STATUS = 0
    BEGIN

    SET @qry = 'USE ' + @db_name +
    ' Declare @days INT select @days=LongValue from dbo.Settings
    where ID = ''100''
    PRINT @days
    update dbo.employee
    set isactive = 0
    where abs(datediff(day, GETDATE(),LoginDate)) > @days'

    EXEC(@qry)

    FETCH NEXT FROM @cursor INTO @db_name
    END
    CLOSE @Cursor
    DEALLOCATE @Cursor
  • Steve Collins

    Ten Centuries

    Points: 1105

    Ok now it makes sense to me.  For some reason I thought the object_id() was going in the query and not the cursor.  In Azure SQL you're not allowed to switch between DBs using USE.  More of my myopia.  To do something like this in Azure SQL would, I think, require different connection strings to the DBs in .NET, with the loop there too.  It's the same myopia with the error handling.

    Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können

  • Steve Collins

    Ten Centuries

    Points: 1105

    roger.plowman wrote:

    Steve Collins wrote:

    roger.plowman wrote:

    My work is typically a .NET front end and SQL back end, so I favor having stored procedures fail and letting the front end handle the error.

    No SQL Server error logging at all, zero?  What about when the front end is run by a different group of people?  Those folk would like to know why the database is crushing the UX?  It's OK to have no answer?

    No, the front end will handle the logging (into a dedicated table in the database). This also allows "user friendly" annotations to be added. If developers want the raw SQL error they can always use SSMS and call the SP directly. When an SP fails it always returns an error to the front end, at least with .NET.

    Ah ok interesting.  In some systems I work with, the logging and error messaging are handled similarly.  In the past 6 weeks we migrated (mostly) to a new system that logs using Application Insights onto blob storage onto ... [other].  In the new system, "user friendly" annotations are handled by escalating the error number of exceptions intentionally thrown in SQL code.  There's a "threshold value" setting in the data access framework above which error messages are sent directly (from SQL) to the end client.  This gets rid of any dependency on "matching code" being required to handle custom error messaging from the results of stored procedures.  These two TRY/CATCH blocks produce different error numbers when the error is accessed from system functions within the CATCH block.

    begin try
        throw 50000, 'The default error level is 50000', 1;
    end try
    begin catch
        select error_number(), error_message();
    end catch

    begin try
        throw 61000, 'This could be sent to the client', 1;
    end try
    begin catch
        select error_number(), error_message();
    end catch

    In the data access framework in C# there's a method that switches the error message depending on environment "mode" and "ErrorThreshold" level.

    private string ErrorMessage(string mode, int errorThreshold, ErrorResult errorResult, string httpMethod)
    {
        var errorMessageDefault = _options.DefaultErrorMessages[httpMethod];

        switch (mode)
        {
            case "Default":
                return errorMessageDefault;
            case "Passthrough":
                // Pass the raw SQL error message through only when its error
                // number is at or above the configured threshold.
                return errorResult.ErrorNumber >= _options.ErrorThreshold
                    ? errorResult.ErrorMessage
                    : errorMessageDefault;
            case "Debug":
                return errorResult.ErrorMessage;
            default:
                return errorMessageDefault;
        }
    }

    Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können
