Logging and Error Handling for SQL Stored Procedures

  • xsevensinzx (1/23/2014)


    I just added this to my ETL system for reporting, as I was looking to add a new one when you wrote the article. I had a basic one beforehand, but nothing as detailed in terms of per-step logging, row counts, or error logging. This worked right out of the box and was easy to implement. I'm already making tweaks to the code and conforming it to some of my own templates. I even built a warning system to log when counts are lower than expected or missing, since I'm using this for building a report. That way we know at the ground level when problems occur in my batch processing.

    Thanks for the article!

    Thanks for the excellent description of the additional capabilities you can build by utilizing the log file.
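
    As a concrete illustration of the row-count warning idea above, here is a minimal sketch; every table, column, and step name below is an assumption for illustration, not from the article:

        -- Compare today's row count for a step against its trailing 7-day
        -- average; log a WARNING row when the count is low or missing.
        DECLARE @AvgCnt INT, @TodayCnt INT;

        SELECT @AvgCnt   = AVG(RowCnt),
               @TodayCnt = SUM(CASE WHEN LogTime >= CAST(GETDATE() AS DATE)
                                    THEN RowCnt END)   -- NULL if no rows today
        FROM dbo.StepLog                      -- hypothetical step-level log
        WHERE StepName = N'Load DailySales'   -- hypothetical step name
          AND LogTime >= DATEADD(DAY, -7, GETDATE());

        IF @TodayCnt IS NULL OR @TodayCnt < @AvgCnt / 2
            INSERT INTO dbo.StepLog (StepName, LogTime, RowCnt, Status)
            VALUES (N'Load DailySales', GETDATE(), ISNULL(@TodayCnt, 0), N'WARNING');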



  • Thanks for the article, Dwain! Just last week I added a logging table for executing stored procs, mainly because I have a pair of do/undo procs that can be run by users and I want to know if undos are run, but I hadn't figured out a way to monitor step progress within an SP, and this gives me an excellent step in the right direction!

    Though I never worked in a mainframe environment, most of my education was on one, and I loved working in JCL! I, too, view SPs as batch jobs.

    -----
    [font="Arial"]Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson[/font]

  • Wayne West (4/9/2015)


    Thanks for the article, Dwain! ...

    Glad you found it useful. All I can say is that it has served me well (you could say I eat my own dog food).



  • It is interesting to note that SQL MVP Erland Sommarskog has written a series of articles on error handling in T-SQL.

    Error and Transaction Handling in SQL Server

    That's just the first of three parts (the other parts are linked from his blog). He goes into much more detail than I do.



  • allarmi (1/21/2014)


    Hello, thank you for your work.

    I'm still looking for a transaction-independent way of logging the activity in an arbitrary number of procedures, regardless of their nesting level; I think it can be resolved only by using CLR.

    You need a CLR procedure that creates its own connection to the database to perform logging outside the caller's transaction. Not too difficult with Windows authentication.
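
    For illustration, the T-SQL side of such a CLR logger might look like this (assembly, class, and procedure names are hypothetical; the .NET method would open its own SqlConnection rather than the context connection, so its writes commit outside the caller's transaction):

        -- Register the assembly; EXTERNAL_ACCESS is required because the
        -- CLR code opens a regular (non-context) connection.
        CREATE ASSEMBLY LoggingClr
        FROM 'C:\clr\LoggingClr.dll'
        WITH PERMISSION_SET = EXTERNAL_ACCESS;
        GO
        -- Expose the CLR method as an ordinary stored procedure.
        CREATE PROCEDURE dbo.WriteLogClr
            @Message NVARCHAR(4000)
        AS EXTERNAL NAME LoggingClr.[LogProcs].WriteLog;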

  • david.wright-948385 (9/18/2015)


    You need a CLR procedure that creates its own connection to the database to perform logging outside the caller's transaction. ...

    You could also use OPENROWSET, which creates a new connection, but in both cases scalability should be considered due to the overhead of opening a new connection for every log message. Used liberally, that could amount to tons of connections being created and destroyed for something that should be mostly background and lightweight, i.e., logging.

    I would like to see autonomous transactions added to the engine, to solve this riddle and also to make some scenarios where savepoints are used much easier to write.
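
    Until then, one workaround is a loopback linked server configured so its remote calls are not promoted to a distributed transaction. It carries the same per-call connection overhead, and all names below are hypothetical:

        -- One-time setup: a linked server that points back at this instance.
        DECLARE @srv SYSNAME = @@SERVERNAME;
        EXEC sp_addlinkedserver
             @server = N'LOOPBACK', @srvproduct = N' ',
             @provider = N'SQLNCLI', @datasrc = @srv;
        EXEC sp_serveroption N'LOOPBACK', N'rpc out', N'true';
        -- The key option: remote calls keep their own local transaction.
        EXEC sp_serveroption N'LOOPBACK', N'remote proc transaction promotion', N'false';

        -- Inside any transaction, this log write survives a ROLLBACK
        -- because it commits on the linked server's separate connection.
        EXEC LOOPBACK.MyDb.dbo.WriteLog @Message = N'Step 3 complete';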


  • Orlando Colamatteo (9/18/2015)

    You could also use OPENROWSET...

    As you say, OPENROWSET is very inefficient. A CLR is efficient as long as the connection object is cached.

  • Orlando Colamatteo (9/18/2015)


    I would like to see autonomous transactions be added to the engine...

    That would be ideal.

  • Interesting article; thanks for writing it.

    I work a lot with dimensional data warehouses and use SPs for all of my ETL work. I do logging very similar to this. I never thought to log a SQL_VARIANT; that idea will have to go onto my “to review” list.

    I end up using a simpler logging structure, but in fact use three log tables (sketched after the list below).

    Job: a single record for each SP. Primary key is IDENTITY based, and is used by the other 2 logs for recording activity.

    Job details: contains the “step by step” activity, and errors (if encountered) for the job.

    Data Integrity: records (at a business key / time grain) any changes to the facts that I encounter: not the initial insert of a fact, but any facts that change when the ETL is re-run.
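
    Illustrative shapes for the first two of those tables (the names and columns are my guesses at the described layout, not the poster's actual DDL; the data-integrity table would be keyed at the business key / time grain):

        CREATE TABLE dbo.JobLog (
            JobID     INT IDENTITY PRIMARY KEY,   -- referenced by the other two logs
            ProcName  SYSNAME           NOT NULL,
            StartTime DATETIMEOFFSET(7) NOT NULL DEFAULT SYSDATETIMEOFFSET(),
            EndTime   DATETIMEOFFSET(7) NULL,
            Remarks   NVARCHAR(4000)    NULL      -- general "information" column
        );

        CREATE TABLE dbo.JobDetailLog (
            JobDetailID INT IDENTITY PRIMARY KEY,
            JobID       INT NOT NULL REFERENCES dbo.JobLog (JobID),
            StepTime    DATETIMEOFFSET(7) NOT NULL DEFAULT SYSDATETIMEOFFSET(),
            StepDesc    NVARCHAR(400)     NOT NULL,
            RowCnt      INT               NULL,
            ErrorMsg    NVARCHAR(4000)    NULL    -- errors, if encountered
        );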

    Any “time” fields used in the logging are DATETIMEOFFSET(7). Along with the standard error-collection fields you have (I call them event fields and reuse them for other features), I have a couple of general “information” columns (like “remarks”).

    One design difference I have (compared to your setup) is that I log the activity into a table variable (with a defined table type, like you did), but I pass that to an SP that stores it in the log table (via MERGE). I might have several steps (or records) in the logging variable before I get it into the log. By using an SP to do the logging, I can record detail activity using an INSERT / SELECT statement instead of just the variables; a sketch follows.
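
    A sketch of that handoff, assuming a hypothetical table type and the JobDetailLog table above (the poster's proc uses MERGE; a plain INSERT / SELECT is shown here for brevity):

        CREATE TYPE dbo.StepLogType AS TABLE (
            StepDesc NVARCHAR(400) NOT NULL,
            RowCnt   INT           NULL
        );
        GO
        CREATE PROCEDURE dbo.SaveStepLog
            @JobID INT,
            @Steps dbo.StepLogType READONLY   -- table-valued parameters must be READONLY
        AS
            -- Persist every accumulated step in one statement.
            INSERT INTO dbo.JobDetailLog (JobID, StepDesc, RowCnt)
            SELECT @JobID, StepDesc, RowCnt
            FROM @Steps;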

    For example, with very minor exceptions I use MERGE on my fact tables and capture the OUTPUT into a “data integrity” table variable. I then run a COUNT against that table variable, with a GROUP BY on the $action field (INSERT, UPDATE, DELETE). In that way I record (in the details) the row count of the ETL, including 0 if no new data arrived.
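
    In sketch form (the fact and staging table names are hypothetical):

        -- Capture what the MERGE did, row by row, via the OUTPUT clause.
        DECLARE @Changes TABLE (MergeAction NVARCHAR(10), BusinessKey INT);

        MERGE dbo.FactSales AS tgt
        USING dbo.StageSales AS src
            ON tgt.BusinessKey = src.BusinessKey
        WHEN MATCHED AND tgt.Amount <> src.Amount THEN
            UPDATE SET tgt.Amount = src.Amount
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (BusinessKey, Amount) VALUES (src.BusinessKey, src.Amount)
        OUTPUT $action, COALESCE(inserted.BusinessKey, deleted.BusinessKey)
            INTO @Changes (MergeAction, BusinessKey);

        -- Row counts per action, ready to write to the detail log.
        SELECT MergeAction, COUNT(*) AS RowCnt
        FROM @Changes
        GROUP BY MergeAction;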

    As a final step in my ETL code, I'll check to see whether any updates (or deletes, depending) occurred in the facts. If so, I'll log the “business key” level data into a “data integrity” log table and set a WARNING state for the SP.

    Beer's Law: Absolutum obsoletum
    "if it works it's out-of-date"

  • Charles Sommers (1/20/2014)


    ...

    I wish the catch block could pass an ErrorObject to a sproc. I don't want each sproc to be responsible for logging messages. I want to just call a single sproc with an ErrorObject. I would have one sproc to log the error. ...

    You can always write your own stored procedure to accomplish this.
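
    A minimal sketch of such a procedure (all names are hypothetical): the ERROR_*() functions keep their values inside any procedure executed from within a CATCH block, so one central proc can do all the logging.

        CREATE PROCEDURE dbo.LogError
            @ProcName SYSNAME
        AS
            -- ERROR_*() still return the caught error's details here.
            INSERT INTO dbo.ErrorLog
                (LogTime, ProcName, ErrorNumber, ErrorSeverity, ErrorLine, ErrorMessage)
            VALUES
                (SYSDATETIMEOFFSET(), @ProcName,
                 ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_LINE(), ERROR_MESSAGE());
        GO
        -- Usage: every proc's CATCH block shrinks to two lines.
        BEGIN TRY
            EXEC dbo.SomeEtlStep;
        END TRY
        BEGIN CATCH
            EXEC dbo.LogError @ProcName = N'dbo.SomeEtlStep';
            THROW;   -- rethrow after logging (SQL Server 2012+)
        END CATCH;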



    Alvin Ramard
    Memphis PASS Chapter

    All my SSC forum answers come with a money back guarantee. If you didn't like the answer then I'll gladly refund what you paid for it.

    For best practices on asking questions, please read the following article: Forum Etiquette: How to post data/code on a forum to get the best help

  • david.wright-948385 (9/18/2015)


    A CLR is efficient as long as the connection object is cached.

    That would take some additional engineering to avoid creating a bottleneck, and also to release unused cached connections... like implementing your own connection pool.


  • Let me just add here that there are more ways to do logging than you can shake a stick at. Like many other things in the software development arena, some people will prefer one method over another and likely they'll have good reasons for doing so.

    The thing that levels the playing field, though, is that logging is a must. No matter how good we are at programming, no matter which language is your poison of choice, no matter how pressed for time the development schedule is, logging of one sort or another is a must. So my advice is to just pick a method and use it, but don't keep reinventing it all the time.

    Eventually, your data will do you wrong, regardless of how well you've analyzed what underlies the processing you intend. In those cases, you must fall back on something to get you out of a bind at 1 AM. Sleepy guesses at that point simply don't cut it.



  • Orlando Colamatteo (9/18/2015)


    That would take some additional engineering to avoid creating a bottleneck, and also to release unused cached connections... like implementing your own connection pool.

    Absolutely: I was assuming low volume. It's not a big job, and throughput will dictate how necessary it is.

  • Good article, thank you very much.
