January 19, 2014 at 11:25 pm
Comments posted to this topic are about the item Logging and Error Handling for SQL Stored Procedures
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
January 20, 2014 at 12:53 am
It seems the links (from the daily newsletter and from the site itself) do not work.
January 20, 2014 at 2:49 am
Hi
The article now works from the home page.
It is very comprehensive.
Regards
January 20, 2014 at 9:31 am
I appreciate the author's hard work.
I find implementing error handling in MS SQL Server to be insanely tedious. I have never seen a clean error handling implementation with T-SQL.
I wish the CATCH block could pass an ErrorObject to a sproc. I don't want each sproc to be responsible for logging messages; I'd rather call a single sproc, hand it the ErrorObject, and let that one sproc do all the logging.
Oh yeah, I also don't want my error log entries getting rolled back... MS should implement "autonomous transactions" too.
Am I asking for too much?
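Short of true autonomous transactions, one trick gets part of the way there: table variables are not affected by ROLLBACK, so log rows parked in one survive the rollback and can be written to the real log table afterwards. A quick, self-contained illustration (the message text is just a placeholder):

DECLARE @PendingLog TABLE (LogTime DATETIME2 NOT NULL, LogMessage NVARCHAR(4000) NOT NULL);

BEGIN TRANSACTION;
INSERT INTO @PendingLog (LogTime, LogMessage)
VALUES (SYSDATETIME(), N'Something the step wanted to log');
ROLLBACK TRANSACTION;

-- The row is still in @PendingLog after the ROLLBACK, so it can now be
-- copied to the permanent log table without being undone.
SELECT LogTime, LogMessage FROM @PendingLog;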
January 20, 2014 at 9:33 am
This was a great article.
I've used a similar albeit simpler table for logging at two different companies over the past 10 years. The log file has been very useful for junior developers and summer interns since they can clearly see what is happening with their newly written code.
I also keep track of how many times a stored procedure has been run. This comes in handy for performance tuning. I can look over the past year and see which procedures are using the most time. It also makes it easier to track average, min, max, and standard deviation for a stored procedure's duration. It is useful to note if a procedure's run time has been increasing over time, and if so, what steps are taking longer.
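As an example of the kind of summary this enables, a simple aggregate over the log does the job (the table and column names here are just illustrative, not our actual schema):

SELECT  ProcedureName,
        COUNT(*)                                AS Executions,
        AVG(DATEDIFF(ms, StartTime, EndTime))   AS AvgMs,
        MIN(DATEDIFF(ms, StartTime, EndTime))   AS MinMs,
        MAX(DATEDIFF(ms, StartTime, EndTime))   AS MaxMs,
        STDEV(DATEDIFF(ms, StartTime, EndTime)) AS StDevMs
FROM    dbo.ProcedureRunLog                     -- hypothetical log table
WHERE   StartTime >= DATEADD(year, -1, GETDATE())
GROUP BY ProcedureName
ORDER BY SUM(DATEDIFF(ms, StartTime, EndTime)) DESC;   -- heaviest total time first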
January 20, 2014 at 11:21 am
Great article. Thanks for sharing.
-Mike
January 20, 2014 at 5:18 pm
First of all, thanks to all for reading the article and providing feedback.
I have to agree with Charles that error handling in SPs can be a bit tedious and certainly could be improved. But for now it is what it is, and until Microsoft chooses to gift us with something better, we must play the hand they've dealt us.
One of the reasons I decided to try to "standardize" error handling for my case was that different people were doing it all kinds of different ways, and you could never be 100% sure how well they had tested it, particularly the behavior when a step fails. The most common issue I was seeing was the failure to check XACT_STATE() for the -1 (uncommittable transaction) case, which can happen after a deadlock (or certain other severe errors).
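For anyone who hasn't hit that case, here is the shape of the check I mean, as a simplified sketch (the log table and column names are stand-ins, not the article's actual schema):

BEGIN TRY
    BEGIN TRANSACTION;
    -- ... the step's actual work goes here ...
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    DECLARE @ErrMsg NVARCHAR(4000) = ERROR_MESSAGE();   -- capture before rolling back

    IF XACT_STATE() = -1
        -- Doomed transaction (e.g. deadlock victim): COMMIT would fail, only ROLLBACK is allowed.
        ROLLBACK TRANSACTION;
    ELSE IF XACT_STATE() = 1
        -- Still committable, but the step failed, so roll back its partial work as well.
        ROLLBACK TRANSACTION;

    -- Log after the rollback so the log row itself isn't undone.
    INSERT INTO dbo.ProcessLogs (LogTime, LogMessage)    -- column names assumed
    VALUES (SYSDATETIME(), @ErrMsg);

    RAISERROR(@ErrMsg, 16, 1);   -- re-raise so the caller/job still sees the failure
END CATCH;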
January 20, 2014 at 5:20 pm
rcnewcomb 53012 (1/20/2014)
I also keep track of how many times a stored procedure has been run. This comes in handy for performance tuning. I can look over the past year and see which procedures are using the most time. It also makes it easier to track average, min, max, and standard deviation for a stored procedure's duration. It is useful to note if a procedure's run time has been increasing over time, and if so, what steps are taking longer.
Indeed I've done something similar, even adding an emailed report that lists top 10 run times or such. Quite useful to make sure data growth is not impacting your SPs adversely.
Usually I also include an email notification that a particular batch job completed, along with various stats about the job (and importance=High when there's an error). I thought including that email template, while nice, was probably a bit too much to add to the article. I may do a follow up with that.
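The email itself is just Database Mail pointed at the run-time log, roughly along these lines (profile, recipient, database, and table names below are placeholders, not my real setup):

EXEC msdb.dbo.sp_send_dbmail
     @profile_name           = N'DefaultMailProfile',
     @recipients             = N'etl-team@example.com',
     @subject                = N'Top 10 stored procedure run times',
     @importance             = N'Normal',          -- switch to 'High' when there's an error
     @execute_query_database = N'MyEtlDatabase',
     @query = N'SELECT TOP (10) ProcedureName,
                       MaxMs = MAX(DATEDIFF(ms, StartTime, EndTime))
                FROM   dbo.ProcedureRunLog
                GROUP  BY ProcedureName
                ORDER  BY MaxMs DESC;';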
January 21, 2014 at 3:55 am
Hello, thank you for your work.
I'm still looking for a transaction-independent way of logging the activity going on in an arbitrary number of procedures, regardless of their nesting level; I think it can only be resolved using CLR.
January 21, 2014 at 10:23 am
This is a must-have for anyone trying to create a T-SQL-dominant ETL solution. Command logging is an absolute must. Good stuff, Dwain.
If you combine this solution with the command logging from PROJECT REAL, you have the goods! 😀
January 21, 2014 at 5:35 pm
allarmi and sneumersky:
I hope you found my ramblings useful. In the future, one can hope that error handling capabilities will be improved upon but for now it is what it is.
January 22, 2014 at 11:38 am
Thank you very much for the article! I found it very useful. I do have a comment though. With the approach presented, we have to incorporate logging code within each of the individual stored procedures thus making them very bloated. In addition, pretty much all of the logging logic will be redundant and spread all over the place, so if we decide to change something about our logging process, it would be very problematic. Wouldn't it be better to have a dedicated stored procedure, which does the logging work and accepts a bunch of parameters? This way everything is more centralized.
Thank you!
January 22, 2014 at 1:05 pm
Misha_SQL (1/22/2014)
Thank you very much for the article! I found it very useful. I do have a comment though. With the approach presented, we have to incorporate logging code within each of the individual stored procedures thus making them very bloated. In addition, pretty much all of the logging logic will be redundant and spread all over the place, so if we decide to change something about our logging process, it would be very problematic. Wouldn't it be better to have a dedicated stored procedure, which does the logging work and accepts a bunch of parameters? This way everything is more centralized. Thank you!
Exactly my thoughts on this.
I guess this is why I'm against moving my SQL to SPs. I run them now from a bat file and direct the output to a file with the same name in the same folder, just with .log on the end. I get the same type of information without the clutter.
-------------------------------------------------------------
we travel not to escape life but for life not to escape us
Don't fear failure, fear regret.
January 22, 2014 at 5:15 pm
below86 (1/22/2014)
Misha_SQL (1/22/2014)
Thank you very much for the article! I found it very useful. I do have a comment though. With the approach presented, we have to incorporate logging code within each of the individual stored procedures thus making them very bloated. In addition, pretty much all of the logging logic will be redundant and spread all over the place, so if we decide to change something about our logging process, it would be very problematic. Wouldn't it be better to have a dedicated stored procedure, which does the logging work and accepts a bunch of parameters? This way everything is more centralized. Thank you!
Exactly my thoughts on this.
I guess this is why I'm against moving my SQL to SP. I run them now with a bat file and I direct the output to the same file name in the same folder only with .log at the end. I get the same type of information without the clutter.
Thank you for that input. Others are probably wondering the same thing.
Suppose you create a SP to call when you need to do the logging INSERT. The minimum number of parameters to that SP is going to be all of the columns in the INSERT statement. You may want to include a couple of additional control parameters, and maybe some OUTPUT parameters as well.
Now let's assume that something about your logging changes. Perhaps you add a column to the ProcessLogs table. You'll still need to seek out all of the EXEC statements for your SP and modify them to include the additional columns. Sounds to me like it is still a lot of work. Additionally you may still have to add the code to put whatever into those columns as appropriate to the batch process.
At least with the INSERT statements explicitly referencing all of the table's columns, none of them will fail as long as any columns you add allow NULLs.
It is possible (although I can't say for certain) that any change to the logging process will involve changes wherever you've referenced the SP.
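To put that trade-off in concrete terms, the centralized version would look something like this sketch (the procedure name and columns are stand-ins for whatever your ProcessLogs table actually holds):

CREATE PROCEDURE dbo.LogProcessStep    -- hypothetical central logging proc
    @ProcessName  VARCHAR(100),
    @StepName     VARCHAR(100),
    @RowsAffected INT            = NULL,
    @ErrorNumber  INT            = NULL,
    @ErrorMessage NVARCHAR(4000) = NULL
    -- ...in practice, roughly one parameter per ProcessLogs column,
    -- plus whatever control and OUTPUT parameters you need.
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.ProcessLogs
            (ProcessName, StepName, RowsAffected, ErrorNumber, ErrorMessage, LogTime)
    VALUES  (@ProcessName, @StepName, @RowsAffected, @ErrorNumber, @ErrorMessage, SYSDATETIME());
END;

Every batch SP then calls EXEC dbo.LogProcessStep with its own values, and a new ProcessLogs column still means touching this procedure and, if you want the column populated, the calling code as well.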
January 23, 2014 at 7:38 am
I just added this to my ETL system for reporting, as I was looking to add a new one when you wrote the article. I had a basic one beforehand, but nothing as detailed with per-step timings, row counts, or error logging just yet. This worked right out of the box and was easy to implement. I'm already making tweaks to the code and conforming it to some of my own templates. I even built a warning system to log when counts are lower than expected or missing, since I'm using this for building a report. That way we know at the ground level when problems occur in my batch processing.
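The warning check itself is nothing fancy; it boils down to something like this (the names, columns, and threshold are made up for illustration):

DECLARE @Rows            INT = @@ROWCOUNT,   -- captured immediately after the load statement
        @MinExpectedRows INT = 1000;         -- illustrative threshold for this step

IF @Rows < @MinExpectedRows
    INSERT INTO dbo.ProcessLogs (ProcessName, StepName, Severity, LogMessage, LogTime)  -- columns assumed
    VALUES (N'DailyReportBuild', N'LoadReportData', N'WARNING',
            N'Only ' + CAST(@Rows AS NVARCHAR(12)) + N' rows loaded; expected at least '
                     + CAST(@MinExpectedRows AS NVARCHAR(12)),
            SYSDATETIME());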
Thanks for the article!