Comments posted to this topic are about the item Always Retry
People REALLY need to read the "A Word of Caution" section at the end of the linked article. I'll also add that if the retry is needed because of bad code or bad database configuration/improper usage (for example, a very high insert rate against an ever-increasing clustered key with lots of indexes, followed by "ExpAnsive" updates to the data just inserted, which will cause instant and massive page splits), the underlying problem actually needs to be fixed rather than papered over with a "perceived panacea patch" like automated retries.
My fear is that people will go with the "perceived panacea patch" instead of actually doing what's right to fix the real problem. Don't say it won't happen... look at how many people (virtually the whole freakin' world, including me a long while back) adopted and continue to use a "Worst Practice" as a "Best Practice" for index maintenance.
Change is inevitable... Change for the better is not.
I'm actually kind of shocked that "idempotency" didn't come up in the article at all (it is buried within the retry guidance but is awfully easy to miss even there). Without knowing how your application/data/integration should react to multiple calls with the same content, implementing retries feels a bit like fishing with dynamite (might get the job done but usually with some nasty side effects).
As Jeff mentioned - understand WHY you might want to retry, and understand the contexts in which retrying something that failed previously might be safe and have a chance of succeeding. In short - read the manual, understand what is possible, make sure that the outcome is a "safe" one, THEN consider leveraging automated retries.
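To make the point about idempotency concrete, here is a minimal sketch (in Python, with hypothetical names - not code from the article) of a retry wrapper that refuses to retry unless the caller explicitly declares the operation idempotent. The backoff-with-jitter detail is a common convention, not something the posters specified.

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a retryable failure (e.g. a deadlock or timeout)."""

def retry(operation, *, idempotent, attempts=3, base_delay=0.1, sleep=time.sleep):
    """Run `operation`; retry transient failures only if it is idempotent.

    A non-idempotent operation (e.g. an INSERT with no dedup key) is run
    exactly once, because a blind retry could apply its effect twice -
    the "fishing with dynamite" side effect described above.
    """
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except TransientError:
            if not idempotent or attempt == attempts:
                raise
            # Exponential backoff with jitter so retries don't stampede.
            sleep(base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5))
```

The key design point is that "is this safe to retry?" is an input the caller must supply after reading the manual, not something the wrapper guesses.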
Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?
This topic has me concerned about a couple of queries I've been working on recently. I'm working on a Windows Presentation Foundation (WPF) app. We use Entity Framework (EF). (I know that EF has a bad rep within the DBA community, but trust me, we developers are still going to use it.) The query I've most recently been working on is complex: the spec requires me to return values from a table, a couple of child tables, and a grandchild table. The EF code works, but under certain circumstances it takes a long time to return results. There are reasons for this, such as the fact that I'm running it in the Visual Studio debugger (any debugging session is inherently slower than production will be). Also, when I've noticed this unacceptably slow behavior, I've encountered it while testing the app on my work laptop at home. (At this point I'm still working from home.) If I remote to my desktop in the office, this behavior isn't as noticeable.
However, I am concerned it might be bad for any users who are farther from our servers. This is hard for me to judge, but at least working from home gives me an idea of what it might be like. My home is about 70 miles from the servers, and for my state I've got fast Internet connectivity, so it surprises me how long the query takes to run. It doesn't time out; in all the testing I've done over the last couple of months, I think I've encountered a timeout error only once. I'm also sure that a large part of it is that we're using older versions of the .NET Framework, EF, etc. Upgrading would certainly help noticeably, but that's unlikely to happen. There's no easy answer here.
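One reason distance hurts an ORM-heavy query so much is the classic N+1 pattern: if child and grandchild rows are fetched lazily, each fetch pays the network latency again. A back-of-envelope cost model (hypothetical row counts and latencies - nothing here is measured from Rod's setup) shows the effect:

```python
def total_query_time_ms(round_trips, latency_ms, server_work_ms):
    """Rough cost model: each client/server round trip pays the network
    latency once, on top of whatever work the server itself does."""
    return round_trips * latency_ms + server_work_ms

# Lazy loading: one round trip for the parent query, then one per child
# and grandchild row (hypothetical counts; the N+1 query pattern).
lazy = total_query_time_ms(round_trips=1 + 50 + 200, latency_ms=30,
                           server_work_ms=100)

# Eager loading (e.g. EF's Include): everything in one round trip.
eager = total_query_time_ms(round_trips=1, latency_ms=30, server_work_ms=100)

print(lazy, eager)  # 7630 vs 130 - latency dominates the chatty version
```

On a low-latency office LAN the same pattern costs almost nothing, which would be consistent with the query feeling fine when remoting to the office desktop but slow from 70 miles away.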
Kindest Regards, Rod