Concurrent instances of stored procedures

  • Hi All - I am trying to understand a perhaps basic issue. I have a requirement to run a stored procedure that does a lot of heavy processing. The data is such that multiple instances of this SP can be run concurrently, with each instance taking a piece of the table and completing its processing. This way, the work can be split across multiple instances instead of all of it being done by a single instance. I am trying to understand what factors limit the number of instances I can run simultaneously. Surely, after a point, increasing the number of instances will not increase performance. Is it the number of CPUs, free threads, or something else? Is there any way to estimate the optimum number of instances to run?
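
    A minimal sketch of the kind of slicing described above, assuming a purely hypothetical dbo.WorkTable with an integer key column Id; the table, column, and parameter names here are illustrative only:

        -- Each concurrent instance is handed a 0-based instance number and the total
        -- instance count, and claims a non-overlapping slice of the table by modulo.
        CREATE PROCEDURE dbo.ProcessSlice
            @InstanceNumber int,   -- 0, 1, 2, ... up to @TotalInstances - 1
            @TotalInstances int
        AS
        BEGIN
            SET NOCOUNT ON;

            SELECT w.Id, w.SomeColumn
            FROM   dbo.WorkTable AS w
            WHERE  w.Id % @TotalInstances = @InstanceNumber;  -- this instance's share

            -- ...heavy processing against just this slice...
        END;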

  • Why are you trying to split up the processing? SQL Server will take care of that for you using parallelism. If that isn't working fast enough, post the procedure and someone here will be able to help you optimize it to work better.

    As for how many instances of a procedure can run concurrently? It depends on a lot of factors: memory, CPU, disk I/O, etc.

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • I have a requirement to run a stored procedure that does a lot of heavy processing.

    I agree with Jeffrey above. I'll add that anytime I see the words "heavy processing", I think "tuning/rewriting for performance".

    At a previous company, they tried everything to get their "heavy lifting" procs to run faster... they changed from the Standard Edition to the Enterprise Edition. They changed from a 4 CPU box to a monster 16 CPU box. They changed from a simple RAID array to a full blown EMC SAN. They tried "concurrent" processing. It all helped... a little. They still had many, many jobs that took 8 to 12 hours to sometimes fail.

    One job in particular was a "simple" dupe check that was supposed to check across 93 daily databases (don't look at me, I didn't design the bloody thing :-P) of about 4 million rows each (call records, to be specific). It would usually take 10 to 12 hours, and sometimes as much as 24, to sometimes (many times, actually) fail even with all the wonderful hardware and software they spent mega-bucks on. To make matters worse, they were actually in violation of PUC regulations because they had to run the job in a single night (PUC reporting requirements) and, because it took so long, they could only run the job on 62 (2 months' worth) of the databases. That was for month-end runs. They also had to run it daily, and the daily run took at least 45 minutes and even then would fail many times a day.

    They simply didn't do what was necessary... rewrite the horrible code.

    The Manager of the Billing Department (he knew much more about the underlying data than I did) and I got together. He wrote out a description and I wrote the code. The first time they ran a (previously 45 minute) daily run, they thought it had failed. When I asked why, they said it was because it only ran for about 3 seconds. When I told them to check the data, they found it had run correctly in those 3 seconds.

    When they ran the 10-24 hour month end run, it ran in only 11 minutes and that was on all 93 databases. That was 3 years ago and the code hasn't failed yet. We did the same thing to many other 8 to 10 hour runs. It was a fair bit of work but worth every moment. We went from a dozen 8 to 10 hour failures per day (and 640 deadlocks per day) to no failures (and almost no deadlocks) and the night operators actually had time to do maintenance instead of making panic judgement calls.

    The moral of the story is... stop wasting your time trying to trick the system into doing something it can't using cheap and usually ineffective tricks like concurrent runs and the like. Instead, spend the time rewriting the code using all the best practices you can muster up. Don't try to save the old code because you'll fall into the same traps the original writers did. It's like painting over bad wood... it'll actually take longer than replacing the bad wood with good. Redesign and rewrite the code. Even a bad database design can be redeemed with excellent code.

    One thing to watch for is "joined UPDATEs". If the target of the UPDATE isn't in the FROM clause, then depending on when parallelism occurs and whether just the right indexes are present, a 2-minute run can slam 4 CPUs into the wall for 2 hours while the code recompiles for every row in every table it touches. It's a very common problem, and most people don't recognize it even when they're looking right at the code they've identified as the culprit. (See the sketch below.)
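
    A minimal illustration of that pattern, using hypothetical dbo.Target / dbo.Source names; the only point being made is that the update target should also appear, and be joined, in the FROM clause:

        -- Pattern to watch for: the update target is referenced but is NOT in the FROM clause
        UPDATE dbo.Target
        SET    SomeCol = s.SomeCol
        FROM   dbo.Source AS s
        WHERE  dbo.Target.KeyCol = s.KeyCol;

        -- Safer form: the target is in the FROM clause and updated through its alias
        UPDATE tgt
        SET    tgt.SomeCol = s.SomeCol
        FROM   dbo.Target AS tgt
               INNER JOIN dbo.Source AS s
                       ON s.KeyCol = tgt.KeyCol;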

    There are many other classic errors to watch for... overuse of indexes (can cause huge numbers of reads on INSERTs and UPDATEs), aggregated views of aggregated views, inappropriate use of views (usually, some criteria against a calculated or aggregate column), scalar functions, functions that call other functions, inappropriate use of recursion (as bad as or worse than most cursors), triangular joins, accidental cross joins (developers usually try to "fix" these by using DISTINCT), inappropriate use of cursors and WHILE loops (neither is bad when used appropriately... a lot of people don't know when they're appropriate, though), non-sargable join and filter criteria, inappropriate use of tables (using a high-hit-ratio OLTP table in conjunction with batch processing), inappropriate use of dynamic SQL (dynamic SQL is NOT a bad or evil thing... the way some people use it is), trying to do everything in a single query, and on and on and on. And I haven't even touched database design, inappropriate use of ORMs, or poorly written/conceived embedded code, yet.
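
    To pick just one item from that list, a small sketch of a non-sargable filter versus its sargable rewrite, against a hypothetical dbo.Orders table:

        -- Non-sargable: wrapping the column in a function prevents an index seek
        SELECT OrderId, OrderDate
        FROM   dbo.Orders
        WHERE  YEAR(OrderDate) = 2023;

        -- Sargable: an open-ended date range lets the optimizer seek an index on OrderDate
        SELECT OrderId, OrderDate
        FROM   dbo.Orders
        WHERE  OrderDate >= '20230101'
          AND  OrderDate <  '20240101';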

    Fix the code... that's where the performance is.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Thank you for your insight. The requirement is to process data in a 300 million row table. This is a daily process and one of several such processes. Based on the values in various fields, records need to be processed in different ways by applying logic encapsulated in other stored procedures. I would love to avoid cursor-based row-by-row processing, but due to the large number of conditions it does not seem avoidable. Hence the idea of distributing the processing amongst several instances, where each instance could pick up a chunk of the table and process those rows. Can you please share some ideas on the best way to approach this requirement? I can't post the code here for confidentiality reasons.
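
    Purely as an illustrative sketch (hypothetical dbo.BigTable / dbo.ArchiveTable names and invented conditions), the per-row dispatch described above can often be recast as one set-based pass per condition instead of one procedure call per row:

        -- Instead of a cursor that inspects each row and calls a proc for it,
        -- handle each condition as a single statement over the whole set.

        -- Rows meeting condition 1: one UPDATE
        UPDATE t
        SET    t.Status = 'ProcessedA'
        FROM   dbo.BigTable AS t
        WHERE  t.RecordType = 'A';

        -- Rows meeting condition 2: one INSERT...SELECT
        INSERT INTO dbo.ArchiveTable (Id, Amount)
        SELECT t.Id, t.Amount
        FROM   dbo.BigTable AS t
        WHERE  t.RecordType = 'B';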
