Performance Tuning Stored Procedures

  • Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/sjones/performancetuningstoredprocedures.asp

  • Great article!

    Why not change your PRINT statement to:

    PRINT 'Select 1 - '+CONVERT(varchar(23),GETDATE(),121)

    This will output:

    Select 1 - 2006-07-19 21:24:09.230

    Which should give you better timing diagnostics.
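
    For illustration, here's a minimal sketch (the procedure and table names are made up) of how such timestamped PRINTs might be sprinkled through a procedure:

    CREATE PROCEDURE dbo.usp_TimingDemo
    AS
    BEGIN
        -- stamp the start time
        PRINT 'Start    - ' + CONVERT(varchar(23), GETDATE(), 121)

        SELECT COUNT(*) FROM dbo.Orders        -- hypothetical query 1
        PRINT 'Select 1 - ' + CONVERT(varchar(23), GETDATE(), 121)

        SELECT COUNT(*) FROM dbo.OrderDetails  -- hypothetical query 2
        PRINT 'Select 2 - ' + CONVERT(varchar(23), GETDATE(), 121)
    END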

    Andy

  • Thanks Steve, some useful stuff there.

    Just one small note, in Profiler you don't have to create a new trace to add events. If you pause or stop your trace, you can then choose FILE | PROPERTIES, then the EVENTS tab, and edit your current trace. Then just start the trace again.

  • Hi,

    Using GETDATE() will give you the elapsed time, which is not reliable: if other processes are running, they will slow yours down.

    Use the CPU time instead.


    DECLARE @previous_cputime FLOAT, @current_cputime FLOAT

    SET @previous_cputime = @@CPU_BUSY * CAST(@@TIMETICKS AS FLOAT)

    -- do some stuff

    SET @current_cputime = @@CPU_BUSY * CAST(@@TIMETICKS AS FLOAT)

    -- this will print the CPU time in seconds
    PRINT (@current_cputime - @previous_cputime) / 1000000.0


    Regards

    Jacques

     

    What I like in Steve's articles is their simplicity and the fact that there's no beating around the bush. I hope all the SQL Server guys know the first method using GETDATE().

     

    I didn’t know what events to select while profiling a SP. Good article!

  • This is quite possibly one of the worst performance tuning articles I have ever seen. What about using the set statistics options? What about looking at IO Reads/Writes?

    Very convenient how your 'optimisation' was to remove an erroneous WAITFOR statement in the proc. If only the real world were that simple.

    Hmmm, that's very harsh! Tuning is an art which takes some time to learn, as is very obvious from the postings in the various forums. It's all so much easier when you've been doing it for many years - I remember my first brushes with query plans and Profiler (well, some of this wasn't quite so easy in 6.0).

    I have to admit that I rarely use the SET STATISTICS (TIME) option, as I find the output inconsistent and confusing; I always use Profiler as Steve suggests. Tuning for I/O is another path, if you like, although I agree you might want to view I/O and time together. But how much can you put into an article without it becoming too confusing?

    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Thanks for the comments.

    This is a basic article on some easy ways to get started. There are lots of other ways to dig into individual problems and the SET comments give you other flexibility. I'll look into those in another article.

  • People can and do write whole books about performance tuning in SQL Server (I know, I've bought a few). I think it's unrealistic to assume that a single article can provide more than a start with a few specific examples. While the WAITFOR example probably could have been better, overall it's a good article to get people started and give them a few techniques.

    I work with a lot of vendor products from major name companies and you wouldn't believe how much "enterprise" software goes out which could learn a lot from a few of these simple techniques.

  • I think Oli misunderstood the purpose of #1.  I believe it was Steve's intention to use it to identify whether or not the proc was the one that was running slowly, not exactly what its performance problem was...  I say that because, from what I know of Steve, I'm pretty sure he understands that Time is not a system resource.    You would never say "the server ran out of time, that's why it's running slow!"

    I also agree with whoever stated that SET STATISTICS IO ON seems to give unreliable data... but then, I could be using it or interpreting the results incorrectly, so I'm not going to say it isn't a valid option.

    I've learned that a SAN can hide a multitude of bad SQL.  My performance conditions used to be "anything that takes longer than 5 seconds".  Now, I look at two events:  SP:StmtCompleted and SQL:StmtCompleted, and limit it to anything that does reads > 2000 or duration > 100 ms.  You get a bit of sensory overload, but I can cite numerous examples where my problem was something that finished in 500 ms but ran for thousands of iterations.  I agree with Oli that I/O is going to be your problem about 95% of the time.
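
    For what it's worth, a saved trace file can be filtered the same way after the fact with fn_trace_gettable (the file path below is just a placeholder, and the columns must have been captured in the trace):

    SELECT TextData, SPID, StartTime, Reads, Duration
      FROM ::fn_trace_gettable('C:\Traces\MyTrace.trc', DEFAULT)
     WHERE Reads > 2000
        OR Duration > 100   -- Duration is in milliseconds in SQL 2000 traces, microseconds in 2005
     ORDER BY Reads DESC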

    One question I have is this:  the DBA team limits development staff to db_owner in DEV and no rights to TEST databases.  We try like crazy to keep sa limited to only the DBAs.  Is there any way to allow a developer to run Profiler without granting sa to them or giving them access to the procs in master? 

  • This is a great article Steve.  Very simple as has been stated, but people should know how to apply a simple tutorial to a complex environment.

    I always knew profiler was one of the better troubleshooting tools available, but it is not often people write basic articles teaching how to use its better features.

    Thanks, I am definitely hanging onto this article until I become a pro with profiler.

  • Jacques,

    I'm not sure how your example helps because the cpu time you produce is for the entire server, not an individual spid... as you said about simple elapsed time, if someone else is doing something, their time will also be included in your calculations.

    Here's what I use... 

    SELECT @StartCPU = SUM(CPU)
      FROM Master.dbo.SysProcesses
     WHERE SPID = @@SPID
     GROUP BY SPID

    ...do something...

    SELECT @EndCPU = SUM(CPU)
      FROM Master.dbo.SysProcesses
     WHERE SPID = @@SPID
     GROUP BY SPID

    ...display the delta as you will... you can also do the same with disk I/O.  See "SysProcesses" in Books Online.
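
    To make that concrete, a minimal, self-contained sketch of the pattern (the WAITFOR just stands in for the real work, and the variable names are mine):

    DECLARE @StartCPU INT, @EndCPU INT, @StartIO BIGINT, @EndIO BIGINT

    SELECT @StartCPU = SUM(cpu), @StartIO = SUM(physical_io)
      FROM Master.dbo.SysProcesses
     WHERE SPID = @@SPID

    -- ...do something...
    WAITFOR DELAY '00:00:05'

    SELECT @EndCPU = SUM(cpu), @EndIO = SUM(physical_io)
      FROM Master.dbo.SysProcesses
     WHERE SPID = @@SPID

    PRINT 'CPU delta:          ' + CAST(@EndCPU - @StartCPU AS varchar(20))
    PRINT 'Physical I/O delta: ' + CAST(@EndIO - @StartIO AS varchar(20))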

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

    Using any CPU metric as a gross indicator of performance is dodgy.  Performance, surely, should be measured by how long the operation takes to complete (i.e. elapsed time), not by how much time was spent on the CPU.

    What happens if the major problem is I/O?  CPU time might look fine, elapsed time poor.

    CPU time is useful to try to work out what part of the overall operation was poorly performing, but only in conjunction with other measures (I/O stats, elapsed time etc).
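
    For instance (the procedure name here is just a placeholder), the SET STATISTICS options mentioned elsewhere in this thread will show CPU time, elapsed time and I/O reads side by side for whatever you run:

    SET STATISTICS TIME ON
    SET STATISTICS IO ON

    EXEC dbo.usp_TimingDemo   -- or whichever statement you are investigating

    SET STATISTICS TIME OFF
    SET STATISTICS IO OFF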

  • Absolutely concur!  I was just demonstrating one method of getting the CPU time expended by a single spid.  I usually get the I/O the same way as well as measuring total duration doing a simple date diff between start and end times of each section of code as well as the overall duration of a proc.  And, I write them to a table so I can check the progress while it's running.
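
    As a rough sketch of that logging idea (the table, procedure and step names are invented):

    CREATE TABLE dbo.ProcTimings
    (
        ProcName  sysname,
        StepName  varchar(100),
        StartTime datetime,
        EndTime   datetime
    )

    -- inside the procedure, around each section of code:
    DECLARE @Start datetime
    SET @Start = GETDATE()

    -- ...section of code being timed...

    INSERT dbo.ProcTimings (ProcName, StepName, StartTime, EndTime)
    VALUES ('usp_TimingDemo', 'Step 1', @Start, GETDATE())

    -- then query the table while the procedure is still running:
    SELECT ProcName, StepName, DATEDIFF(ms, StartTime, EndTime) AS DurationMS
      FROM dbo.ProcTimings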

    Steve... great lead-in to tuning and I'm kinda glad you didn't clog up the works with setting statistics time to ON, etc.

    --Jeff Moden



  • I'm probably missing something here, but for @@CPU_BUSY BOL states that it "Returns the time in milliseconds ... that the CPU has spent working since Microsoft® SQL Server™ was last started."  So, what has @@TIMETICKS to do with this?  ("Returns the number of microseconds per tick.")  For seconds "working" shouldn't we only need "SELECT (@current_cputime - @previous_cputime ) /1000.0"?

    If @@CPU_BUSY returned the number of 'ticks' I could understand.

    As an aside, I put both Jacques' method and Jeff's "select sum(CPU) from sysprocesses" method through a wrangler with a "waitfor delay '00:00:20'".  The sum(CPU) consistently returns "0" whilst the @@CPU_BUSY always returns a value > 0 (ie: 0.125 or 0.5).  Presumably this is because the @@ functions are doing things "beneath the hood", or is it to do with how sysprocesses is handled?  (This is an 'efficiency' question here - I haven't had that much to do with CPU metrics, but it looks like it might be useful.  Somewhere.  Somehow.  :rolleyes:)
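
    For anyone who wants to check the units empirically, the raw values can be printed side by side (purely illustrative):

    SELECT @@CPU_BUSY                                          AS cpu_busy_raw,
           @@TIMETICKS                                         AS microseconds_per_tick,
           @@CPU_BUSY * CAST(@@TIMETICKS AS FLOAT)             AS product,
           @@CPU_BUSY * CAST(@@TIMETICKS AS FLOAT) / 1000000.0 AS product_as_seconds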

    S.

     
