Baselining info to compare before and after SP performance tuning modifications?

  • It appears that the main challenge I am facing now is BASELINING.

    I am trying to figure out whether there is ANYTHING else at all, in addition to Profiler trace data, that would be valuable to save now and then read back AFTER I apply certain performance tuning modifications, to compare the NEW metrics to the OLD ones.

    Does anyone know what else would be critical to include in the current baseline metrics?

    I assume that saving and later comparing the data from sys.dm_exec_query_stats or other DMVs is no good because it is cumulative and is cleared only when the server restarts, and no restart is planned (though see the snapshot sketch after this post).

    My main purpose is improving the performance of 30 to 40 stored procedures, for which I have created additional indexes and modified the T-SQL in some of the SPs.

    And I need to be able to clearly show in my report to Management the POSITIVE difference that my tuning modifications have made. I would appreciate any useful practical advice. Thanks.

    Likes to play Chess
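
    One way around the cumulative nature of the DMVs mentioned above is to snapshot them and diff the snapshots, so no restart is needed. A minimal sketch, assuming SQL 2008 or later; the database name and snapshot table are placeholders you would rename and create:

        -- First run creates the snapshot table; later captures should use
        -- INSERT INTO dbo.BaselineProcStats SELECT ... instead of SELECT INTO.
        SELECT  GETDATE()                                 AS capture_time,
                DB_NAME(ps.database_id)                   AS db_name,
                OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
                ps.execution_count,
                ps.total_worker_time,     -- CPU time, microseconds
                ps.total_elapsed_time,    -- duration, microseconds
                ps.total_logical_reads,
                ps.total_logical_writes
        INTO    dbo.BaselineProcStats
        FROM    sys.dm_exec_procedure_stats AS ps
        WHERE   ps.database_id = DB_ID(N'YourDatabase');  -- placeholder name

    The delta between a pre-tuning and a post-tuning snapshot gives per-procedure numbers for the report. One caveat: rows in sys.dm_exec_procedure_stats exist only while a plan stays in cache, so a recompile or eviction between snapshots resets that procedure's counters.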

  • What is the current issue with the stored procedures that you are trying to resolve? Duration? Reads? Writes? Is there a specific wait type that is an issue you are trying to resolve, e.g. CXPACKET? (A wait-stats snapshot sketch follows this post.) We will need more information on what you are trying to resolve before we can simply give you all of the metrics to look into. I would suggest that you get Grant Fritchey's book "SQL Server 2012 Query Performance Tuning". In it he walks you through how to identify problem queries (using different metrics) and the steps to fix those issues.



    Microsoft Certified Master - SQL Server 2008
    Follow me on twitter: @keith_tate

    Forum Etiquette: How to post data/code on a forum to get the best help
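
    For the wait-type angle above, aggregate waits can be snapshotted the same way, so before/after deltas are comparable. A minimal sketch; the table name is a placeholder and the excluded wait types are only a small sample of benign ones:

        -- First run creates the table; use INSERT INTO ... SELECT afterwards.
        SELECT  GETDATE() AS capture_time,
                wait_type,
                waiting_tasks_count,
                wait_time_ms,
                signal_wait_time_ms
        INTO    dbo.BaselineWaitStats
        FROM    sys.dm_os_wait_stats
        WHERE   wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                                  N'SQLTRACE_BUFFER_FLUSH', N'BROKER_TASK_STOP');

        -- Optional: reset the counters for a clean measurement window.
        -- This is server-wide, so coordinate with anyone else using the box.
        -- DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR);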

  • It seems like most of the time I'm tuning, I have a specific query plan, either from a query, a procedure, or from the cache when I'm checking the top 20 slowest.

    I save the actual query plan together with the suggested script, so I can show that the suggested change (new index, code modification, whatever) was due to results found in a query plan, and then I can show the current plan on demand to compare differences. (A sketch of pulling the top 20 slowest plans from cache follows this post.)

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and an INSERT INTO... statement for that table to give the volunteers here representative data. With your description of the problem, we can provide a tested, verifiable solution to your question! Asking the question the right way gets you a tested answer the fastest way possible!
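
    A sketch of the kind of "top 20 slowest from cache" pull described above, saving the cached plan XML alongside the stats. Note this returns the cached (estimated) plan, not an actual plan with runtime counts:

        SELECT TOP (20)
                qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
                qs.execution_count,
                SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                    ((CASE qs.statement_end_offset
                          WHEN -1 THEN DATALENGTH(st.text)
                          ELSE qs.statement_end_offset
                      END - qs.statement_start_offset) / 2) + 1) AS statement_text,
                qp.query_plan                       -- save this XML as .sqlplan
        FROM    sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
        ORDER BY avg_elapsed_us DESC;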

  • VoldemarG (1/7/2014)


    ...for which I have created additional indexes...

    Changing the code in your SPs? Reads, Writes, CPU, and Duration from a trace/Profiler session, preferably over multiple runs, the first of which happens after clearing out caches and so on (a per-run harness is sketched at the end of this post). If you want to look at global impact, overall Perfmon counters - if you affect those, tuning was badly needed, but hopefully your normal workload variance is a much bigger source of difference than tuning will be, so you wouldn't see a statistically significant change.

    Creating new indexes? If you're at a level where you need to show hard evidence that your changes are a net benefit, then you've got a much tougher row to hoe. In that case, you may need to demonstrate that, with whatever your company's business model weighting is, your new indexes have a net positive effect - i.e. the weighted cost of adding them, keeping them updated with every DML statement, maintaining them (REBUILD/REORGANIZE), storing them, backing them up, and so on is less than the weighted benefit they have in every statement (not just the ones you're hoping to make better).
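
    A minimal sketch of the per-run measurement described above. The procedure name and parameter are hypothetical, and the DBCC calls flush caches server-wide, so only run this on a test instance:

        CHECKPOINT;               -- write dirty pages so they can be evicted
        DBCC DROPCLEANBUFFERS;    -- cold buffer pool: forces physical reads
        DBCC FREEPROCCACHE;       -- cold plan cache: forces a fresh compile
        GO
        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        EXEC dbo.usp_YourProcedure @SomeParam = 42;   -- hypothetical proc/param

        SET STATISTICS TIME OFF;
        SET STATISTICS IO OFF;

    The reads, CPU, and duration printed to the Messages tab line up with the same columns in a Profiler trace, which makes the before/after table for management easy to assemble.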

  • Yes. I/O-, memory-, and CPU-intensive SPs. A Profiler trace identified the top 50 most frequently executed SPs with the longest durations, and I started to optimize them, which included not only T-SQL modifications here and there but also creating a few dozen indexes thanks to analysis of query plans. So what I need to do is show management that my optimization changes actually have made a difference. I am just asking if there is something good to rely on in addition to Profiler traces and their analysis (SQL 2008). It is my understanding that the info provided via, for example, the sys.dm_exec_query_stats DMV is not really reliable (being cumulative, dependent on reboot, etc.). So I was wondering if someone has used metrics other than Profiler traces to show "before" and "after" the rollout of optimizing scripts.

    Likes to play Chess

  • Before-and-after estimated execution plans with the stats could be helpful for showing immediate benefits (Grant Fritchey has a kick *** book on execution plans: http://www.red-gate.com/community/books/sql-server-execution-plans-ed-2). A sketch of capturing an actual plan follows this post.

    Overall impact on the system might be a little harder to present without adequate collection of performance data. How often have you been collecting performance data?

    Why is it that people who can't take advice always insist on giving it? - James Bond, Casino Royale
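
    One way to capture the plan evidence mentioned above is SET STATISTICS XML, which returns the actual plan, runtime row counts included, as an extra result set. A sketch with a hypothetical procedure name:

        SET STATISTICS XML ON;
        EXEC dbo.usp_YourProcedure @SomeParam = 42;   -- hypothetical proc/param
        SET STATISTICS XML OFF;
        -- In SSMS the plan arrives as an extra XML result set; click it and
        -- save it as a .sqlplan file, once before and once after the change.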

  • If you really want to baseline things and isolate the changes made, then ideally you might do something like this: record a few segments (hours' worth, including all aspects - input, output, change, daily activities, weekly activities, maintenance, etc.) of your normal workload, and then replay them on a freshly rebooted, completely isolated physical instance that was (pre-reboot) restored to a point in time just before the segment was recorded.

    Record the various counters (sys.dm_os_performance_counters, sys.dm_io_virtual_file_stats, perhaps even sys.dm_db_index_usage_stats, or a Perfmon trace to get Disk Avg Sec/Read and Disk Avg Sec/Write) over a few restore, reboot, replay cycles. Make your changes, then do another few cycles, and look at the overall differences. A snapshot query for the file stats is sketched below.
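
    A sketch of the file-stats snapshot for those replay cycles; the table name is a placeholder, and diffing consecutive snapshots gives the per-cycle I/O and stall numbers:

        -- First run creates the table; use INSERT INTO ... SELECT afterwards.
        SELECT  GETDATE() AS capture_time,
                DB_NAME(vfs.database_id) AS db_name,
                vfs.file_id,
                vfs.num_of_reads,  vfs.num_of_bytes_read,    vfs.io_stall_read_ms,
                vfs.num_of_writes, vfs.num_of_bytes_written, vfs.io_stall_write_ms
        INTO    dbo.ReplayFileStats
        FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;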
