SQLServerCentral is supported by Red Gate Software Ltd.
Baselining info to compare before and after SP performance tuning modifications?
Author
Message
Posted Tuesday, January 7, 2014 8:55 AM
SSC Rookie


Group: General Forum Members
Last Login: Wednesday, August 20, 2014 11:17 AM
Points: 39, Visits: 93

It appears that the main challenge facing me now is baselining.
I am trying to figure out whether there is anything else, beyond Profiler trace data, that would be valuable to capture now, so that I can read the same information AFTER applying my performance tuning modifications and compare the new metrics to the old ones.

Does anyone know what else would be critical to include in the current baseline metrics?
I assume that saving and comparing data from sys.dm_exec_query_stats or other DMVs is no good, because it is cumulative and is cleared only when the server restarts, and no restart is planned.
My main purpose is improving the performance of 30 to 40 stored procedures, for which I have created additional indexes and modified some of the T-SQL.
I need to be able to clearly show in my report to management the positive difference my tuning modifications have made. I would appreciate any practical advice. Thanks.



Voldemar
likes to play chess
Post #1528531
Posted Tuesday, January 7, 2014 9:04 AM


Mr or Mrs. 500


Group: General Forum Members
Last Login: 2 days ago @ 4:29 PM
Points: 588, Visits: 900
What is the current issue with the stored procedures that you are trying to resolve? Duration? Reads? Writes? Is there a specific wait type that is an issue, e.g. CXPACKET? We will need more information on what you are trying to resolve before we can give you the full list of metrics to look into. I would suggest that you get Grant Fritchey's book "SQL Server 2012 Query Performance Tuning". In it he walks you through how to identify problem queries (using different metrics) and the steps to fix those issues.



Microsoft Certified Master - SQL Server 2008
Follow me on twitter: @keith_tate

Forum Etiquette: How to post data/code on a forum to get the best help
Post #1528536
Posted Tuesday, January 7, 2014 9:30 AM


SSChampion


Group: General Forum Members
Last Login: 2 days ago @ 7:44 AM
Points: 12,910, Visits: 32,015
Most of the time when I'm tuning, I have a specific query plan in hand, either from a query, from a procedure, or from the cache when I'm checking the top 20 slowest queries.

I save the actual query plan together with the suggested script, so I can show that the suggested change (new index, code modification, whatever) was driven by results found in the query plan, and then I can show the current plan on demand to compare the differences.
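For example, something along these lines will pull a procedure's cached plan (plus a few of its runtime stats) so it can be saved alongside the tuning script; the procedure name is a placeholder:

```sql
-- Grab the cached plan and cumulative stats for one procedure.
-- 'YourProcName' is a placeholder; run in the procedure's database.
SELECT
    OBJECT_NAME(ps.object_id)  AS proc_name,
    ps.execution_count,
    ps.total_worker_time,
    ps.total_logical_reads,
    qp.query_plan              -- XML; save the content as a .sqlplan file
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE ps.object_id = OBJECT_ID('dbo.YourProcName');
```

Clicking the query_plan XML in SSMS opens it as a graphical plan, which you can then save for the "before" side of the comparison.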


Lowell

--There is no spoon, and there's no default ORDER BY in sql server either.
Actually, Common Sense is so rare, it should be considered a Superpower. --my son
Post #1528552
Posted Thursday, January 9, 2014 11:24 AM
SSC Eights!


Group: General Forum Members
Last Login: Wednesday, August 27, 2014 9:54 AM
Points: 865, Visits: 2,381
VoldemarG (1/7/2014)

...for which I have created additional indexes...


Changing the code in your SPs? Reads, writes, CPU, and duration from a trace/Profiler session, preferably over multiple runs, the first of which happens after clearing out the caches and so on. If you want to look at global impact, use overall Perfmon counters; if your tuning visibly moves those, tuning was badly needed. Hopefully your normal workload variance is a much bigger source of difference than the tuning, in which case you wouldn't see a statistically significant change there.

Creating new indexes? If you're at a level where you need to show hard evidence that your changes are a net benefit, then you've got a much tougher row to hoe. In that case, you may need to demonstrate that, with whatever your company's business model weighting is, your new indexes have a net positive effect, i.e. the weighted cost of adding them, keeping them updated with every DML statement, maintaining them (REBUILD/REORGANIZE), storing them, and backing them up is less than the weighted benefit they provide across every statement (not just the ones you're hoping to make better).
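A rough per-run measurement along these lines gives you comparable cold-cache numbers for a single procedure; the procedure name is a placeholder, and the DBCC commands should only ever be run on a test server:

```sql
-- Cold-cache timing sketch (TEST SERVER ONLY - flushing caches
-- on production will hurt everyone's performance).
DBCC FREEPROCCACHE;        -- flush cached plans
DBCC DROPCLEANBUFFERS;     -- flush clean data pages from the buffer pool

SET STATISTICS IO ON;      -- logical/physical reads per statement
SET STATISTICS TIME ON;    -- CPU and elapsed time per statement
EXEC dbo.YourProcName;     -- placeholder procedure under test
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

Repeat the EXEC a few more times without flushing to get the warm-cache numbers as well; record both sets before and after the tuning changes.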
Post #1529478
Posted Sunday, January 12, 2014 5:34 PM
SSC Rookie


Group: General Forum Members
Last Login: Wednesday, August 20, 2014 11:17 AM
Points: 39, Visits: 93
Yes: I/O-, memory-, and CPU-intensive SPs. A Profiler trace identified the top 50 most frequently executed SPs with the longest durations, and I started to optimize them, which included not only T-SQL modifications here and there but also creating a few dozen indexes based on analysis of the query plans.

So what I need to do is show management that my optimization changes have actually made a difference, and I am asking whether there is anything good to rely on in addition to Profiler traces and their analysis (SQL 2008). It is my understanding that the info provided via, for example, the sys.dm_exec_query_stats DMV is not really reliable for this (being cumulative, dependent on restarts, etc.). So I was wondering if someone has used metrics other than Profiler traces to show "before" and "after" the rollout of optimizing scripts.
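One way to work around the cumulative-since-restart problem is a snapshot/delta capture: copy the DMV into a table before and after a measurement window and diff the two, so only activity inside the window counts. A sketch, with hypothetical staging table names:

```sql
-- Snapshot "before" a measurement window (table name is hypothetical).
SELECT GETDATE() AS captured_at,
       OBJECT_NAME(object_id, database_id) AS proc_name,
       execution_count,
       total_worker_time,
       total_logical_reads,
       total_elapsed_time
INTO   dbo.ProcStatsBefore
FROM   sys.dm_exec_procedure_stats;

-- ...let the workload run, then capture dbo.ProcStatsAfter the same way...

-- The delta is what happened inside the window, regardless of
-- how long the server has been up.
SELECT a.proc_name,
       a.execution_count    - b.execution_count    AS execs_in_window,
       a.total_worker_time  - b.total_worker_time  AS cpu_in_window,
       a.total_logical_reads - b.total_logical_reads AS reads_in_window
FROM   dbo.ProcStatsAfter  AS a
JOIN   dbo.ProcStatsBefore AS b ON b.proc_name = a.proc_name;
```

One caveat: a plan being evicted from the cache between snapshots would reset its counters, so a join like this is most trustworthy over short windows on a server with stable cached plans.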


Voldemar
likes to play chess
Post #1530164
Posted Monday, January 13, 2014 8:41 PM


SSC Journeyman


Group: General Forum Members
Last Login: Tuesday, August 26, 2014 12:15 PM
Points: 80, Visits: 252
Before and after estimated execution plans with the stats could be helpful for showing immediate benefits (Grant Fritchey has a kick *** book on execution plans: http://www.red-gate.com/community/books/sql-server-execution-plans-ed-2).

Overall impact on the system might be a little harder to present without adequate collection of performance data. How often have you been collecting performance data?


Why is it that people who can't take advice always insist on giving it? - James Bond, Casino Royale
Post #1530523
Posted Tuesday, January 14, 2014 8:13 AM
SSC Eights!


Group: General Forum Members
Last Login: Wednesday, August 27, 2014 9:54 AM
Points: 865, Visits: 2,381
If you really want to baseline things and isolate the changes made, ideally you might do something like record a few segments of your normal workload (hours' worth, including all aspects: input, output, change, daily activities, weekly activities, maintenance, etc.), and then replay them on a freshly rebooted, completely isolated physical instance that was (pre-reboot) restored to a point in time just before the segment was recorded.

Record the various counters (sys.dm_os_performance_counters, sys.dm_io_virtual_file_stats, perhaps even sys.dm_db_index_usage_stats, or a Perfmon trace to get Disk Avg Sec/Read and Disk Avg Sec/Write) over a few restore, reboot, replay cycles. Make your changes, then do another few cycles, and look at the overall changes.
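Sampling two of those counter sources could look roughly like this; run it once per restore/replay cycle and store the results for comparison (the Buffer Manager counters shown are just examples of ones worth tracking):

```sql
-- Per-file I/O stats: read/write counts and stall times per database file.
SELECT DB_NAME(vfs.database_id) AS db,
       vfs.file_id,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;  -- NULL, NULL = all DBs/files

-- A couple of example Buffer Manager counters from the perf-counter DMV.
SELECT counter_name, cntr_value
FROM   sys.dm_os_performance_counters
WHERE  object_name LIKE '%Buffer Manager%'
  AND  counter_name IN ('Page life expectancy', 'Buffer cache hit ratio');
```

Note that these counters are also cumulative, so the same before/after snapshot-and-diff discipline applies across each replay cycle.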



Post #1530707