SQLServerCentral is supported by Red Gate Software Ltd.
 

(Un)Common Speed Phreakery
Posted Sunday, August 08, 2010 9:36 AM


SSCrazy


Group: General Forum Members
Last Login: 2 days ago @ 3:07 PM
Points: 2,382, Visits: 3,369
Jeff Moden (8/7/2010)
If you look at the masters of the trade such as Peter Larrson...
Hey!
I got that "Irish" or "Scottish" pronunciation again

//Peter Larsson (Peter, Son of Lars)



N 56°04'39.16"
E 12°55'05.25"
Post #965657
Posted Sunday, August 08, 2010 10:34 AM


Mr or Mrs. 500


Group: General Forum Members
Last Login: Monday, April 07, 2014 6:42 AM
Points: 561, Visits: 2,414
Phil, can you elaborate on what these differences are?

It really depends on the role of the DBA in the team, I suppose. What I meant by this is that a DBA who is supporting an existing system, or advising a development team, has generally got to be more constrained about changing the algorithm of the code. Whereas the DBA will get out the profiler, check the indexes, look at the statistics and look hard at the query plan in order to gain performance, the developer, assuming he 'owns' the code that is giving problems, will probably prefer to take a more fundamental look at the algorithms used and heave out code in a more draconian way in order to get performance and a scalable solution. Both approaches are necessary, of course, and require different skills; I'm just saying that the roles require a slightly different approach when faced with a process that is taking too long.



Best wishes,

Phil Factor
Simple Talk
Post #965664
Posted Sunday, August 08, 2010 10:44 AM
Forum Newbie


Group: General Forum Members
Last Login: Sunday, January 09, 2011 11:09 AM
Points: 8, Visits: 44
swjohnson (8/8/2010)
we have a set of guidelines for duration, frequency of being called and number of records returned. Once the query or sp comes within those guidelines, then we go into code review and unit testing. While not perfect


Maybe not perfect, but far-and-away the best real-world answer so far, IMO.
Post #965666
Posted Sunday, August 08, 2010 11:30 AM


SSCrazy


Group: General Forum Members
Last Login: 2 days ago @ 3:07 PM
Points: 2,382, Visits: 3,369
I always get suspicious when there are guidelines and, once they are met, the developer is supposed to be happy and deliver.
I have come across several customers with this approach and, let's put it this way: I wouldn't have been called in if things were going well.

Case 1
One client had set the guidelines according to feeling, and that didn't work when the application went live and there were 10,000 customers instead of the 100 in the test and development area. Their design goal was 1 million customers. For the 10,000 customers, a simple "last known address" report took 3 minutes to deliver. For 1 million customers it would have taken 7 hours. My rewrite brought the time down to 80 ms for 10,000 customers and 300 ms for 1 million customers. The thing to learn here is NOT to do things by "feeling". The guidelines were OK on paper, but in reality they were not.
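Peter doesn't show his rewrite, but a "last known address" report is the classic shape that the windowed functions he mentions in Case 3 were made for: one pass with ROW_NUMBER instead of a correlated subquery per customer. The table and column names below are hypothetical, purely to sketch the pattern:

```sql
-- Hypothetical schema: dbo.Customers(CustomerID),
-- dbo.Addresses(CustomerID, Address, ValidFrom)
WITH Ranked AS (
    SELECT  a.CustomerID,
            a.Address,
            ROW_NUMBER() OVER (PARTITION BY a.CustomerID
                               ORDER BY a.ValidFrom DESC) AS rn
    FROM    dbo.Addresses AS a
)
SELECT  c.CustomerID,
        r.Address            -- the most recent address per customer
FROM    dbo.Customers AS c
JOIN    Ranked        AS r ON r.CustomerID = c.CustomerID
WHERE   r.rn = 1;
```

Each customer's address history is read once and ranked, so the cost grows roughly linearly with the data instead of exploding per row.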

Case 2
This client designed everything from the end-user perspective. Things were speedy, fast and within guidelines until there were 6 simultaneous users online in production. No one had considered testing the application with more than 3 concurrent users. They had about 8,000 deadlocks per day when they called me in. After some rewriting and minor design changes, they haven't had a single deadlock for 3 years, and they now have 400 simultaneous users. The thing to learn here is to test for concurrency and to add that to the guidelines.
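Peter doesn't say which changes he made, but two common deadlock mitigations, offered here only as illustrations (the database name is invented), look like this:

```sql
-- One common mitigation: row versioning, so readers no longer
-- block writers under the default READ COMMITTED level.
ALTER DATABASE MyAppDb
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;

-- Another: make every procedure touch shared tables in the same
-- fixed order (e.g. always Customers before Orders), so two
-- sessions can never each hold a lock the other is waiting for.
```

Neither is a substitute for the concurrency testing Peter recommends; they are simply the levers most often pulled once deadlocks appear.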

Case 3
This is the most disturbing case. They had a 30-year-old COBOL application that had been rewritten in T-SQL. They had some minor issues with performance, so I was called in and rewrote a couple of queries so that they met their guidelines (set by the two senior developers, each with 20+ years of experience).
I wrote some new guidelines for them because I knew where they were heading. Of course they didn't listen, since I had fixed their current problems. 6 months later they called again, in a panic, because one of their customers had threatened to sue them for breach of SLA. What had happened was that the client moved all databases to a virtual environment without knowing the impact. They would have known, had they read my new guidelines.
A simple search that took 45 minutes before VMware took 10 hours after VMware.
I did a huge job of rewriting queries, and afterwards the query took 3 minutes in VMware. The thing to learn here is that guidelines need to be revisited and updated often. Not only due to hardware changes, but also due to new additions to the T-SQL language (for SQL Server 2005 it was the windowed functions that delivered most of the improvement).



N 56°04'39.16"
E 12°55'05.25"
Post #965670
Posted Sunday, August 08, 2010 11:49 AM
Valued Member


Group: General Forum Members
Last Login: 2 days ago @ 8:08 PM
Points: 53, Visits: 278
The spectrum of development effort ranges from no attention to performance at all to the extreme Speed Phreak level. "Good enough" is almost never at either end of this spectrum. I think the minimum level of "good enough" is to follow best practices in schema design (3NF, constraints) and query tuning (sargable expressions, common-sense indexing) and to perform at least a cursory review of execution plans for inefficiencies. I believe this minimal level is "good enough" as long as performance SLAs can be met without beefing up hardware.
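As an illustration of the sargable-expression point (table and column names here are made up): wrapping the column in a function hides it from the index, while an equivalent range predicate on the bare column lets the optimizer seek.

```sql
-- Non-sargable: the function on the column defeats an index seek,
-- forcing a scan of every row in dbo.Orders.
SELECT OrderID
FROM   dbo.Orders
WHERE  YEAR(OrderDate) = 2010;

-- Sargable rewrite: same result, but a range predicate on the
-- bare column can use an index on OrderDate directly.
SELECT OrderID
FROM   dbo.Orders
WHERE  OrderDate >= '20100101'
  AND  OrderDate <  '20110101';
```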

I should add that performance SLAs alone aren't enough to determine "good enough". If my mission-critical web site crashes due to slow database response under a higher-than-expected load, then "good enough" wasn't. The "good enough" bar in development and/or hardware needs to allow for performance headroom. How much headroom depends largely on the criticality of the application.

Additionally, raising the level of "good enough" will allow other applications to coexist on the same server without beefing up hardware or revisiting code. The cost of rework after initial development is always much higher.
Post #965673
Posted Sunday, August 08, 2010 1:48 PM
SSC Veteran


Group: General Forum Members
Last Login: Monday, January 06, 2014 6:58 PM
Points: 219, Visits: 823
From the article:

Are we in danger of wasting time on optimising code when it is entirely unnecessary?


I completely agree, "premature optimization is the root of all evil" (Donald Knuth), and much time is wasted on unnecessary optimization.

Typically, simple techniques such as index covering, splitting complex queries into simpler ones, and doing some of the work in C#, deliver acceptable performance. In other words, usually 20% of the effort gives you 80% of the potential results. Whether it is worth investing more to get more performance, with quickly diminishing returns on your investment, is typically a business decision rather than a technical one.
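"Index covering" means building an index that carries every column a query touches, so the lookup back to the base table disappears from the plan. A sketch, with hypothetical table and column names:

```sql
-- Hypothetical query to be covered:
--   SELECT CustomerID, OrderDate, TotalDue
--   FROM   dbo.Orders
--   WHERE  CustomerID = @CustomerID;

-- Covering index: CustomerID is the seek key, and the INCLUDE
-- columns ride along at the leaf level, so the query is answered
-- entirely from the index (no key lookups).
CREATE INDEX IX_Orders_Customer
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, TotalDue);
```

The trade-off is write cost and storage: every INCLUDE column must be maintained on each insert and update, which is why it is a tuning technique rather than a default.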

Putting on my business hat, when we are deciding whether to optimize an OKish query further, we need to consider this:

1. it takes more time to understand more involved/more complex solutions, which ups my maintenance costs. Unfortunately, better tuned queries tend to be more complex.

2. more complex solutions break more frequently, which also ups my maintenance costs.

3. it takes a very smart person to utilize advanced query tuning techniques, and smart people are always in very short supply.

4. In the long run it may be cheaper to upgrade hardware. If the requirements change and we change the query to accommodate that, it renders all the effort spent on tuning useless. On the other hand, hardware stays useful after the requirements have changed.
Post #965689
Posted Sunday, August 08, 2010 3:57 PM


SSC-Dedicated


Group: General Forum Members
Last Login: Today @ 9:48 AM
Points: 35,950, Visits: 30,232
Alexander Kuznetsov (8/8/2010)
From the article:

Are we in danger of wasting time on optimising code when it is entirely unnecessary?


I completely agree, "premature optimization is the root of all evil" (Donald Knuth), and much time is wasted on unnecessary optimization.



I'm going to give you and Knuth the benefit of the doubt and say that your idea of "optimising" code is probably a whole lot different from mine, especially given what you say in the next paragraph. I think that's why there's such a holy war about optimization... lots of folks have the same idea but just don't know it.

Typically, simple techniques such as index covering, splitting complex queries into simpler ones, and doing some of the work in C#, deliver acceptable performance. In other words, usually 20% of the effort gives you 80% of the potential results. Whether it is worth investing more to get more performance, with quickly diminishing returns on your investment, is typically a business decision rather than a technical one.


Putting on my business hat, when we are deciding whether to optimize an OKish query further, we need to consider this:

1. it takes more time to understand more involved/more complex solutions, which ups my maintenance costs. Unfortunately, better tuned queries tend to be more complex.


Because of the same methods you just spoke of in your previous paragraph, I find this to patently NOT be true. Maybe it's a definition thing again... "better tuned queries" don't tend to be more complex. I find that the worst performing stored procedures (for example) typically use a single complex query with a large number of joins. Much faster queries have smaller simpler-to-understand queries that work together to achieve the same result.
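Jeff's "smaller simpler-to-understand queries that work together" pattern is often called divide and conquer: materialize an intermediate result in a temp table, then join the now-small set to the rest. A sketch under invented names, not Jeff's actual code:

```sql
-- Step 1: reduce a big table to one row per customer.
SELECT  CustomerID,
        MAX(OrderDate) AS LastOrder
INTO    #Recent
FROM    dbo.Orders
GROUP BY CustomerID;

-- Step 2: join the small staged set to the wider tables.
-- Each step stays simple enough to read and to plan well.
SELECT  c.CustomerName,
        r.LastOrder,
        s.Region
FROM    #Recent           AS r
JOIN    dbo.Customers     AS c ON c.CustomerID = r.CustomerID
JOIN    dbo.SalesRegions  AS s ON s.RegionID   = c.RegionID;
```

Beyond readability, the optimizer gets accurate row counts for the temp table, which often yields a better plan than one monolithic many-table join.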

2. more complex solutions break more frequently, which also ups my maintenance costs.


Yes they do... but that has nothing to do with "optimization". Again, maybe we're disagreeing on what each of us calls "optimization".

3. it takes a very smart person to utilize advanced query tuning techniques, and smart people are always in very short supply.


I guess now is as good a time as any... what do YOU consider to be "advanced query tuning techniques"?

4. In the long run it may be cheaper to upgrade hardware. If the requirements change and we change the query to accommodate that, it renders all the effort spent on tuning useless. On the other hand, hardware stays useful after the requirements have changed.


Maybe, but a lot of folks have been disappointed by such purchases in the past. One of the companies I worked for thought that would be the magic bullet for their performance problems. They upgraded from a 4-CPU 1.8GHz box using SCSI drives, 4GB of RAM, and SS Standard Edition to a fire-breathing 8-CPU ~3GHz box using an EMC SAN, 16GB of RAM, and SS Enterprise Edition. Most of their 4- and 8-hour jobs ran in 1 to 4 hours for about a month... then it went right back to 4 to 8 hours. There were actually a couple of other jobs that went from bad to worse for reasons they never took the time to figure out.


--Jeff Moden
"RBAR" is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

"Change is inevitable. Change for the better is not." -- 04 August 2013
(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #965696
Posted Sunday, August 08, 2010 7:25 PM


SSC-Insane


Group: General Forum Members
Last Login: Today @ 7:26 AM
Points: 20,455, Visits: 14,067
Nice discussion already. I tend to lean towards the notion that queries should be optimized as well as possible prior to being rolled out. If there is a query in the system already that is causing performance problems, then it needs to be optimized. If there are several queries that "look" like they need to be optimized, they can be slated for some other time. I will usually target known problems, and code that is going out to production, before looking at other pieces of code where tuning would give only minimal improvement.

The exception is when there is a slowness problem being reported with certain functionality. Then as much of the code as possible that is related to this function needs to be evaluated. If you already have it open and it is under scrutiny - it is possible that it is a good candidate for a rewrite or optimization.




Jason AKA CirqueDeSQLeil
I have given a name to my pain...
MCM SQL Server


SQL RNNR

Posting Performance Based Questions - Gail Shaw
Posting Data Etiquette - Jeff Moden
Hidden RBAR - Jeff Moden
VLFs and the Tran Log - Kimberly Tripp
Post #965705
Posted Sunday, August 08, 2010 8:17 PM
SSC Veteran


Group: General Forum Members
Last Login: Monday, January 06, 2014 6:58 PM
Points: 219, Visits: 823
Phil Factor (8/8/2010)
Phil, can you elaborate on what these differences are?

It really depends on the role of the DBA in the team, I suppose. What I meant by this is that a DBA who is supporting an existing system, or advising a development team, has generally got to be more constrained about changing the algorithm of the code. Whereas the DBA will get out the profiler, check the indexes, look at the statistics and look hard at the query plan in order to gain performance, the developer, assuming he 'owns' the code that is giving problems, will probably prefer to take a more fundamental look at the algorithms used and heave out code in a more draconian way in order to get performance and a scalable solution. Both approaches are necessary, of course, and require different skills; I'm just saying that the roles require a slightly different approach when faced with a process that is taking too long.


Yeah, it looks quite similar from my perspective. Thanks!
Post #965708
Posted Sunday, August 08, 2010 8:21 PM
SSC Veteran


Group: General Forum Members
Last Login: Monday, January 06, 2014 6:58 PM
Points: 219, Visits: 823
Peter Headland (8/8/2010)
swjohnson (8/8/2010)
we have a set of guidelines for duration, frequency of being called and number of records returned. Once the query or sp comes within those guidelines, then we go into code review and unit testing. While not perfect


Maybe not perfect, but far-and-away the best real-world answer so far, IMO.


I would do unit testing before any tuning. Otherwise we may be wasting our time optimizing the wrong query, and then we'll have to fix it and likely re-tune it.
Post #965709