SQLServerCentral is supported by Red Gate Software Ltd.
 
Theoretical Performance Issue Handling
Posted Tuesday, August 25, 2009 8:24 AM
SSC Veteran

wbrianwhite (8/25/2009)
Look at the execution plan, and think about what you are typing. Complex queries benefit from compiling and recompiling a stored procedure, because getting an overview of a complex query in one piece is all but impossible. Break it down into separate parts and review each part's impact on the complete picture.


No offense, but I disagree. One large query that gets all your data at once is likely to be significantly faster than separate stored procedures that grab bits and pieces. The fundamental nature of a SQL query is that you are saying "these are the results I want", and SQL Server then figures out the best way to give them to you. When you tell it 5 different things you don't let it optimize for what you really want.
Depends on what you are optimizing. If you are optimizing dev time, then yes, small building-block procs are faster. If you are optimizing run time (like the theoretical example) then it is worth spending more dev time to get a faster-running query.


None taken.
However, I wasn't talking about breaking up one SP into many. The SPs I was talking about consist of a lot of separate steps of data processing, and analysing those steps separately is the only way to go. Writing simple selects should not be a problem if you call yourself a developer; I can write the standard selects blindfolded with the 'best' execution plan. It is the complex procedures that cause the problems I am talking about.
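The two styles being compared can be sketched side by side; a minimal sketch, where the table and column names (`Orders`, `OrderLines`, etc.) are hypothetical, not taken from the thread:

```sql
-- Style 1: one set-based query; the optimizer sees the whole problem at once.
SELECT o.CustomerID,
       SUM(ol.Quantity * ol.UnitPrice) AS Total
FROM   dbo.Orders     AS o
JOIN   dbo.OrderLines AS ol ON ol.OrderID = o.OrderID
WHERE  o.OrderDate >= '2009-01-01'
GROUP BY o.CustomerID;

-- Style 2: separate, individually analysable steps with an intermediate
-- temp table, as described for the multi-step SPs above.
SELECT o.OrderID, o.CustomerID
INTO   #RecentOrders
FROM   dbo.Orders AS o
WHERE  o.OrderDate >= '2009-01-01';

SELECT r.CustomerID,
       SUM(ol.Quantity * ol.UnitPrice) AS Total
FROM   #RecentOrders  AS r
JOIN   dbo.OrderLines AS ol ON ol.OrderID = r.OrderID
GROUP BY r.CustomerID;
```

Each step in style 2 produces its own simple plan that can be reviewed in isolation, typically at the cost of extra I/O for materializing the intermediate result.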



Post #776729
Posted Wednesday, August 26, 2009 1:40 PM
Grasshopper

R. C. van Dijk (8/25/2009)
wbrianwhite (8/25/2009)


No offense, but I disagree. One large query that gets all your data at once is likely to be significantly faster than separate stored procedures that grab bits and pieces. The fundamental nature of a SQL query is that you are saying "these are the results I want", and SQL Server then figures out the best way to give them to you. When you tell it 5 different things you don't let it optimize for what you really want.
Depends on what you are optimizing. If you are optimizing dev time, then yes, small building-block procs are faster. If you are optimizing run time (like the theoretical example) then it is worth spending more dev time to get a faster-running query.


None taken.
However, I wasn't talking about breaking up one SP into many. The SPs I was talking about consist of a lot of separate steps of data processing, and analysing those steps separately is the only way to go. Writing simple selects should not be a problem if you call yourself a developer; I can write the standard selects blindfolded with the 'best' execution plan. It is the complex procedures that cause the problems I am talking about.


Breaking a complex query into steps with intermediate temp tables usually results in more IO. I think the point is that very complex selects can cause unpredictable (or nearly impossible to predict) performance problems, and the problems can appear at seemingly random times. One reason for the randomness is simply that an execution plan created for one set of inputs can be disastrous for a different set of inputs (it's frustrating to see a DBA "solve" one of these problems by recompiling a procedure); conversely, an exec plan may be good until some event forces the system to create a new one, and suddenly performance goes to heck. Other problems can be even more unpredictable, such as the number of processors available at the time of execution, or how many users are doing which things.

The more complex the query, the more possible exec plans; complex procs can be hard to analyze because there is so much noise to filter out to find the problem, but a complex query can be hard simply because the query and its inputs are complex. Sometimes the best solution for one set of inputs is theoretically the worst solution for a different set, and you have to recognize that the best overall solution degrades performance under some circumstances.
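The "plan good for one set of inputs, disastrous for another" scenario described above is the classic parameter-sniffing problem. A minimal sketch, with a hypothetical procedure and table:

```sql
CREATE PROCEDURE dbo.GetOrdersByRegion
    @Region varchar(20)
AS
BEGIN
    -- The plan compiled for the FIRST @Region value seen is cached and
    -- reused for every later call. A plan sniffed for a rare region
    -- (index seek + lookup) can be disastrous for a region covering half
    -- the table, which would be better served by a scan.
    SELECT OrderID, OrderDate, Amount
    FROM   dbo.Orders
    WHERE  Region = @Region;

    -- One mitigation: pay a compile on every call so each input set gets
    -- its own plan:
    --   ... WHERE Region = @Region OPTION (RECOMPILE);
END;
```

This also explains the "DBA recompiles the proc and the problem goes away" observation: recompiling merely replaces the cached plan with one sniffed for the next caller's parameters.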



Post #777814
Posted Wednesday, August 26, 2009 1:54 PM
SSC Veteran

I totally agree. The complex queries I was talking about perform a lot of different actions; complex business logic is implemented in SPs. These kinds of statements (separate steps/parts of the SP) have inputs whose domains are well defined, and the individual parts of the resulting execution plan are stable. That is, until the described scenario happens. That will throw everything off, and a rethink of the rules, design, etc. is in order.
The fact that the inputs can mess up the execution plan means that the domains of the input are not well defined.
If defining the domains is impossible, you are writing a system in which (parts of) the data cannot be defined up front, and you are probably better off rethinking the use of a relational database system (and asking for a raise, because writing this is hell).
In a stable situation, the resulting plan should always be predictable and stable. If that were not the case, optimising would be useless.
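When the input domains really are well defined, the plan can be made predictable regardless of which parameter value happens to arrive first, by compiling for a stated representative value. A hedged sketch using the `OPTIMIZE FOR` query hint (table and values are hypothetical):

```sql
DECLARE @Region varchar(20) = 'East';

SELECT OrderID, Amount
FROM   dbo.Orders
WHERE  Region = @Region
-- Compile the plan as if @Region were a typical value from the defined
-- domain, so the cached plan is stable and does not depend on the first
-- caller. OPTIMIZE FOR UNKNOWN (average statistics) is an alternative.
OPTION (OPTIMIZE FOR (@Region = 'West'));
```

This trades peak performance for atypical inputs against the plan stability the post argues for.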



Post #777824
Posted Wednesday, August 26, 2009 3:45 PM
Grasshopper

R. C. van Dijk (8/26/2009)
I totally agree. The complex queries I was talking about perform a lot of different actions; complex business logic is implemented in SPs. These kinds of statements (separate steps/parts of the SP) have inputs whose domains are well defined, and the individual parts of the resulting execution plan are stable. That is, until the described scenario happens. That will throw everything off, and a rethink of the rules, design, etc. is in order.
The fact that the inputs can mess up the execution plan means that the domains of the input are not well defined.
If defining the domains is impossible, you are writing a system in which (parts of) the data cannot be defined up front, and you are probably better off rethinking the use of a relational database system (and asking for a raise, because writing this is hell).
In a stable situation, the resulting plan should always be predictable and stable. If that were not the case, optimising would be useless.


This is actually incorrect. If you are dealing with very large databases with many, many users, there can be a large variance in the size of the pie that users are interested in. I have seen situations where, for example, a clustered index scan performed much better for a query returning a very large number of rows, while a nonclustered index lookup performed better for a user concerned with a much smaller set of data. It has nothing to do with the definition of domains, which are well understood.

It seemed to me, by your own admission, that the exec plans were not stable, because they became suboptimal as the amount of data increased. This is not really that different from what I was talking about, except that both conditions exist simultaneously: the exec plan to deal with a small amount of data is not the same as the one to deal with a large amount of data from the same table. Let's define inputs as parameters supplied to sp_executesql, for example, because that's a clear situation where inputs can change without causing a new execution plan to be generated.

So optimizing can be useless in the conventional sense, because there is no optimal. Well, there is an optimal, but it cannot be perfect.
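The sp_executesql behaviour described above can be shown concretely; a sketch, assuming a hypothetical `dbo.Orders` table where one customer name matches a handful of rows and another matches millions:

```sql
-- First call compiles and caches a plan sniffed for N'SmallCustomer'
-- (likely a nonclustered index seek plus key lookups).
EXEC sp_executesql
     N'SELECT OrderID, Amount FROM dbo.Orders WHERE CustomerName = @Name',
     N'@Name nvarchar(100)',
     @Name = N'SmallCustomer';

-- Second call reuses that cached plan even though N'HugeCustomer' matches
-- a huge share of the table, where a clustered index scan would win.
-- The inputs changed; the plan did not.
EXEC sp_executesql
     N'SELECT OrderID, Amount FROM dbo.Orders WHERE CustomerName = @Name',
     N'@Name nvarchar(100)',
     @Name = N'HugeCustomer';
```

Because parameterized statements share one cached plan across all parameter values, no single plan can be optimal for both extremes, which is the "there is an optimal, but it cannot be perfect" point.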



Post #777912
Posted Thursday, October 29, 2009 4:50 AM


SSCertifiable

R. C. van Dijk (8/24/2009)
The approach that the customer is always right is totally wrong. Making them think they are right whilst feeding them your own ideas is the way to go. Programmers (together with consultants) know best. Make them think your ideas are their own and, yes, make them sign off on them!

I thought the article was very good until I came across
As a project manager or even a developer, you know better than the end user. They don't think so, but most of the time you have a better overview of the system and the client's work to be done.

and then I began to wonder. Then the above remark compounded things.

Most DBAs, project managers, programmers, and consultants can't be polymaths (except maybe for the odd genius here and there). So unless the applications happen to be in only one area and you've learnt that area thoroughly, you probably don't know what's required better than the end user - you probably know and understand a lot less than he does. Of course, if the person setting the requirement is not an end user but a "management consultant" or an "efficiency consultant", then you possibly do know far more than him/her, because he/she is pontificating about a field (the application domain) that is not his/her claimed area of expertise (management, or efficiency). In the IT industry such consultants have a terrible reputation - and you are advocating precisely the sort of arrogant and incompetent attitude that is largely responsible for that reputation.

Yes, sometimes you can see that a stated requirement could have dire consequences - and then you have to work with the end user to find out whether it's a real requirement (in which case you have to give him the bad news and see where things go from there) or a badly expressed requirement. But when your real end user (a lawyer, perhaps, or an organic chemist) says something like "I have to demonstrate that this set of information was shown simultaneously on a single screen for verification in order to comply with Amcedonia's such and such regulation" or "I need to see these characteristics of the molecule together to determine that it will be safe in contact with ethylene", you had better not pretend that you know better.



Tom
Post #810629