
Theoretical Performance Issue Handling


Theoretical performance issue handling, or handling theoretical performance issues

Since the beginning of my career, I've been exposed to databases in every job and on every project. I've built simple data entry systems, 3-tier apps, complex data handling systems and everything in between, and found that even in the simplest of systems, performance is a big issue. Having the user wait more than 4 seconds for an action to complete is a big no-no.

I've seen database gurus come and go, seen experts gnash their teeth trying to resolve performance issues, and found that most of the time the approach is completely wrong. I'm not going to start a complete article series describing performance enhancements, best practices or handy pieces of code; instead I'll take a more theoretical approach to any and all issues. Hence the title: theoretical performance issue handling, or handling theoretical performance issues.

I'll try to explain the title, since it partly describes where a lot of today's programmers go wrong. You're probably asking: how can you tackle a performance issue theoretically? A good question, I would say in return. Let me describe a common and widely known scenario.

A client reports a performance drag in a screen that has been in your application for years and has never changed since its initial implementation. The quick list below describes the reaction and the actions taken by the techie assigned to the issue:

  • First off, a lot of swearing. How could it be? It has worked flawlessly for years
  • Who changed the freaking database schema or keys, or screwed with the code?
  • Surprise that none of the above happened in the area affected by the issue
  • The code is analyzed and the called stored procedures are listed in Notepad (or UltraEdit for the more advanced programmers)
  • Debug lines spewing out execution times are added to the code to find the offending database call (a timing sketch follows this list)
  • No performance issues are noticed; the call is closed as a one-off and not reproducible
  • The team lead reopens the issue and lets the programmer know it happens more than once and the client is getting ticked off
  • To get more accurate execution times, the client's database is retrieved if possible
  • The application is rerun and, lo and behold, the issue is reproducible
  • The programmer dives into the offending procedures and starts shifting joins, creating temp tables, changing inner into left joins and spending lots of time running the procedure over and over again
  • Changing the last sub-select to a temp table seems to do the job (a sketch of this kind of rewrite also follows the list)
  • Change is tested and delivered
  • Client is happy
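
As a side note on the timing step in the list above: rather than adding debug lines to application code, SQL Server can report execution times itself. A minimal sketch, assuming a SQL Server environment; the procedure name and parameter are hypothetical:

    -- Let SQL Server report timing instead of debug lines in the app.
    -- dbo.usp_GetOrderList and @CustomerId are hypothetical examples.
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    EXEC dbo.usp_GetOrderList @CustomerId = 42;

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;
    -- The Messages output now shows CPU time, elapsed time and logical
    -- reads for the call, pinpointing the offending database work.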
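
Likewise, here is a sketch of the kind of sub-select-to-temp-table rewrite the list ends with; all table and column names are invented for illustration:

    -- Before: the aggregate sub-select is evaluated as part of one big query.
    SELECT o.OrderId, o.OrderDate, t.TotalAmount
    FROM   dbo.Orders AS o
    JOIN  (SELECT OrderId, SUM(Amount) AS TotalAmount
           FROM   dbo.OrderLines
           GROUP BY OrderId) AS t
      ON   t.OrderId = o.OrderId;

    -- After: materialize the aggregate once in a temp table, then join to it.
    SELECT OrderId, SUM(Amount) AS TotalAmount
    INTO   #OrderTotals
    FROM   dbo.OrderLines
    GROUP BY OrderId;

    SELECT o.OrderId, o.OrderDate, t.TotalAmount
    FROM   dbo.Orders AS o
    JOIN   #OrderTotals AS t ON t.OrderId = o.OrderId;

    DROP TABLE #OrderTotals;

Whether this actually helps depends on the plan the optimizer picks; it is the kind of trial-and-error change the scenario describes, not a guaranteed fix.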

Now, you probably say: what's wrong with that? The client is happy, and apart from the arrogant attitude of the programmer closing the issue as not reproducible, it all turned out well. Well, you would be correct in stating this, were it not for the client calling back after 3 days saying the issue has come back to haunt them.

The question to ask here is: what went wrong? Why was a second call ever made? This is where the story of Theoretical Performance Issue Handling comes into play.

Theoretical Performance Issue Handling

Handing the problem over to a techie right away might not be the best approach. I'll explain the theoretical way.
Why did the issue come up? In systems using a database there are always performance issues, but these get solved over a period of a few months after the system is implemented. So what happened, when no change that could cause the drag has been made to the system, and no update has been delivered?

This is the fundamental question to ask. Database server changes are ruled out here, as I assume that first-line support and the IT department at the client's location did the initial analysis of the problem and ruled these out. Now the theoretical part comes into play. What can change in a live system? There is actually only one option: data volume.

It might be that a lot of data got entered, e.g. due to the beginning of a new fiscal year, or a threshold got reached, a turnaround point. Both events point to the same result: the statistics force a new execution plan. This thought hardly ever enters anyone's mind when a problem of this kind gets reported. A solution that was perfectly fine when the system got designed might not be the way to go after a period of use. Of course this is due to a lack of design at the beginning of the project, but this is the normal way systems get designed and built.
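
To make that concrete, here is a minimal sketch of how stale statistics could be confirmed and refreshed, assuming SQL Server 2005 or later; dbo.Orders is a hypothetical table name:

    -- Check when the statistics on a suspect table were last updated.
    SELECT name AS stats_name,
           STATS_DATE(object_id, stats_id) AS last_updated
    FROM   sys.stats
    WHERE  object_id = OBJECT_ID('dbo.Orders');

    -- Refresh the statistics so the optimizer sees the new data volume,
    -- and mark dependent plans for recompilation.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
    EXEC sp_recompile 'dbo.Orders';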

Now the fun starts. Why is there so much data going into or coming out of a procedure? Having more than 200 lines of data on a screen might seem a good idea, but is it? Is it needed? Does it offer the user what he/she needs? The way to deal with these questions is to step back and not look at the code, or even at the problem. Look at what the screen should do, why it should do that, and which data the user needs to do his/her work.
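
As a hedged sketch of what "only return what the screen needs" can look like, the procedure below caps and filters its result set; all object names are invented:

    -- Return only the rows the screen can actually show.
    -- Every name here is hypothetical.
    CREATE PROCEDURE dbo.usp_GetOpenInvoices
        @CustomerId INT,
        @MaxRows    INT = 50   -- one screen page, not the whole history
    AS
    BEGIN
        SET NOCOUNT ON;

        SELECT TOP (@MaxRows)
               InvoiceId, InvoiceDate, Amount
        FROM   dbo.Invoices
        WHERE  CustomerId = @CustomerId
          AND  Status = 'Open'      -- only what the user works with
        ORDER BY InvoiceDate DESC;
    END;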

When you take this step back, small blocks of isolated information form in your head; these blocks in turn form entities of data that have functional or technical links. This is actually all you need. These blocks and links become the tables, selects and joins in the stored procedure. In short, taking a step back and forming an idea of what the goal of the screen is, is the basis of your solution. Technical advances in the system might mean that there are shortcuts to what you want to accomplish, and that the original needs of the user have been made obsolete by the implementation of other functionality.

Now for the second part of the title, handling theoretical performance issues.

Handling theoretical performance issues

This actually came from a nice discussion I had not too long ago. When does a performance issue become a performance issue? When a developer feels it could be faster, when a DBA thinks the cost of a stored procedure is too high, or when the client complains? My opinion is that even if all of the above are fine, but the procedure eats up more resources (CPU, time or anything else) than functionally needed, there is a problem. This part actually precedes the previous part, because tackling this correctly will most of the time prevent performance issues later on.

Take a look at the needs of the client, and don't be afraid to be critical when looking at those needs. Is it really needed? As a project manager, or even a developer, you often know better than the end user. They don't think so, but most of the time you have a better overview of the system and of the work the client has to do. Only give the user the data he/she needs; don't try to be smarter than what's good for the system. Design and write stored procedures that are lean and mean. Don't pull excessive amounts of data; use all available filters, even if the queried table only contains a few rows. Don't always use indexes either; for small tables, table scans are faster and indexes just add maintenance overhead. Use common sense and think before you act. Do this, especially the thinking before you start typing, and the end result will be fast and productive.
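
A small sketch of the "use all available filters" advice: even if a lookup table holds only a handful of rows today, the screen already knows the filter value, so pass it down; the object names are invented for illustration:

    -- Pass every filter the screen already knows down to the procedure,
    -- so the query stays lean when the data volume grows.
    -- All object names are hypothetical.
    CREATE PROCEDURE dbo.usp_GetEmployeesByDepartment
        @DepartmentId INT,
        @ActiveOnly   BIT = 1
    AS
    BEGIN
        SET NOCOUNT ON;

        SELECT e.EmployeeId, e.FullName, d.DepartmentName
        FROM   dbo.Employees   AS e
        JOIN   dbo.Departments AS d ON d.DepartmentId = e.DepartmentId
        WHERE  e.DepartmentId = @DepartmentId
          AND  (@ActiveOnly = 0 OR e.IsActive = 1);
    END;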

To boil this second part down, think of this: a procedure eating up just a few percent more resources than needed is a time bomb. And these things always explode when you can least handle the extra workload, so taking the extra time up front will save you a lot of time in the end.

Conclusion

Keep away from your keyboard as long as possible. Design, write and solve everything in your head first. Only then start doing what you are good at.
