Static Code Analysis: a necessary irritation.

There is something spooky about an application that routinely criticises your code. It is bad enough for the blood-pressure when youngsters do it. Worse, an application somehow carries more authority than a human when it dismisses poor coding practices. Some managers seem to be in awe of static code analysis.

When the C language emerged, it was unusual in that it allowed some ridiculous errors, compiling such code without even a warning. You could, for example, read a variable before it had been set, mix incompatible variables in an expression, divide by zero, call functions with incorrect parameters, or assign values to variables that were outside their range. Most language compilers picked up this sort of error, but C had a separate program, called Lint, to do the checking. From this small beginning came the idea of static code analysis.
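
To make that concrete, here is a contrived C fragment of the sort an early C compiler would accept silently but lint would flag (a minimal sketch; the exact warnings depend on the lint implementation):

    #include <stdio.h>

    int main(void)
    {
        int count;                 /* declared but never initialised       */
        int total = count + 1;    /* read before set: lint warns here     */

        printf("%d\n", total / 0); /* constant division by zero            */
        printf("%d");              /* too few arguments for the format     */

        return 0;
    }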

There are some advantages to separating the checking of code from the compiling of it. You can, depending on the tool you use, go beyond the mere identification of coding errors to gauge the extent of technical debt, complexity, architecture, interface analyses, duplication, quality of design, extent of comments, and coding standards. You can run plagiarism checks, dead-code analysis and style checking, detect security violations, and spot the use of open-source code in contravention of its licence.
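
Dead-code analysis is a simple illustration of what a checker can see where a compiler traditionally stayed silent. A minimal, hypothetical sketch in C:

    /* classify() contains a statement that can never execute;
       dead-code analysis flags the final return. */
    int classify(int n)
    {
        if (n >= 0)
            return 1;
        else
            return -1;
        return 0;    /* unreachable */
    }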

It is rare to see extensive static code analysis being done methodically, though it features heavily in techniques for continuous delivery. It has some very obvious conveniences, since it can provide the big picture of the likely issues that could be faced during a deployment, particularly the legal, security, copyright and regulatory ones. For an application of any size, though, it doesn’t replace a manual audit of the code, so its main value is in alerting the necessary IT specialists to take a closer look.

There are dangers too. It is human nature to rely on metrics more than professional judgement. Where it is easy to get figures for a software metric, as with cyclomatic complexity, it is tempting for managers to rely too heavily on it as a measure of technical debt, and for developers to respond by evolving a coding style that gets the right scores. If, instead, you look at a broad range of ‘code smells’, you are much more likely to gain a realistic impression of code quality.
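
The arithmetic behind cyclomatic complexity is easy to show: for structured code it is simply the number of decision points plus one. A minimal sketch:

    /* Cyclomatic complexity = decision points + 1.
       Two ifs and one loop give three decisions, so the
       complexity of score() is 4. */
    int score(int a, int b)
    {
        int s = 0;
        if (a > 0)      /* decision 1 */
            s += a;
        if (b > 0)      /* decision 2 */
            s += b;
        while (s > 100) /* decision 3 */
            s -= 10;
        return s;
    }

A developer chasing the score could split this into four trivial functions, each with a complexity of one, without making the code any easier to understand, which is exactly the gaming described above.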

Tools for code analysis are, I believe, best used by developers before any deployment, as part of their normal working routine. It can, in fact, be difficult to spot, and deal with, structural problems in code without them. Sure, it is a good second line of defence to include analysis in the deployment process, but if problems first come to notice at that stage, it could cause delays.

There is little doubt that static code analysis can contribute to code quality and deliverability. As an aid to the developer, it seems increasingly essential, but can it ever deliver reliable metrics of code quality? One shudders at the potential misuse of quality metrics in the wrong hands. My hope is that it remains just that: an aid to human judgement and creativity.
