What's Your Code Quality?

  • I saw this Peter Coffee editorial on code quality, focused specifically on Java development managers. I think Coffee, and eWeek in general, have a bias against Microsoft, so I'm not surprised they're talking to the Java guys, but I don't think the results from .NET development managers would be much different.

    The editorial talks about most managers not measuring quality, or, if they do, not starting until the project is over half done. That's interesting, because in the times I've done software development we were almost always concerned about timelines, and things were usually graded in one of three ways: works, doesn't work, or needs more work. Most things passed through all three of those phases during a development project.

    But interestingly enough, I've never had quality measured as a DBA. All the T-SQL work either does what it is supposed to do or it doesn't. And if it doesn't, we work some more on it 🙂

    There's never been any measure of code quality, and I'm not sure how I'd go about doing it. It seems from the article (I didn't read all of the linked report) that they looked at bugs reported vs. lines of code. I'm not sure that's the best measurement, since I could write code that works but is very slow to execute, or that has hard-coded information that makes maintenance a nightmare.

    I'm not a software expert, especially with regard to quality. To me it either works as I expect it to, and well enough, or it doesn't. I use that as a thumbnail estimate: a particular item (stored procedure, function, etc.) either returns the results it should or it doesn't, and either runs in an acceptable time frame or it doesn't. Comparing a method call that calculates interest on a line to a stored procedure that produces a sales-by-month result is hard, and I'm not sure I could set up concrete ways to do it.

    But I know people are trying. Like the company that sponsored the survey. So I'm wondering, do any of you measure quality? Know of a good way to do it?

    Steve Jones

  • We use validation scripts that set up test data, run the methods, check the results, output the information to a table, and then query the results table. This is useful because it allows very quick retesting (simply rerun the script), and the test data can be set up to test the extremes (biggest expected value, biggest expected value + 1, etc.).
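
    A rough sketch of what one of those scripts looks like, with the procedure name and expected values as placeholders rather than our real code:

    -- Results table so the outcome can be queried after every rerun
    CREATE TABLE #TestResults (TestName varchar(100), Expected int, Actual int, Passed bit);

    DECLARE @Actual int;

    -- Typical case
    EXEC @Actual = dbo.GetOrderCount @CustomerID = 42;   -- placeholder procedure
    INSERT INTO #TestResults
    VALUES ('GetOrderCount - typical customer', 10, @Actual, CASE WHEN @Actual = 10 THEN 1 ELSE 0 END);

    -- Extreme case: an id just past the biggest expected value should return zero
    EXEC @Actual = dbo.GetOrderCount @CustomerID = 2147483647;
    INSERT INTO #TestResults
    VALUES ('GetOrderCount - id beyond range', 0, @Actual, CASE WHEN @Actual = 0 THEN 1 ELSE 0 END);

    -- Query the results table; rerunning the whole script repeats the test
    SELECT TestName, Expected, Actual, Passed FROM #TestResults WHERE Passed = 0;

    DROP TABLE #TestResults;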

    We use test-driven development for our VB coding, which has improved our code quality, and we try to do the same with T-SQL. It may seem long-winded at the time, but it saves a lot of effort later on, especially when you are trying to fix something four months down the line at 4 a.m.

    The only limitation we have found is our imagination. We can only test what we think of, so tests can be missed if you forget that a particular set of circumstances can arise.

  • Passing testing is not necessarily a reflection of good-quality code, though. So what exactly is meant by quality code? For example, code may pass all the tests, but then comes maintenance time, when the original developer has left...

    A company I worked for required that software go through a standards test to ensure that it conformed: e.g., only variables in a certain range were used for certain operations, return codes were standard, GUI elements conformed to house style, etc.

    Even then, there was always a developer's style to wade through during maintenance, but at least with standards in place you had a good chance of success.

    For me, if the code performs as expected (functionality and performance wise) and the maintenance of that code is not (too) problematic, then that's 'quality' enough for me.

  • Ummm, dude, writing TSQL (or PL/SQL or just plain SQL for that matter) is coding. Period. Same as writing Java, C#, C++, whatever.

    Also, there are four basic metrics of software development: 1) is it working by the agreed date; 2) how easy is it for someone else to take over; 3) how flexible is it when the inevitable new requirements come in; and 4) how scalable is it.

  • Quest Software offers a new tool, TOAD for SQL Server, which is still in its infancy but will soon offer exactly what you're looking for, because in the Oracle world TOAD already offers a utility called "Code Expert", which reviews SQL scripts and PL/SQL code and returns a CRUD matrix plus the following Software Engineering Institute (SEI) acknowledged metrics: the Halstead Complexity Measure, McCabe's Cyclomatic Complexity, and a Maintainability Index. You can read about these at the following URLs:

    http://www.sei.cmu.edu/str/descriptions/halstead.html

    http://www.sei.cmu.edu/str/descriptions/cyclomatic.html

    http://www.sei.cmu.edu/str/descriptions/mitmpm.html#78991

    Furthermore, TOAD offers the ability to indicate which lines of code pose challenges in the following five core coding areas: readability, maintainability, efficiency, program structure, and code correctness.

    Quest plans to add similar features to the SQL Server offering, so "Code Expert" should hopefully debut late this year.
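
    For anyone who doesn't want to chase those links, McCabe's measure is essentially one plus the number of independent decision points in a routine. Here is a contrived procedure purely to illustrate the counting (this is not output from the tool):

    -- Contrived example: cyclomatic complexity = decision points + 1
    CREATE PROCEDURE dbo.SampleComplexity @Amount money AS
    BEGIN
        IF @Amount IS NULL                            -- decision 1
            RETURN -1;

        IF @Amount < 0                                -- decision 2
            SET @Amount = 0;

        WHILE @Amount > 1000                          -- decision 3
            SET @Amount = @Amount / 2;

        SELECT CASE WHEN @Amount > 100                -- decision 4
                    THEN 'large' ELSE 'small' END AS SizeBucket;
    END;
    -- Four decision points, so McCabe's cyclomatic complexity is 5.

    The Halstead measure works differently (it counts operators and operands rather than branches), but the goal is the same: put a number on how hard the code is to follow and maintain.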

  • Defining code quality is never simple.

    I've worked for clients who follow strict guidelines for quality, implementing a series of required variable forms, structures of choice, etc. The end result is overly complex code because creativity is drained off by the requirements of maintaining what they call quality, and often the code is brittle as a result of constraints. Yet with weak development teams this works.

    I've also worked for clients who have one quality measure: does it work? This is usually combined with a management process that asks one question: was it produced on time? The end result is code that is hard to maintain but often does work to requirements. The only way I have ever seen this work effectively is with deeply talented development teams, where the artisans overcome the risks of creative development.

    Grasshopper posted some good metrics, but I would add one to that list and say that at the core all software development addresses the following quality concerns:

    1) Timeliness;

    2) Maintainability;

    3) Flexibility;

    4) Scalability; and

    5) Reliability.

    Anyone who has ever coded knows that the order of these concerns changes with the project, purpose, and client, but without considering at least these five core elements you frequently face project collapse at some point, either before delivery or after.

    A trend I've noticed is that many enterprises have started blaming software failures on quality issues when the reality is that the failures trace back to a failure of requirements analysis. That has made implementing quality harder than ever, because even with high production quality, the failure of the software is inevitable when the requirements process has failed.

    The biggest danger with quality, it seems, is that the definition is so variable...because the old rule applies: you can't measure what you can't reliably define.

  • My biggest bitch about quality measurement tracks Steve's initial observation that quality measurement most frequently begins waaay too late in the game. It's trivial for QA to "find" zillions of bugs in software based on requirements and documentation that are open to interpretation -- but nobody ever considers grading the BAs' requirements on the way in, or the quality of the QA defect reports, which can be downright non-specific and impossible to fathom.

    This lack of quality from the people (BAs, QAs and PMs) who want to measure developers' telepathic and clairvoyant abilities is particularly egregious when the people doing the initial measurement are the root cause: they write utter Special High Intensity Technology requirements that are too unclear to be meaningful and then they get to determine whether developers delivered to the acronym for their requirements or not.

    But I'm not bitter...

  • I sure can't tell you how to measure quality. We do try to add quality as a requirement in the code we write. We're following the test-driven development methodology for our T-SQL as well as the C# and VB. It doesn't provide a quality measurement per se, but it does add to the overall quality of the code. Further, we run the SQL Server Best Practices Analyzer, which catches a bunch of low-hanging fruit in terms of quality (schema ownership on all referenced objects, proper cursor declarations, use of reserved words, etc.). We also have a published document of T-SQL best practices that we review regularly within the DBA team and with the development teams. And we do regular code walk-throughs and reviews on T-SQL. All of this improves the quality of the T-SQL (as well as the knowledge and skill sets of the DBA team), but none of it is quantifiable in any way we've been able to come up with.
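
    To give one example of that low-hanging fruit, the schema ownership check flags queries written like the first statement below and wants them written like the second (the table name is made up):

    DECLARE @CustomerID int;
    SET @CustomerID = 42;

    -- Flagged: unqualified object name; resolution depends on the caller's default schema
    -- and can get in the way of plan reuse
    SELECT OrderID, OrderDate FROM Orders WHERE CustomerID = @CustomerID;

    -- Preferred: schema-qualified reference
    SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID;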

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • I am sure that there isn't much more that I can add to this that would be new and insightful, but since I am a software developer, I guess I will try.

    I have so far worked for three companies where my duties included developing software. Unlike the other developers who posted here, each place was small, and I was either "THE SOFTWARE TEAM" (i.e., the only developer) or part of a small group (no more than three, mostly two) of developers. I have found that although this approach (having really small development teams) puts more stress on the individual developer, there is a better chance that the code written will A) work and B) be consistent. Although I am currently working as part of an open source initiative with a large group of developers, I have historically been in small groups, and for me this seems to work better.

    In any case, code quality is subjective. It is not just the code; it is how the code looks, how well it's documented, how closely it conforms to the coding standards of the given language, etc. I am afraid that this will always be left up to a human being to interpret (i.e., no machine without a high level of intelligence could possibly judge software any more than a machine can judge music). There is one part that no one has yet brought up: code is often art. It is just as much art as a piece of writing, a painting, or a sculpture. As such, different people will like different code (that is why there are so many different operating systems and programming languages). What I see as quality code may be junk code to another developer, and vice versa.

    In any case, I think that we as developers have to make sure we keep code well documented, follow a coding standard (or at the very least be consistent in how we do things), and test the code to death (via a testing suite AND by using the product ourselves in the way the users will be using it). That way we can minimize the number of future bugs (it should be noted that even if a developer and a QA person sign off on a piece of code, it will likely still have bugs, since users will find new and inventive ways of breaking the software).

    As a last comment, I think that the two magazines Steve brought up are no more biased against Microsoft than SQLServerCentral.com is biased toward Microsoft. I don't see an awful lot of information on this site about Oracle, MySQL, Sybase, DB2, etc. Additionally, it seems that any time there is a comment about anything non-Microsoft on this site, it is usually negative. Just an observation.

    Aleksei


    A failure to plan on your part does not constitute an emergency on my part!

  • Metrics are valuable, but only if they are an insight, not the ultimate criterion. Placing too much emphasis on any metric will 1) limit creativity that was not anticipated by the metric, and 2) distort code writing to boost the metric score.

     

    ...

    -- FORTRAN manual for Xerox Computers --

  • Although I'm still new at SQL Server administration, I worked in a quality department at a global company for 10 years. What I've learned is this: quality is defined by the customer. I've read all of the posts, and the folks who responded hit the mark very well. Quality must be defined. It must be defined before work is started, and it must be part of the process from the beginning, not "measured in" at the end.

    You can think about a watch, a radio, a car, or anything else that is manufactured. When code is created (manufactured), there are understood elements that should follow an understood practice. Measurements must be taken to ensure those practices are followed.

    Your customer might be your boss or your client. The quality measurement may be the application: how well it functions, the speed with which processing is performed, and/or its appearance. The measurement may also include uptime of the server and/or availability.

    Quality must be understood and designed in. Someone mentioned the ability of the code to outlast the writer, and to be documented and understood so as to be supportable long into the future. These are all measurements of quality. You then build a process that ensures these objectives are understood, built in, and measured.

    Remember, Japan did not overtake the American automobile manufacturers because they could not define quality; they went to the customer, took surveys, and built their process to meet and exceed the expectations of the customer! Now cars are expected to last beyond 100,000 miles instead of 70,000!

  • Well, you can't control what you can't measure. And you can't measure software development, at least not with objective measures that are the normal basis for quality metrics. Therefore...

    Just some thoughts. Liberace said, "Without the business, there's no show." In other words, the real metric is customer satisfaction.

    Also, successful software is never finished. If a piece of software is successful, users will always think of new things that they want it to do. That's a good thing, not a bad thing.

  • Stephen Hirsch eruditely scribbled: "Also, successful software is never finished. If a piece of software is successful, users will always think of new things that they want it to do. That's a good thing, not a bad thing."

    Because Stephen is correct in principle, I will only quibble that a weak process that allows users to define those "new things" during the QA cycle for a release is A Bad Thing®. Weak PMs (and other development managers) all too often allow themselves to be browbeaten by executive management and/or QA into defining as defects what actually amount to new, originally unwritten requirements... I would put anything that was inadequately documented at the word "Go!" in this category.

  • How you define quality is important.  Others here have done a pretty good job of it.  I wanted to comment on what happens when it's not adequately defined.  Several years ago a supervisor came up with the idea of grading our performance based on the number of lines of code we (programmers) wrote.  That would be the most significant factor for scoring our annual performance review.  I told him that was fine with me and sent him a scrap of VB code.

    dim x as string

    x = "I"

    x = x & " "

    x = x & "g"

    ... x eventually equaled "I guess I will be getting a great performance review at the end of this year" or something to that effect.

    The point is that his only metric was useless by itself.  The same holds true for quality.  If you define quality incorrectly, you may force your staff to head down a path that actually leads to lower quality.

    r

  • Thank you, David, for your kind words... I guess that good or bad is like determining the direction the Earth spins (i.e., clockwise or counterclockwise): it depends on your perspective.

    Since feature creep always occurs, especially during user review, I think it's time to stop trying to build software according to the way the evil PMI thinks it should be built. If you follow their mindset, you'll create software that works according to spec, but that nobody uses, or that they use grudgingly. Users never know what they want until they have it...

    Of course, at some point, you need to shoot the programmer and formally release the software to production, but it should be understood that that's just a milestone, not the goal.

    Two tricks that I've learned over the years. One: design your system by creating prototypes using production data. Go backwards by creating the software first, then the design documents, then the functional specs for formal testing.

    The second goes to Robert's post: make sure your metaquality is good (that's such an ugly word, it's almost poetic!). In all seriousness, make sure you know what good and bad really mean before you start the journey.
