What's Your Code Quality?

  • Comments posted to this topic are about the item What's Your Code Quality?

  • Well, I personally use code smells to measure the quality of code. To me, one of the worst examples of bad code is when the same logic, or even the same piece of code, is present in several places.

    One of the worst things I've encountered was having the same logic in both C# and SQL, used in different circumstances. That meant that if you wanted to change something, you had to change it in at least two locations, and it was hard to ensure that all scenarios were covered.
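
    A minimal sketch of one way to tackle that kind of duplication, assuming the shared rule is something like an order-discount calculation (the object and column names below are invented for illustration): keep the rule in a single inline table-valued function, and have both the T-SQL code and the C# side call that function instead of re-implementing the formula in each place.

      -- Hypothetical sketch: a single home for an invented discount rule.
      CREATE FUNCTION dbo.fn_OrderDiscount (@OrderTotal decimal(18,2))
      RETURNS TABLE
      AS
      RETURN
      (
          SELECT CASE
                     WHEN @OrderTotal >= 1000 THEN @OrderTotal * 0.10
                     WHEN @OrderTotal >= 500  THEN @OrderTotal * 0.05
                     ELSE 0
                 END AS Discount
      );
      GO

      -- Usage from any other T-SQL; the C# code would query the same function
      -- rather than duplicating the CASE logic.
      SELECT d.Discount FROM dbo.fn_OrderDiscount(750.00) AS d;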

  • WTFs/minute sounds like a good measurement. Exceptional code can be measured in days between WTFs, really good code can be measured in WTFs/hour and really poor code in WTFs/second.

  • Dennis Wagner-347763 (2/21/2011)


    WTFs/minute sounds like a good measurement. Exceptional code can be measured in days between WTFs, really good code can be measured in WTFs/hour and really poor code in WTFs/second.

    That's essentially just hertz. In theory you could then have GHz code - although in practice it will more likely be in µHz (microhertz).

  • You measure quality in code the same way you measure quality in science, through peer review.

    In .NET development (and I'm sure in other languages/platforms) we have static code analyzers that can help by pointing you towards potential problems. But the best way to ensure good-quality code is through code reviews. These can be quick and informal, like "Can you take a look at this before I check it in?", or they can be more formal, where a group reviews a piece of code that one person wrote.

    The group reviews can turn ugly, and that needs to be kept under control, but they can also be educational: everyone gets to see what other people are doing that's right and wrong, and can hopefully clean up their own code as a result.

  • Obviously my first post was just in jest.

    While peer review isn't really a true measurement because it's subjective (like judges in the Olympics -- yecch), it is a great way to bring a team together cohesively. The best team environment I ever worked in had multi-phased code reviews (algorithm, pseudo-code, development code, production code) by at least two team members -- and not always the same ones. We strived to achieve the following:

    1) Each piece of code did one thing and one thing only

    2) Each piece of code was similar in look and feel to all other code to make maintenance easier

    3) There was never more than one copy of code that did the same thing.

    We had one rule for code review -- keep your personal feelings about another person out of the room -- be objective. We delivered the project on time and with very few defects.

  • Most of these rules successfully separate gawdawful code from adequate code. Separating adequate code from good or great code is a lot more difficult.

    How do you differentiate a technically proficient painting from a great one, or a decent piano from a great one, or a decent meal from a great one? Rules don't help so much there, but we can often sense the difference when we encounter it. Alas, the same thing happens with code.

    ...

    -- FORTRAN manual for Xerox Computers --

  • mhli (2/21/2011)


    Dennis Wagner-347763 (2/21/2011)


    WTFs/minute sounds like a good measurement. Exceptional code can be measured in days between WTFs, really good code can be measured in WTFs/hour and really poor code in WTFs/second.

    That's essentially just hertz. In theory you could then have GHz code - although in practice it will more likely be in µHz (microhertz).

    I think "hurtz" would be a better name. If you have mHurtz code then you've got problems, but GHurtz...now you're in serious pain.

    🙂

  • Most of these rules successfully separate gawdawful code from adequate code. Separating adequate code from good or great code is a lot more difficult.

    Separating good from great code is certainly more difficult, but also less necessary.

    How do you differentiate a technically proficient painting from a great one, or a decent piano from a great one, or a decent meal from a great one? Rules don't help so much there, but we can often sense the difference when we encounter it. Alas, the same thing happens with code.

    I disagree. Part of the reason that experts can distinguish between good and great is that they have a language to discuss the differences with. An amateur may know what they like and don't like but has difficulty describing the what and why of their opinion. To take the piano example, the amateur may be able to tell you that one piano sounds better to them; the expert will be able to talk about the nuances of the sound that make it sound better (harmonic range, soundboard reverberation, etc.). Certainly, experts may disagree on the weight to give various aspects of an artistic endeavor, but that doesn't invalidate the process itself.

    Likewise, there are aspects of computer code that make one piece of code better than another. The lowest-level distinction is, of course, works versus fails. Beyond that there are measures of maintainability, efficiency (Big-O notation), reusability, coupling, cohesion, documentation, etc.

    In database design, the ability to add objects or elements without having to rework large swaths of existing code is certainly an aspect of the quality of the design that the beginner often fails to recognize but that the expert does.
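
    As a small illustration of that kind of extensible design (a hypothetical sketch; the table and column names are invented, and dbo.Orders is assumed to exist): keeping status codes in a reference table means adding a new status is an INSERT rather than a rework of every query that embeds the values.

      -- Hypothetical sketch: statuses live in data, not in code.
      CREATE TABLE dbo.OrderStatus
      (
          StatusCode  char(3)     NOT NULL PRIMARY KEY,
          Description varchar(50) NOT NULL
      );

      INSERT INTO dbo.OrderStatus (StatusCode, Description)
      VALUES ('NEW', 'New order'),
             ('SHP', 'Shipped'),
             ('CAN', 'Cancelled');

      -- Existing queries join to the reference table instead of hard-coding
      -- CASE expressions, so adding ('RTN', 'Returned') later changes no code.
      -- (dbo.Orders is assumed to exist for this example.)
      SELECT o.OrderID, os.Description
      FROM dbo.Orders AS o
      JOIN dbo.OrderStatus AS os
          ON os.StatusCode = o.StatusCode;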

    --

    JimFive

  • The problem with metrics is that most companies simply don't have an official standard. Ask ten different managers (in the same department) to examine a piece of code, and you'll get ten different opinions about how well it conforms to their personal standard. Most managers keep their standards only in their heads, so the standards even evolve over time. Just like food in a restaurant, the end users are the final judge of the quality of your code.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • The problem with metrics is that most companies simply don't have an official standard. Ask ten different managers (in the same department) to examine a piece of code, and you'll get ten different opinions about how well it conforms to their personal standard.

    That's not a problem with metrics; that's the problem of a lack of metrics.

    Just like food in a restaurant, the end users are the final judge of the quality of your code.

    See my previous post. All the user can tell you is whether it meets their expectations or not. The user does not know, and cannot tell you, the quality of the code itself. 6,000 lines of unmaintainable spaghetti code is just as good to the user as 600 lines of well-documented functional programming. The problem arises next year, when the client wants a change and it takes 6 1/2 weeks to make a 30-minute change.

    --

    JimFive

    (Edited for formatting)

  • mhli (2/21/2011)


    Well, I personally use code smells to measure the quality of code. To me, one of the worst examples of bad code is when the same logic, or even the same piece of code, is present in several places.

    One of the worst things I've encountered was having the same logic in both C# and SQL, used in different circumstances. That meant that if you wanted to change something, you had to change it in at least two locations, and it was hard to ensure that all scenarios were covered.

    We do something we call an "optimisation stage" after we've got the whole project out.

    We use the code-smell principle to optimise the code.

    I think SQL is the most ignored (and maybe the most difficult) area when it comes to examining code quality. Usually there is nothing you can do.

  • If your tables and relationships are normalized, you don't use cursors, you leverage views and functions appropriately to reduce duplication without compromising performance, you use standard naming conventions, and you handle transactions and errors effectively, then that's the greater part of writing T-SQL that doesn't smell.
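
    To illustrate just the cursor point with a minimal, hypothetical sketch (the table and column names are invented): the same price adjustment written row-by-row with a cursor versus as a single set-based statement.

      -- Cursor version: the row-by-row shape this post warns against.
      -- (dbo.Products is an invented example table.)
      DECLARE @ProductID int;
      DECLARE product_cursor CURSOR LOCAL FAST_FORWARD FOR
          SELECT ProductID FROM dbo.Products WHERE Discontinued = 0;
      OPEN product_cursor;
      FETCH NEXT FROM product_cursor INTO @ProductID;
      WHILE @@FETCH_STATUS = 0
      BEGIN
          UPDATE dbo.Products
          SET ListPrice = ListPrice * 1.05
          WHERE ProductID = @ProductID;
          FETCH NEXT FROM product_cursor INTO @ProductID;
      END;
      CLOSE product_cursor;
      DEALLOCATE product_cursor;

      -- Set-based equivalent: one statement, easier to read, easier to tune.
      UPDATE dbo.Products
      SET ListPrice = ListPrice * 1.05
      WHERE Discontinued = 0;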

    The most universal problem I've seen is SQL duplication, or even objects that are obsolete and never used. The same smelly SQL statements are repeated scores of times across scores or even hundreds of stored procedures. Alternatively, if you have only a small number of views, you can quickly identify all the functional SQL and then fix a smelly problem in that one location. I've created complete data models for applications that involved fewer than a dozen tables and stored procedures, and only a few thousand lines of T-SQL source, even though I tend to use CR/LF for every column and join. It's never ceased to amaze me how some people can end up with hundreds of tables and procedures. Invariably, if I run a SQL Profiler trace or query the dynamic management views, I'll see that only a small percentage of the objects are actually used... ever.
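
    A rough sketch of that last check, assuming SQL Server 2008 or later: compare the procedures in the database against the cached execution statistics. Because sys.dm_exec_procedure_stats only covers activity since a plan was cached (roughly since the last restart), treat a missing row as a hint that an object may be unused, not as proof.

      -- Sketch: procedures in this database with no cached execution statistics.
      -- A missing row is only a hint, not proof, that the procedure is unused.
      SELECT s.name AS schema_name,
             p.name AS procedure_name
      FROM sys.procedures AS p
      JOIN sys.schemas AS s
          ON s.schema_id = p.schema_id
      LEFT JOIN sys.dm_exec_procedure_stats AS ps
          ON ps.database_id = DB_ID()
         AND ps.object_id  = p.object_id
      WHERE ps.object_id IS NULL
      ORDER BY s.name, p.name;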

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
