Some comments on the specific example in the article and on accountability in software development in general.

    First: The specific example. I'd say that Facebook is doing it completely wrong. If the website is disrupted and a post-deployment fix is required, the developer is NEVER the one to blame. The first to blame would be either the QA department, for missing something during testing, or management, for not allowing sufficient time for testing.

    Second: Should we then say that Facebook is doing the right thing at the wrong moment? In other words, should they, instead of assessing developers on bugs that brought the website down, assess them on bugs found during QA? That would be less of a mistake - but I think it's still a mistake, for two reasons: (a) a bug in developed code is sometimes not the developer's fault but the designer's, and (b) as someone already mentioned, it can push developers to program defensively, going over their code twenty times, running countless tests, etc. - that would surely bring down the number of bugs, but it would also cripple the developers' output.

    If you really want to measure developers, you first have to define what a good developer actually is. How will you measure my performance? By number of bugs? I can code almost bug-free if you want me to - if you don't mind that I take a few weeks for even the simplest program. By number of programs built in a week? I can give you tens, maybe hundreds, but don't expect the QA department to be happy. By lines of code? Come on, that's the worst metric of them all; I could build a macro that inserts a line break after every keyword to give you lots more lines of code, but do you really want that?
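    Just to show how trivial that last one is to game, here's a quick sketch (Python, purely illustrative - the keyword list and the sample snippet are invented, and the padded output is only meant to be counted, not compiled):

        import re

        # Invented keyword list, just for the demonstration.
        KEYWORDS = r"\b(if|else|for|while|return|def)\b"

        def inflate(source: str) -> str:
            # Insert a line break after every keyword. The result is only
            # meant to be *counted*; it inflates LOC without adding value.
            return re.sub(KEYWORDS, r"\1\n", source)

        original = "def f(x):\n    if x > 0:\n        return x\n    return -x\n"
        padded = inflate(original)

        print("original LOC:", len(original.splitlines()))  # 4
        print("padded LOC:  ", len(padded.splitlines()))     # 8

    Same program, twice the "productivity".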

    I think the best developer is the one who strikes the right balance between the two extremes: spending way too much time to minimize the chance of bugs, or going so fast that the code will be littered with them. If I spend two days on an average program, then hand it to QA and start on the next, then return to the program once QA is done and fix the three bugs they found, I am probably more effective than when I spend seven days on the same program to ensure that QA finds nothing, or when I finish it in three hours and then have to go through seven cycles of fixing bugs and returning it to QA before it's good.

    So how do you measure that? Counting the time from starting the first version until getting final signoff from QA might be a good idea, but that does not take into account that not all programs are equally complex. Also, some developers bring value to a team that would not be measured by this. How about a senior developer who spends two hours per day helping other developers solve complex issues? How about a developer who, instead of just coding what the design says, first takes the time to read it critically and ask the designer some questions? If those questions are all useless, it's a waste of time for both the designer and the developer, but I've also experienced (both as developer and as designer) that such critical questions saved a lot of time because they pointed out a flaw in the design. In that way, a developer can be very effective without ever writing a single line of code!

    Bottom line: measuring the performance of a developer is hard, maybe even impossible. Measuring the time between getting the assignment and getting signoff from QA would be a fair start (if complexity is taken into account, for instance through function point analysis), but it does not measure the other qualities a developer may have. Used as one element in an assessment, applied by a human who can also weigh those other qualities and the circumstances that may have led to unusually low or high values, it can be a good element. But only then. Automated translation of this measure into salary adjustments would, for me, be a good reason to find a different company. Not because I would be paid badly (I am quite confident that I could work the system), but because I believe that being a good developer also means being critical of the design before you start coding, helping your colleagues when they are stuck and you happen to know your way around their problem, and other things that may be even harder to measure.
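    To make that "fair start" concrete, here's what the raw measure might look like (again just a sketch, with invented names and numbers; "function points" stands in for whatever complexity estimate you actually trust):

        from dataclasses import dataclass

        @dataclass
        class Assignment:
            developer: str
            function_points: float      # complexity estimate, e.g. from FPA
            days_to_qa_signoff: float   # from assignment to final QA signoff

        def days_per_function_point(a: Assignment) -> float:
            # Lower is faster - but this is one input for a human reviewer,
            # never a dial that adjusts salaries automatically.
            return a.days_to_qa_signoff / a.function_points

        work = [
            Assignment("dev A", function_points=12, days_to_qa_signoff=6),
            Assignment("dev B", function_points=40, days_to_qa_signoff=15),
        ]
        for a in work:
            print(a.developer, round(days_per_function_point(a), 2), "days/FP")

    Note that this number says nothing about the senior developer's two hours a day of helping others, or about the design flaw caught before a single line was written - which is exactly why it can only ever be one element.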

    Apologies for the rant. I've gone much further off-topic than I intended.


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/