SQLServerCentral is supported by Redgate


Being Responsible for Code
Posted Monday, April 30, 2012 12:08 AM



Group: Administrators
Last Login: Today @ 10:51 AM
Points: 34,169, Visits: 18,317
Comments posted to this topic are about the item Being Responsible for Code

Follow me on Twitter: @way0utwest

Forum Etiquette: How to post data/code on a forum to get the best help
Post #1292273
Posted Monday, April 30, 2012 3:21 AM
Forum Newbie


Group: General Forum Members
Last Login: Wednesday, May 11, 2016 9:47 AM
Points: 2, Visits: 66
Hi, I would have thought the ‘tester’ should be more accountable for allowing bugs through than the programmer – metrics should be based on the bugs identified at the testing stage, not after release.
Post #1292338
Posted Monday, April 30, 2012 4:56 AM


Group: General Forum Members
Last Login: Thursday, September 22, 2016 7:41 AM
Points: 19, Visits: 196
A. Hughes (4/30/2012)
Hi, I would have thought the ‘tester’ should be more accountable for allowing bugs through than the programmer – metrics should be based on the bugs identified at the testing stage, not after release.

I quite agree - if a bug has got through to the live environment, there has been a failure in the QA process as a whole, not just by one individual.

Post #1292387
Posted Monday, April 30, 2012 5:13 AM
SSC Rookie


Group: General Forum Members
Last Login: Wednesday, November 11, 2015 8:08 AM
Points: 42, Visits: 90
The best way to avoid any bugs is to not write any code...

Be careful what you incentivise.
Post #1292396
Posted Monday, April 30, 2012 6:47 AM


Group: General Forum Members
Last Login: Yesterday @ 6:12 AM
Points: 1,599, Visits: 2,549
I understand the point, which is "to be accountable for your mistakes", and I admire that way of working, but for several reasons I would not work for such a company.

Over time I've seen code from several developers in SQL (and .NET in general), and under a rule like this I would be forced to be accountable for their code, and would need a full regression test for every single fix. (Even changing a line of text can break things; some software I've seen relied on text values to decide which operations to execute.)

Only a few developers code defensively, and most of the time they are pushed to implement so-so solutions that do the job but are not maintainable or, worse, have side effects that are not easily spotted.

To my eyes (it's a rough idea), I would prefer having a few highly knowledgeable people dedicated to validating whatever code is produced (which requires deep knowledge of the overall application and the technology – yes, this is rough), and having co-workers verify that what's being done actually works. (Unit testing could also be implemented.)

Have QA test specific functionality, have QA test for regressions, and have a series of automated tests that run with each build.

Finally, have the business owners test their toy.

That chain is very, very costly, but you get what you pay for in the end...
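To make the "automated tests that run with each build" step concrete, here is a minimal sketch of a build-gating regression suite. Everything in it (the `safe_divide` function, the pinned bug numbers) is a hypothetical example, not code from this thread:

```python
# Minimal sketch of an automated regression suite a build server could
# run on every commit. All names and "bugs" here are hypothetical.

def safe_divide(numerator, denominator):
    """Return numerator/denominator, or None when division is undefined."""
    if denominator == 0:
        return None
    return numerator / denominator

def run_regression_suite():
    """Each case pins down behavior that a previous bug fix established."""
    cases = [
        ((10, 2), 5.0),   # happy path
        ((7, 0), None),   # bug #1: used to raise ZeroDivisionError
        ((0, 5), 0.0),    # bug #2: used to return None for a zero numerator
    ]
    # Collect every case whose actual result differs from the pinned result.
    return [(args, expected, safe_divide(*args))
            for args, expected in cases
            if safe_divide(*args) != expected]

if __name__ == "__main__":
    # An empty failure list means the build may proceed.
    assert run_regression_suite() == [], "regression detected - fail the build"
```

The point of pinning old bugs as test cases is exactly the chain described above: once QA finds a regression, it can never silently return in a later build.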
Post #1292467
Posted Monday, April 30, 2012 6:55 AM
Mr or Mrs. 500


Group: General Forum Members
Last Login: Wednesday, September 19, 2012 8:39 AM
Points: 595, Visits: 1,226
I would hope that similar metrics are tracked of code/peer reviewers and QA/UAT as well.

Also, I would hope that success is tracked and rewarded as well as failure: X deployments without disruption should earn some sort of incentive, just as X deployment failures would send a developer through the trap door.

Converting oxygen into carbon dioxide, since 1955.

Post #1292474
Posted Monday, April 30, 2012 7:33 AM



Group: General Forum Members
Last Login: Today @ 9:36 AM
Points: 7,712, Visits: 11,063
Some comments on the specific example in the article and on accountability in software development in general.

First: The specific example. I'd say that Facebook is doing it completely wrong. If the website is disrupted and a post-deployment fix is required, the developer is NEVER to blame. The first to blame would either be the QA department for missing something during test, or management for not allowing sufficient time for testing.

Second: Should we then say that Facebook is doing the right stuff at the wrong moment? In other words, should they, instead of assessing developers on bugs that brought the website down, assess developers on bugs found during QA? That would be less of a mistake - but I think it's still a mistake, for two reasons: (a) a bug in developed code is sometimes not the developers' but the designers' fault, and (b) as someone already mentioned, it can push developers to program overly defensively, going over their code twenty times, running countless tests, etc. - that would surely bring down the number of bugs, but it would also minimize the developers' output.

If you really want to measure developers, you first have to define what a good developer actually is. How will you measure my performance? By number of bugs? I can code almost bug-free if you want - if you don't mind that I take a few weeks for even the simplest program. By number of programs built in a week? I can give you tens, maybe hundreds, but don't expect the QA department to be happy. By lines of code? Come on, that's the worst metric of them all; I could build a macro to insert a line break after every keyword to give you lots more lines of code, but do you really want that?
I think the best developer is the one who manages to strike a balance between the two extremes of spending way too much time to minimize the chance of bugs, and going so fast that the code will be littered with them. If I spend two days on an average program, then hand it to QA and start on the next, then return to the program once QA is done and fix the three bugs they found, I am probably more effective than when I spend seven days on the same program to ensure that QA finds nothing, or when I finish it in three hours and then have to go through seven cycles of bug fixing and returning it to QA before it's good.

So how do you measure that? Counting the time from starting the first version until getting final signoff from QA might be a good idea, but that does not take into account that not all programs are equally complex. Also, some developers bring value to a team that would not be measured by this. How about a senior developer who spends two hours per day helping other developers solve complex issues? How about a developer who, instead of just coding what the design says, first takes the time to read it critically and ask the designer some questions? If those questions are all useless, it's a waste of time for both the designer and the developer, but I've also experienced (both as developer and as designer) that those critical questions saved a lot of time because they pointed out a flaw in the design. In that way, a developer can be very effective without ever writing a single line of code!

Bottom line: measuring the performance of a developer is hard, maybe even impossible. Measuring the time between getting the assignment and getting signoff from QA would be a fair start (if complexity is taken into account, for instance using function point analysis), but that does not measure other qualities the developer may have. If used as one element in assessing the developer, applied by a human who is also able to understand other qualities of the developer and circumstances that may have led to unusually low or high values, it can be a good element. But only then. Automated translation of this measure into salary adjustments would, for me, be a good reason to find a different company. Not because I would be paid badly (I am quite confident that I could work the system), but because I believe that being a good developer also involves being critical of the design before you start coding, helping your colleagues when they are stuck and you happen to know your way around their problem, and other things that may be even harder to measure.
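The "time to QA signoff, adjusted for complexity" idea can be sketched in a few lines. This is purely illustrative - the function name, the day counts, and the function-point weights are all made up, and real function point analysis is far more involved:

```python
# Sketch of the metric discussed above: elapsed time from assignment to
# QA signoff, normalized by function points so complex work isn't
# penalized. All names and numbers below are hypothetical.

def cycle_time_per_fp(assigned_day, signoff_day, function_points):
    """Calendar days spent per function point delivered (lower is better)."""
    if function_points <= 0:
        raise ValueError("function_points must be positive")
    return (signoff_day - assigned_day) / function_points

# Two hypothetical tasks: a small fix and a complex feature.
small_fix = cycle_time_per_fp(0, 2, 1)     # 2 days for 1 FP
big_feature = cycle_time_per_fp(0, 10, 8)  # 10 days for 8 FP

# Raw elapsed time favors the small fix (2 days vs. 10), but per function
# point the complex feature was delivered more efficiently.
```

Even this toy version shows why a human still has to interpret the number: it captures neither the mentoring time nor the design reviews described above.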

Apologies for the rant. I've gone much further off-topic than I intended.

Hugo Kornelis, SQL Server MVP
Visit my SQL Server blog:
Post #1292505
Posted Monday, April 30, 2012 7:35 AM
Forum Newbie


Group: General Forum Members
Last Login: Saturday, June 6, 2015 9:29 AM
Points: 5, Visits: 29

You said: "Be careful what you incentivise."

You beat me to the punch on that comment.

Suppose you have two developers: one writes 99% of the new and innovative code, while the other basically cuts and pastes his way to a paycheck, never writing anything new himself.

Who will release a bug first?

Who is more valuable to the organization?
Post #1292508
Posted Monday, April 30, 2012 7:53 AM
SSChasing Mays


Group: General Forum Members
Last Login: Friday, September 23, 2016 9:34 AM
Points: 652, Visits: 1,122
I want to first comment on the posts I have seen, then your article.

First, I understand what everyone is saying (so far) about this leading to lower output, and about testing. I have no argument with these comments at all.

As a confirmation of how this works, I once worked with someone who wrote code for "the phone company". Allegedly he would write about 10 lines of code per year or month, I don't recall which. Whatever it was, it was extremely low. He explained that the review process was so extensive it would take 6 months for any change to get approved! Sounds wrong, until you think about the impact of bringing down the phone system for the country. Even now, with a few companies running it instead of just one as it was decades ago, I would think a change in one could cause issues in another company pretty easily. Right or wrong, when there are incentives to do things perfectly, and punishment for any errors, the quantity of output will go down significantly.

Second, I like the idea of holding people accountable for their quality. Won't happen. Companies don't want to do so. There are a million reasons; a few of them are:

If you hold the worker responsible, do you hold the CEO responsible?
Other managers?
Since managers are tasked with far more work than they should be, they don't have time to look at who is at fault, and would simply blame the easiest scapegoat.
Since employees would know about it, they would do everything they could to "hide" in the system. Wait, that happens now!
Unfair managers would find a way to blame those employees they don't like.

One of the biggest issues in the workplace today is that the best employees are not rewarded; the worst employees get the same raises and bonuses as everyone else. The reason is that companies are afraid of lawsuits and don't want to manage their workforce. They may claim they do; the evidence shows they don't.

So I am in favor of doing so, as soon as someone comes up with a fair way to evaluate quality that also accounts for everything else required of an employee, and that holds managers accountable for NOT doing their jobs, with penalties at least as harsh as those for non-managers.

Post #1292527
Posted Monday, April 30, 2012 8:00 AM

Old Hand


Group: General Forum Members
Last Login: Yesterday @ 9:55 AM
Points: 397, Visits: 1,135
I guess FB doesn't have any sort of QA or test methods in place. We used to have a programmer here who thought his stuff was fine because "it compiled" (his words).

Post #1292532