SQLServerCentral Editorial

Measuring Effectiveness


I've never been the guy who produced code the fastest, or the most code in a day. I can't show that I work the most hours, or even that I keep great hours; I'm often late or erratic in the times I come and go from work. I don't close tickets in consistent times, since I'm often slow to close new issues. I've never been the employee of the month.

What I have been is effective. I get things done. I leave the organization and the job better than I found them. I work to deliver value for the salary I'm paid. That isn't something you can easily measure with many of the traditional metrics people use, but there are still ways to decide that I'm a good employee. It's a little ephemeral, and sometimes uncomfortable, to have an employee who doesn't seem to fit the same model as most others, but that doesn't mean you wouldn't want me on your staff.

This article on measuring DevOps tries to explain how you can determine if your DevOps process is working, which is a similarly ephemeral way of working. When we implement DevOps, we don't have large project plans, and we aren't working toward the completion of a system. We undertake a set of work knowing that we don't have an end date. We just keep doing the work and getting things done that are needed. In that case, how can we measure whether a project was on time, on budget, and finished?

I'd argue, as does the article, that those aren't good measurements. Those are the ways we would traditionally look at work, but in software, those often aren't the things we want to examine. If we deliver software that meets some requirements, but users struggle with it or complain, is it done? Most management would call it done, just as the airlines mark a plane as leaving on time when it pushes back from the gate. Sitting on the runway for 45 minutes doesn't count against that target, even though I'd call that leaving late.

The article talks about picking things that bother a manager or team members about the software or their process. Some of these are easy, like the time to deliver a new server or recover from a failure. Others that I've found useful are the time to assemble a release or deploy software to an environment. While we can sometimes game these numbers by moving to a smaller set of changes or altering our process, adding these measurements will quickly show us whether we have a lot of overhead in our work.
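If you wanted to start capturing one of these numbers yourself, a minimal sketch in T-SQL might look like the following. The table and column names are purely illustrative, not part of any particular tool, and assume each deployment writes a start and end time somewhere:

-- Hypothetical log of deployment start and end times per environment
CREATE TABLE dbo.DeploymentLog
(
    DeploymentId int IDENTITY(1,1) PRIMARY KEY,
    ReleaseName  varchar(100) NOT NULL,
    Environment  varchar(50)  NOT NULL,
    StartTime    datetime2    NOT NULL,
    EndTime      datetime2    NULL
);

-- Average minutes to deploy per environment over the last 90 days
SELECT Environment,
       AVG(DATEDIFF(MINUTE, StartTime, EndTime)) AS AvgDeployMinutes,
       COUNT(*) AS Deployments
FROM dbo.DeploymentLog
WHERE EndTime IS NOT NULL
  AND StartTime >= DATEADD(DAY, -90, SYSUTCDATETIME())
GROUP BY Environment;

Even a crude log like this makes a trend visible: if the average starts climbing while releases are shrinking, that's overhead worth chasing.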

What about the software itself? Can I actually compare the time to deliver report A against report B? One might be much more complex than the other. That's true, but we ought to average delivery estimates out over time, and we can certainly separate the time spent writing complex query logic from the time required for formatting. In fact, it might be good to start measuring the different parts of software delivery to find out if certain people are better at some parts, or if requirements from clients are causing unnecessary delays. I know I've had formatting complaints that took more time to get right than the entire rest of the software.
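Extending the same idea, a hypothetical phase-level log (again, names invented for illustration) would let you average the logic work and the formatting work separately:

-- Hypothetical record of hours spent per phase of each work item
CREATE TABLE dbo.DeliveryPhase
(
    PhaseId    int IDENTITY(1,1) PRIMARY KEY,
    WorkItem   varchar(100) NOT NULL,
    Phase      varchar(50)  NOT NULL,  -- e.g. 'Query Logic', 'Formatting'
    HoursSpent decimal(6,2) NOT NULL
);

-- Average hours per phase, so complex logic and formatting are judged separately
SELECT Phase,
       AVG(HoursSpent) AS AvgHours,
       COUNT(DISTINCT WorkItem) AS WorkItems
FROM dbo.DeliveryPhase
GROUP BY Phase
ORDER BY AvgHours DESC;

A breakdown like this is what would surface the pattern I mentioned: formatting quietly eating more hours than the logic it decorates.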

It can be hard to develop metrics that are tied to outcomes and are truly actionable, but it's a better way to determine if your team is improving. Relying on simple, traditional metrics is lazy, and it allows for lots of argument and debate over whether a team is doing well or not. Tackling the specific items that irritate management or customers might take some work, but you'll end up with a list of things that can be targeted for improvement and show progress that actually means something.
