SQLServerCentral Editorial

Database DevOps Metrics

The DORA (DevOps Research and Assessment) organization is dedicated to helping others build software faster, at higher quality, and more efficiently. They continue to compile and publish the Accelerate State of DevOps Report every year, which is a fascinating read.

As part of the report, they have identified four key metrics that distinguish high-performing software organizations. These are divided into two areas: throughput and stability. The throughput measures are change lead time and deployment frequency; the stability measures are change fail percentage and failed deployment recovery time.
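To make those four concrete, here's a minimal sketch of how they might be computed from a deployment log. The dbo.Deployments table and its columns (CommittedAt, DeployedAt, Failed, RecoveredAt) are hypothetical, not anything DORA prescribes:

-- Hypothetical deployment log: one row per deployment
SELECT
    COUNT(*) / 30.0 AS DeploymentsPerDay,                                   -- deployment frequency
    AVG(DATEDIFF(HOUR, CommittedAt, DeployedAt)) AS ChangeLeadTimeHours,    -- change lead time
    100.0 * SUM(CASE WHEN Failed = 1 THEN 1 ELSE 0 END)
          / COUNT(*) AS ChangeFailPercentage,                               -- change fail percentage
    AVG(CASE WHEN Failed = 1
             THEN DATEDIFF(HOUR, DeployedAt, RecoveredAt)
        END) AS FailedDeploymentRecoveryHours                               -- failed deployment recovery time
FROM dbo.Deployments
WHERE DeployedAt >= DATEADD(DAY, -30, SYSUTCDATETIME());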

For a long time, as I chatted with various people doing database work, it seemed that most deployed relatively infrequently. They might deploy a couple of times a week for software changes, but database changes often went out less than once a week. There have always been people moving faster or slower, but that felt like the pace for the majority. These days, in the 2024-2025 timeframe, many people seem able to deploy database changes every week, often multiple times a week.

Lots of people have moved toward more throughput, with more frequent deployments and shorter change lead times. Most of us can't get more work out of people, so if we deploy more often, their completed work gets released sooner. Those two metrics make some sense, and I think they are good measures, but not goals. What I find is that people often need to make changes quickly, either to respond to the changing needs of their organization or to fix bugs they've introduced. I wonder what the ratio of the former to the latter is? I suspect it might be less than one, which would mean most of your deployments are fixing bugs. I don't mind deploying software quicker, but the design, modeling, and testing can't be shortened.

The stability metrics often look good for most people I speak with about deployments. I don't see a lot of failures at deployment time, since code usually compiles and deploys. It's often a day (or a week) later that someone notices the code doesn't do what they expect. Is that a deployment failure? I think not. And what's the MTTR if a problem is fixed an hour after being reported, a day after the deployment? Is the MTTR an hour, or a day plus an hour? I don't know how these metrics apply to databases, especially if data gets mangled and has to be corrected manually over hours, days, or weeks. Is that in the MTTR? Can you even track it?
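As a rough illustration of that ambiguity, the same hypothetical dbo.Deployments table, extended with a ReportedAt column, gives two very different answers depending on where you start the clock:

-- Two readings of MTTR for the same failed deployments
SELECT
    AVG(DATEDIFF(MINUTE, ReportedAt, RecoveredAt)) AS MttrFromReportMinutes,  -- the hour
    AVG(DATEDIFF(MINUTE, DeployedAt, RecoveredAt)) AS MttrFromDeployMinutes   -- the day plus an hour
FROM dbo.Deployments
WHERE Failed = 1;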

Metrics are good ways to measure your progress or health, as long as the metric doesn't become the goal. I've run into a lot of customers using these metrics to measure their development, and it helps most of them for a period of time. Whether it continues to help them improve often depends on whether they keep focusing on their goal of delivering better quality software faster, or they focus on the metrics.
